Hi,
where can I find the source for the VideoCapture#grab implementation?
I'd like to know whether I can get the encoded frames too, instead of having to retrieve() a decoded version.
Why: I need to store a certain, large number of frames (/question/112096/videocapture-frames-videocapture-possible/) and read them back one by one. I can't find a way to simply put specific frames of one VideoCapture into another VideoCapture, and I don't want to write them to files first only to read them back a bit later.
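For what it's worth, grab() only advances the stream and reports success; as far as I can tell from the sources, the public VideoCapture API never hands out the encoded packet, so whatever gets stored has to be the decoded frame from retrieve(). A minimal C++ sketch of buffering those decoded frames in memory and reading them back one by one (the camera index and buffer size are placeholders):
#include <deque>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);               // placeholder source
    std::deque<cv::Mat> buffer;            // decoded frames kept in memory, no files involved
    const size_t maxFrames = 500;          // placeholder capacity

    while (buffer.size() < maxFrames && cap.grab())
    {
        cv::Mat frame;
        if (!cap.retrieve(frame))
            break;
        buffer.push_back(frame.clone());   // clone: the capture may reuse its internal buffer
    }

    // later: read the stored frames back one by one
    for (size_t i = 0; i < buffer.size(); ++i)
    {
        const cv::Mat& f = buffer[i];
        // process f ...
    }
    return 0;
}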
↧
VideoCapture#grab Source
↧
Videocapture suddenly blocks on reading a frame from IP camera
I'm using OpenCV 3. Grabbing a frame with VideoCapture from an IP camera blocks if the camera gets disconnected from the network or there is an issue with a frame. I first check videoCapture.isOpened(). If it is open, I tried these methods, but nothing seems to work:
**1) grabber >> frame**
if (grabber.isOpened()) {
    grabber >> frame;
    // DO SOMETHING WITH FRAME
}
**2) grab/retrieve**
if (grabber.isOpened()) {
    if (!grabber.grab()) {
        cout << "failed to grab from camera" << endl;
    } else {
        if (grabber.retrieve(frame, 0)) {
            // DO SOMETHING WITH FRAME
        } else {
            // SHOW ERROR
        }
    }
}
**3) read**
if (grabber.isOpened()) {
    if (!grabber.read(frame)) {
        cout << "Unable to retrieve frame from video stream." << endl;
    }
    else {
        // DO SOMETHING WITH FRAME
    }
}
With all of the previous options the stream gets stuck at some point while grabbing a frame; each call blocks and never returns or reports an error.
Do you know of a way to handle or solve this? Maybe some validation, a try/catch, or a timer?
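Since none of the capture calls above honours a timeout, one common workaround is to run the blocking call on a worker thread and give up after a deadline. A minimal sketch of that idea (the 5-second timeout and the RTSP URL are placeholders; note that a timed-out read cannot be interrupted, so the capture object is considered lost and has to be re-created on reconnect):
#include <chrono>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
#include <opencv2/opencv.hpp>

// Try to read one frame, giving up after the timeout. If it times out, the
// worker thread is abandoned (a blocked read cannot be interrupted), so the
// caller should drop this capture and create a fresh one to retry.
bool readWithTimeout(std::shared_ptr<cv::VideoCapture> cap, cv::Mat& frame,
                     std::chrono::milliseconds timeout)
{
    auto buf     = std::make_shared<cv::Mat>();
    auto promise = std::make_shared<std::promise<bool>>();
    std::future<bool> done = promise->get_future();

    std::thread([cap, buf, promise]() {
        promise->set_value(cap->read(*buf));     // the call that can hang forever
    }).detach();

    if (done.wait_for(timeout) != std::future_status::ready || !done.get())
        return false;

    frame = *buf;
    return true;
}

int main()
{
    auto grabber = std::make_shared<cv::VideoCapture>("rtsp://example/stream"); // placeholder URL
    cv::Mat frame;
    while (grabber->isOpened())
    {
        if (!readWithTimeout(grabber, frame, std::chrono::seconds(5)))
        {
            std::cout << "no frame within 5 s, assuming the camera is gone" << std::endl;
            break;   // re-create the VideoCapture here if you want to retry
        }
        // DO SOMETHING WITH FRAME
    }
    return 0;
}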
↧
RTSP communication problem
Hi,
This is my first question in this forum, after searching for a solution for a long time.
I'm using Python 2.7 with OpenCV 2.4.13 (I already tried 3.1) and I can't open streams. I already solved the ffmpeg problem (DLL) and successfully tested the local camera and then a local video file.
Could anyone help me? My code follows below:
PS: Windows 10, x86
RTSP link working (tried in the VLC player)
ffmpeg working (tried to run a video in the code locally)
import cv2, platform
#import numpy as np

cam = "http://192.168.11.146:81/videostream.cgi?rate=0&user=admin&pwd=888888"
#cam = 0 # Use local webcam.

cap = cv2.VideoCapture(cam)
if not cap.isOpened():
    print("not opened")

while(True):
    # Capture frame-by-frame
    ret, current_frame = cap.read()
    if type(current_frame) == type(None):
        print("!!! Couldn't read frame!")
        break

    # Display the resulting frame
    cv2.imshow('frame', current_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release the capture
cap.release()
cv2.destroyAllWindows()
When I try to run it, I get:
C:\Python27\python.exe C:/Users/brleme/PycharmProjects/opencv/main.py
not opened
!!! Couldn't read frame!
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
Process finished with exit code 0
image of the camera working in the browser

kind regards,
Bruno
↧
Videocapture grab function does not return (blocking call) when IP camera is disconnected.
The problem I am facing is that the grab function of the VideoCapture object **gets stuck** when my IP cam is disconnected. Ideally, the grab() function or the isOpened() function should return false after some timeout on IP-cam failure, but instead isOpened() returns true and grab() never returns (**stuck at grabbing**).
I am using the Python 2.7 interpreter and I have tried this on **Ubuntu 14.04** and **Windows 10**, with the prebuilt OpenCV package and with manually **compiled OpenCV 2.4.13 and 3.1.0 with ffmpeg**. Same result on all versions.
I searched different forums and dug into the source files to pinpoint the file, function and line where the grab call gets stuck. The function **CvCapture_FFMPEG::grabFrame()** (*sources/modules/highgui/src/cap_ffmpeg_impl.h*) is stuck in its while loop when the IP camera is disconnected, and somehow the timeout interrupt is not working in this case. I have checked this by adding log statements at different points in the CvCapture_FFMPEG::grabFrame() function.
So is this a problem with the ffmpeg libraries or with the CvCapture_FFMPEG::grabFrame() function, or am I missing something?
Note: This **problem does not occur** when I compile **OpenCV with GStreamer** (either 0.10 or 1.0); there, the grab() function returns false on IP camera disconnect. I have tested this with OpenCV 2 and 3 on Ubuntu 14.04 and Windows 10.
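Since the GStreamer backend behaves correctly here, one option is to hand VideoCapture an explicit GStreamer pipeline string instead of the plain rtsp:// URL, so that the GStreamer backend is the one that gets used; this only works when OpenCV was built with GStreamer support, and the camera address and H.264 elements below are placeholders. A C++ sketch:
#include <opencv2/opencv.hpp>

int main()
{
    // Requires OpenCV built WITH_GSTREAMER; the trailing appsink is what
    // lets VideoCapture pull the decoded frames out of the pipeline.
    cv::VideoCapture cap(
        "rtspsrc location=rtsp://192.168.0.10:554/stream ! "
        "rtph264depay ! avdec_h264 ! videoconvert ! appsink");

    cv::Mat frame;
    while (cap.grab() && cap.retrieve(frame))
    {
        // process frame ...
    }
    // as described above, the GStreamer backend makes grab() return false
    // here when the camera disconnects, instead of blocking forever
    return 0;
}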
↧
During startup, an error appears in the console
Hi.
I wrote the program code below. The compiler does not report any errors, but at runtime the console shows the following error:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_M_construct null not valid
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
Here is my program code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include
#include
using namespace std;
using namespace cv;
int g_slider_position = 0;
int g_run = 1, g_dontset = 0;
//start out in single step mode
VideoCapture g_cap;
void onTrackbarSlide( int pos, void *) {
g_cap.set( CV_CAP_PROP_POS_FRAMES, pos );
if( !g_dontset )
g_run = 1;
g_dontset = 0;
}
int main( int argc, char** argv )
{
namedWindow( "Example2_4", WINDOW_AUTOSIZE );
g_cap.open( string(argv[1]) );
int frames = (int) g_cap.get(CV_CAP_PROP_FRAME_COUNT);
int tmpw = (int) g_cap.get(CV_CAP_PROP_FRAME_WIDTH);
int tmph = (int) g_cap.get(CV_CAP_PROP_FRAME_HEIGHT);
cout << "Video has " << frames << " frames of dimensions(" << tmpw << ", " << tmph << ")." << endl;
createTrackbar("Position", "Example2_4", &g_slider_position, frames, onTrackbarSlide);
Mat frame;
while(1) {
if( g_run != 0 ) {
g_cap >> frame;
if(!frame.data) break;
int current_pos = (int)g_cap.get(CV_CAP_PROP_POS_FRAMES);
g_dontset = 1;
setTrackbarPos("Position", "Example2_4", current_pos);
imshow( "Example2_4", frame );
g_run-=1;
}
char c = (char) waitKey(10);
if(c == 's') // single step
{g_run = 1;
cout << "Single step, run = " << g_run << endl;
}
if(c == 'r') // run mode
{g_run = -1;
cout << "Run mode, run = " << g_run <
↧
cvQueryFrame returns rotated image
I am using a MacBook Pro, OS X version 10.12.1, with an external USB web camera. If I look at the image in Photo Booth with the external web camera, the image is correct. However, when I capture frames from the same web cam and display them, the image shows up rotated 90 degrees clockwise. What am I doing wrong?
Here is the code :
CvCapture *capture = 0;
capture = cvCaptureFromCAM(1); // 1 = web cam , -1 = autodetect , 0 = default
if (!capture)
{
    fprintf(stderr, "!!! Cannot open/initialize webcam!\n");
    return;
}

// Create a window for the video
cvNamedWindow("Frame Captured", CV_WINDOW_AUTOSIZE);

IplImage* frame = 0;
char key = 0;
while (key != 27) // ESC
{
    frame = cvQueryFrame(capture);
    if (!frame)
    {
        fprintf(stderr, "!!! cvQueryFrame failed!\n");
        break;
    }
    // Display frame
    cvShowImage("Frame Captured", frame);
    // Exit when user presses ESC
    key = cvWaitKey(10);
}

// Free memory
cvDestroyWindow("Frame Captured");
cvReleaseCapture(&capture);
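If the capture really does deliver frames rotated 90 degrees clockwise, one workaround is simply to rotate each frame back before displaying it; a C++-API sketch, assuming the rotation is exactly 90 degrees clockwise as described:
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(1);                 // same external-camera index as above
    cv::Mat frame, upright;
    while (cap.read(frame))
    {
        // undo a 90-degree clockwise rotation: transpose, then flip around the x-axis
        cv::transpose(frame, upright);
        cv::flip(upright, upright, 0);
        cv::imshow("Frame Captured", upright);
        if ((cv::waitKey(10) & 0xFF) == 27)  // ESC
            break;
    }
    return 0;
}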
↧
libraries list / locator ?
For whatever reason, none of the OpenCV sample code identifies the libraries to link against.
This seems to be a common issue for beginners like me, leading to "undefined symbols" errors and a frustrating search for the library to link to.
For example - **what is a recommended way to find the library for VideoCapture?**
PS: I am using GCC in Eclipse and the linked libraries seem to be of the "static" type. I'm not really sure how to tell the linker to use a dynamic library, or whether that is even an option. It seems that, with CPUs having access to GBs of memory, agonizing over DLLs versus static libraries is rather anachronistic. Just saying.
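For reference, cv::VideoCapture lives in the opencv_videoio module on OpenCV 3.x (on 2.4 it was part of opencv_highgui), so that is the library to hand to the linker. On a typical Linux install with pkg-config you can also let pkg-config emit the whole flag set instead of hunting for names; both lines below are sketches and the exact module names and paths depend on your install:
g++ main.cpp $(pkg-config --cflags --libs opencv)
g++ main.cpp -lopencv_core -lopencv_videoio -lopencv_highgui
In Eclipse the same module names go into the linker's libraries list in the project properties; when both static and shared versions are on the search path, GCC links the shared one by default.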
↧
Writing a VideoCapture driver
Hi,
I need to write my own VideoCapture class (Windows 10, VC 2015 and OpenCV 3.1-dev).
I wrote something like this:
class MyClass : public VideoCapture {
....
Now I want to write the retrieve method. I wrote:
bool MyClass::retrieve(OutputArray image, int flag)
{
    image.create(288, 384, CV_16UC1);
    if (!image.isContinuous())
    {
        return false;
    }
    char c = 1;
    irFile.Write(&c, 1);
    int nb = irFile.Read(image.getMat().ptr(0), 384 * 288 * 2);
    return nb == 384 * 288 * 2;
}
Sometimes, after 5 s or 1 min, I get an exception when I run my program. The error occurs when I copy the grabbed frame into a UMat (UMATFRAME):
frame.copyTo(UMATFRAME);
wxOpenCVMain.exe!cv::error(const cv::Exception & exc) Ligne 662 C++
wxOpenCVMain.exe!cv::error(int _code, const cv::String & _err, const char * _func, const char * _file, int _line) Ligne 666 C++
wxOpenCVMain.exe!cv::ocl::OpenCLAllocator::deallocate(cv::UMatData * u) Ligne 4528 C++
wxOpenCVMain.exe!cv::UMat::deallocate() Ligne 410 C++
wxOpenCVMain.exe!cv::UMat::release() Ligne 3493 C++
wxOpenCVMain.exe!cv::UMat::~UMat() Ligne 402 C++
[Code externe]
wxOpenCVMain.exe!cv::error(const cv::Exception & exc) Ligne 662 C++
wxOpenCVMain.exe!cv::error(int _code, const cv::String & _err, const char * _func, const char * _file, int _line) Ligne 666 C++
wxOpenCVMain.exe!cv::ocl::OpenCLAllocator::upload(cv::UMatData * u, const void * srcptr, int dims, const unsigned __int64 * sz, const unsigned __int64 * dstofs, const unsigned __int64 * dststep, const unsigned __int64 * srcstep) Ligne 5036 C++
wxOpenCVMain.exe!cv::Mat::copyTo(const cv::_OutputArray & _dst) Ligne 280 C++
Ideas for solving this problem are welcome.
↧
Netcat stream on /dev/stdin not working with OpenCV 3.1.0-dev on Ubuntu 16.04
Hello to everyone.
I've run into a "strange" problem while trying to stream video with netcat from my Raspberry Pi (using the Pi Camera).
The problem is the following:
If I use OpenCV 2.4.13, everything is okay and I'm able to see on my laptop the video coming from the Raspberry Pi with acceptable latency (~100 ms), simply using:
ON RASPBERRY PI:
raspivid -n -t 0 -w 800 -h 600 -fps 30 -o - | nc 145.94.195.184 5000
ON LAPTOP
nc -l -p 5000 | python testVideo.py
Reading from /dev/stdin with the "VideoCapture" function:
cap = cv2.VideoCapture("/dev/stdin")
But if I use OpenCV 3.1.0-dev (installed on another Ubuntu 16.04 partition), nothing works and I get the following errors:
Unable to stop the stream: Inappropriate ioctl for device
GStreamer: Error opening bin: Unrecoverable syntax error while parsing pipeline /dev/stdin
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/vgarofano/opencv/modules/imgproc/src/color.cpp, line 9748
Traceback (most recent call last):
File "testVideo.py", line 22, in
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /home/vgarofano/opencv/modules/imgproc/src/color.cpp:9748: error: (-215) scn == 3 || scn == 4 in function cvtColor
The code I am using is a simple "read the video and show it":
import numpy as np
import cv2

cap = cv2.VideoCapture("/dev/stdin")

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
How is it possible that the same code works perfectly with OpenCV 2.4.13 while it gives errors with OpenCV 3.1.0-dev?
I think something is different in the VideoCapture function, but I am not able to figure out how to fix it. Could someone who has already faced this type of problem help me? I have tried almost everything found on this forum, but nothing changed.
Remarks: I need to use OpenCV 3.1.0-dev because, with the opencv_contrib package, I want to use the ArUco library (already imported correctly in Python) for some marker-detection applications. Another constraint is that I need to do it in Python, because I am developing a GUI for a ROV using pygame (for other applications).
By the way, the final goal is to stream from the Raspberry camera over a local network, with acceptable latency and the possibility to process the video with OpenCV. So if you also have other suggestions on how to implement it, that would be very helpful as well.
↧
How does the cv2.VideoCapture class work with gstreamer pipelines?
Today I found out that you can use a gstreamer pipeline-string in the constructor of the VideoCapture class (see http://stackoverflow.com/a/23795492), like this:
cv2.VideoCapture("autovideosrc ! appsink")
(in python 2.7, OpenCV 3.1.0, Gstreamer 1.4.3)
And this works like a charm, apart from a few assertions failing for Gstreamer, with no apparent ramifications.
Sadly, I did not find any documentation on this kind of usage. The trailing "! appsink" is mandatory, I assume? Is the data format that flows into 'appsink' arbitrary, i.e. as long as it is a valid pipeline, can VideoCapture work with it?
A disclaimer: I'm pretty new to cv2 - in fact, today I used it for the first time.
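For what it's worth, the same pipeline-string form works from C++ too when OpenCV is built with GStreamer, and the pipeline does have to end in an appsink element, because that is what the backend attaches to in order to pull buffers; putting a videoconvert right before it is the usual way to make sure the negotiated format is something (BGR) that can be handed out as a Mat. A sketch, with the source element as a placeholder:
#include <opencv2/opencv.hpp>

int main()
{
    // The string is handed to GStreamer's pipeline parser; the final appsink
    // is where the OpenCV backend pulls the decoded buffers from.
    cv::VideoCapture cap("videotestsrc ! videoconvert ! appsink");   // placeholder source

    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("pipeline", frame);
        if ((cv::waitKey(1) & 0xFF) == 'q')
            break;
    }
    return 0;
}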
↧
Unable to use VideoCapture to access axis m1013 (Java)
Hello,
I am trying to use VideoCapture in Java to access the stream from an Axis M1013. The code does not return any errors, but it will not confirm that the camera stream has been opened either. I tried using it to grab a single JPEG from the stream, rather than accessing the video, but this does not seem to work either; vc.isOpened() does not return true. My code as of right now is:
package camTest;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

public class CamTestMain {

    public static void main(String[] args) {
        VideoCapture vc;
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        vc = new VideoCapture();
        try
        {
            vc.open("http://axis-camera-223-catapult.lan/mjpg/video.mjpg");
            System.out.println("no err");
            Mat frame = new Mat();
            vc.read(frame);
            Highgui.imwrite("C:/dmp.jpg", frame);
        } catch (Exception e)
        {
            System.out.println("error opening cam. details: ");
            e.printStackTrace();
        }
        if (vc.isOpened())
            System.out.println("cam opened");
    }
}
I'm using OpenCV version 2.4.13 and JDK 8u101. I have verified that the resources exist through a browser using the URL HTTP:///mjpg/video.mjpg. I have installed ffmpeg, but I don't know of any way to verify that it works properly or is installed correctly. I can also access the camera configuration through a browser, and get a JPEG image through HTTP:///jpg/image.jpg. Any help would be much appreciated, as I've been working at this for quite some time.
Thanks,
JT
↧
opencv VideoCapture property
Hi,
Is there any API in OpenCV (or any programmatic way) to get the range of a particular property of the VideoCapture class, such as exposure or brightness?
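As far as I can tell there is no OpenCV API that reports the valid range of a capture property; get()/set() only deal in current values, and the range stays inside the backend/driver (tools such as v4l2-ctl or a DirectShow property page can show it). A minimal sketch of querying the current values, using the OpenCV 3.x enum names (on 2.4 the same constants are spelled CV_CAP_PROP_BRIGHTNESS, etc.):
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);   // placeholder camera index
    if (!cap.isOpened())
        return -1;

    // get() returns the current value, or 0 when the backend does not support
    // the property; it does not expose the allowed min/max range.
    std::cout << "brightness: " << cap.get(cv::CAP_PROP_BRIGHTNESS) << std::endl;
    std::cout << "exposure:   " << cap.get(cv::CAP_PROP_EXPOSURE)   << std::endl;
    return 0;
}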
↧
Can OpenCV 3.1.0 be set to capture an RTSP stream over UDP?
Using OpenCV for Java, I've been trying to connect to an RTSP server that will transport video over UDP. The code looks like:
VideoCapture cam = new VideoCapture("rtsp://localhost:8554");
OpenCV sends a setup request to the RTSP server that looks like:
SETUP 127.0.0.1:8554/trackID=1 RTSP/1.0
Transport: RTP/AVP/TCP;unicast;interleaved=0-1
CSeq: 3
The server has to send data over UDP not TCP. I've tried passing in "rtsp://localhost:8554?udp" but that doesn't change the request sent. If I use OpenCV 3.0.0 the setup request that's sent looks like:
SETUP 127.0.0.1:8554/trackID=1 RTSP/1.0
Transport: RTP/AVP/UDP;unicast;client_port=744-7445
CSeq: 3
However, OpenCV 3.0.0 doesn't seem to have VideoWriter while OpenCV 3.1.0 does.
The discussion for a bug report, [RTSP over TCP with cvCreateFileCapture not working (Bug #2235)](http://code.opencv.org/issues/2235), mentions that a modification introduced in OpenCV 2.4.11 and 3.1.0 forces VideoCapture to default to TCP for transporting media. Is transport over TCP hard-coded, or can VideoCapture somehow be set to use UDP?
Also, I tried having the RTSP server send "400 Bad Request" as a response to the TCP transport request, and OpenCV didn't send a follow up request for UDP. It just reported an error:
> warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:578)
↧
Can you read an H265/HEVC file using VideoCapture, with OpenCV 3.1?
I added the x265 codec to ffmpeg in an OpenCV 3.1 environment.
I am trying to read a file with VideoCapture, but when I look at the frame there are only 8 bits of RGB information per channel. Since it is an H265 file, I was expecting to get 10 bits per channel.
Could you tell me whether my method or procedure for reading H265 is wrong?
----------
[code]
VideoCapture video;
video.open(filename);
if (!video.isOpened())
    return -1;
video.get(CAP_PROP_POS_FRAMES);
cv::Mat frame(size, size, CV_64FC3);
video >> frame;
if (frame.empty())
    return -1;
**// video.get FOURCC is "hvc1"
// video.get FORMAT is 0
// frame.type() = 16 (CV_8UC3)**
↧
URGENT: Java VideoCapture is open, but camera.read(frame) is always false
Hey All:
The Java code below is supposed to take a snapshot by reading a frame from the connected camera. However, although **camera.isOpened()** is true, **camera.read(frame)** is always false. I've followed several walkthroughs from Stack Overflow and posts from here, including waiting 11 s for the **VideoCapture camera** to open, and inserting a **camera.read(frame)** before the "if(camera.read(frame))", because one posted solution stated that the first camera.read(frame) is always false. Because of the way the code is written, it loops the if-else block until terminated from the console, so the ---Output--- below shows the first few loops.
Despite that, mine is always false. I added an else block within the while(true) body to check whether the camera is null, but it is not (see the output below). The camera is OpenCV compatible as well, but nothing has worked so far. Any suggestions help; it's a rare case, please help. More details are provided at the bottom. Thank you!
---------Java Source Code--------
import org.opencv.core.*;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

public class VideoCap {

    public static void main(String args[]) throws InterruptedException {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture camera = new VideoCapture(0); // Tried -1, opened options. But same output
        //Thread.sleep(11000);
        System.out.println("VidCap isOpened =" + camera.isOpened());
        System.out.println("Frame Width: " + camera.get(Highgui.CV_CAP_PROP_FRAME_WIDTH));
        System.out.println("Frame Height: " + camera.get(Highgui.CV_CAP_PROP_FRAME_HEIGHT));
        if (!camera.isOpened()) {
            System.out.println("Error");
        }
        else {
            Mat frame = new Mat();
            //camera.read(frame); in case the first camera.read() is false, as suggested in a post
            System.out.println("In progress...");
            while (true) {
                // Print properties of the camera to check for null values.
                System.out.println("frame = " + frame);
                System.out.println("camera.read(frame) = " + camera.read(frame));
                System.out.println("is camera still open? " + camera.isOpened());
                // Always returns false, and prints the else block unlike the youtube video
                // (see --Sources and Equipment--)
                if (camera.read(frame)) {
                    System.out.println("Frame Obtained");
                    System.out.println("Captured Frame Width " +
                        frame.width() + " Height " + frame.height());
                    Highgui.imwrite("camera.jpg", frame);
                    System.out.println("OK");
                    break;
                }
                else
                    System.out.println("Else cont... camera read: " + camera.read(frame));
            }
        }
        camera.release();
    }
}
-------------Output-------------
VidCap isOpened =true
Frame Width: 640.0
Frame Height: 480.0
In progress...
frame = Mat [ 0*0*CV_8UC1, isCont=false, isSubmat=false, nativeObj=0x26ae6d0, dataAddr=0x0 ]
camera.read(frame) = false
is camera still open? true
Else cont... camera read: false
frame = Mat [ 0*0*CV_8UC1, isCont=false, isSubmat=false, nativeObj=0x26ae6d0, dataAddr=0x0 ]
camera.read(frame) = false
is camera still open? true
Else cont... camera read: false
------Sources & Equipment------
*YouTube video of the code: "How to install OpenCV and use it with Java and configure it with Eclipse?" by Taha Emara
*OpenCV installation: opencv-java-tutorials.readthedocs.io
*Camera from Amazon: ELP Mini Aluminum Black Case Usb Security Camera With 170degree Wide Angle Fisheye Lens And 3Meters USB Cable For Home Security And Machine Vision System
*Computer: Windows 10 64bit, 4G RAM
===================
↧
OpenCV 3 VideoCapture doesn't work
I have a problem with the OpenCV 3 Java wrapper.
When I use VideoCapture, camera.grab() always returns false. I have seen several threads on the internet about this problem. I managed to get OpenCV 2.4 working, but not version 3.
My environment:
- windows 10 (64b)
- java 8u51 (32b)
- eclipse mars (32b)
So I tried the following. Environment setup:
- Set the Windows PATH: D:\Programs\opencv3x\build\x86\vc12\bin
- Add opencv_ffmpeg to D:\Programs\opencv3x\build\x86\vc12\bin (in OpenCV 3, this lib is already there with the right name: opencv_ffmpeg300.dll).
Dev environment, in the Eclipse project:
- add opencv-300.jar
- set the native lib to D:/Programs/opencv3x/build/java/x86
- With this configuration I can use OpenCV 3 without problems... but I can't decode video files!
Does anyone have a solution for this? Thanks.
↧
cv::VideoCapture::open(file) causes fatal memory error
Hello. After upgrading Ubuntu to Trusty, a simple video-loading program started producing a fatal memory error. The code just opens a video file:
#include <string>
#include <opencv2/highgui/highgui.hpp>

int main(int argc, char** argv)
{
    std::string path = "sample/vout2l.avi";
    cv::VideoCapture vin(path);
    return 0;
}
Executing this causes a fatal memory error:
*** Error in `./cv2-videoread.out': free(): invalid next size (fast): 0x000000000217b590 ***
Abort (core dumped)
I compiled this code with OpenCV 2.4.8 and 2.4.13; both cases finished with the above error.
The backtrace by gdb was:
#0 0x00007fc80d659c37 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007fc80d65d028 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007fc80d6962a4 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007fc80d6a255e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00007fc80d6a42ef in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#5 0x00007fc80d6a5ba4 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x00007fc80d6a77d2 in posix_memalign () from /lib/x86_64-linux-gnu/libc.so.6
#7 0x00007fc8092d10fe in av_malloc () from /usr/lib/x86_64-linux-gnu/libavutil.so.52
#8 0x00007fc809512708 in ?? () from /usr/lib/x86_64-linux-gnu/libavformat.so.54
#9 0x00007fc8095139b3 in avio_open2 ()
from /usr/lib/x86_64-linux-gnu/libavformat.so.54
#10 0x00007fc8095b69e3 in avformat_open_input ()
from /usr/lib/x86_64-linux-gnu/libavformat.so.54
#11 0x00007fc80df30584 in CvCapture_FFMPEG::open(char const*) ()
from /usr/lib/libopencv_highgui.so.2.4
#12 0x00007fc80df30719 in cvCreateFileCapture_FFMPEG ()
from /usr/lib/libopencv_highgui.so.2.4
#13 0x00007fc80df32ac9 in cvCreateFileCapture_FFMPEG_proxy(char const*) ()
from /usr/lib/libopencv_highgui.so.2.4
#14 0x00007fc80df1ad89 in cvCreateFileCapture ()
from /usr/lib/libopencv_highgui.so.2.4
#15 0x00007fc80df1b045 in cv::VideoCapture::open(std::string const&) ()
from /usr/lib/libopencv_highgui.so.2.4
#16 0x00000000004008a2 in main (argc=, argv=)
at cv2-videoread.cpp:32
It seemed that the issue was caused by libavformat and/or libavutil, so I reinstalled them but it didn't work.
If you have any ideas on how to fix this issue or investigate it further, please let me know. I've been struggling with it for several months.
Many thanks.
↧
I want to capture video from a webcam, but the window disappears after capturing one frame
Hi. I'm trying to capture video from my webcam via OpenCV C++. The program works fine, but the window closes after displaying just one frame. I've tried Python examples on my computer as well, and they work as advertised. Help please.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture stream1(0);   // 0 is the id of the video device; 0 if you have only one camera.

    if (!stream1.isOpened())   // check if the video device has been initialised
        cout << "cannot open camera";

    // unconditional loop
    while (true)
    {
        Mat cameraFrame;
        stream1.read(cameraFrame);
        imshow("cam", cameraFrame);
        if (waitKey(0) >= 0)
            break;
    }
    return 0;
}
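One thing to note about the snippet above: waitKey(0) blocks until a key is pressed, and any key then satisfies the ">= 0" test and breaks the loop, which could explain the "one frame and then the window is gone" behaviour. A sketch of the usual pattern, with a ~30 ms delay per frame and Esc as the only exit key:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture stream1(0);
    if (!stream1.isOpened())
    {
        cout << "cannot open camera" << endl;
        return -1;
    }
    while (true)
    {
        Mat cameraFrame;
        if (!stream1.read(cameraFrame))      // also stop if the camera stops delivering frames
            break;
        imshow("cam", cameraFrame);
        if ((waitKey(30) & 0xFF) == 27)      // ~30 ms per frame; quit only on Esc
            break;
    }
    return 0;
}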
↧
How to use SIFT with video
I want to run SIFT (OpenCV 2.9 version), but not only on one frame.
I want to use my webcam live and see the result on the video.
How do I change my code?
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>
using namespace cv;
int main()
{
    Mat img = imread("C:\\Users\\user\\Documents\\Visual Studio 2015\\Projects\\yakir-baby-analistic\\yakir1\\255.jpg", CV_LOAD_IMAGE_COLOR); // Read the file
    SIFT sift;
    std::vector<KeyPoint> key_points;
    Mat descriptors;
    sift(img, Mat(), key_points, descriptors);

    Mat output_img;
    drawKeypoints(img, key_points, output_img);

    namedWindow("Image");
    imshow("Image", output_img);
    waitKey(0);
    destroyWindow("Image");
    return 0;
}
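A sketch of the same detection wrapped in a webcam loop, using the OpenCV 2.4 nonfree API that the snippet above already relies on (camera index 0 is a placeholder for the default webcam):
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0);                    // placeholder: default webcam
    if (!cap.isOpened())
        return -1;

    SIFT sift;
    Mat frame, descriptors, output_img;
    std::vector<KeyPoint> key_points;

    while (cap.read(frame))
    {
        key_points.clear();
        sift(frame, Mat(), key_points, descriptors);   // detect + compute on the live frame
        drawKeypoints(frame, key_points, output_img);
        imshow("SIFT", output_img);
        if ((waitKey(1) & 0xFF) == 27)      // Esc to quit
            break;
    }
    return 0;
}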
↧
VideoCapture - passed invalid camera ID
VideoCapture cap(0); // open the default camera
if (!cap.isOpened()) // check if we succeeded
    return -1;
The above snippet fails if an invalid parameter is passed to the constructor.
The return that follows exits the program.
How do I validate the camera ID before constructing the VideoCapture?
I need to control the program flow and NOT exit the program because of the VideoCapture class.
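For reference, a minimal sketch that probes candidate IDs up front instead of exiting; an ID that cannot be opened simply leaves isOpened() false, so nothing forces a return. The upper bound of 10 is an arbitrary placeholder:
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    std::vector<int> validIds;
    for (int id = 0; id < 10; ++id)           // placeholder upper bound
    {
        cv::VideoCapture probe(id);
        if (probe.isOpened())                 // invalid IDs just leave this false
            validIds.push_back(id);
    }                                         // probe is released automatically here

    if (validIds.empty())
    {
        std::cout << "no camera found, continuing without video" << std::endl;
        // ...fall back to whatever the program should do instead of exiting
    }
    else
    {
        cv::VideoCapture cap(validIds.front());
        // ...normal processing
    }
    return 0;
}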
↧