processing on a live webcam streaming video???
26 views (last 30 days)
I am using the Image Acquisition Toolbox to obtain the live stream from my webcam. I have my own code for skin color segmentation, which works well on pictures (JPEG, etc.). I am only able to get a snapshot of the live video and then apply the code. I need to apply that code to the live webcam video stream. How can I do it?
1 Comment
David Tarkowski
2011-5-11
I'm not sure what you mean; a video stream is just a series of snapshots at a specified interval. Can you elaborate on what you are doing and how it differs from what you would like?
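For example, you can grab and process frames one by one in a loop. A minimal sketch, assuming the Image Acquisition Toolbox with the 'winvideo' adaptor and device 1 (the `skinSegment` function stands in for the poster's own segmentation code and is hypothetical):

```matlab
% Minimal sketch: treat the stream as a series of snapshots
vid = videoinput('winvideo', 1);  % assumes 'winvideo' adaptor, device 1
for k = 1:100                     % process 100 frames, then quit
    frame = getsnapshot(vid);     % grab the current frame from the stream
    mask  = skinSegment(frame);   % hypothetical: your own segmentation code
    imshow(mask);
    drawnow;                      % flush graphics so the window updates each frame
end
delete(vid);
```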
Answers (3)
Harsha Vardhan Rao Avunoori
2011-5-19
Well, I think you can try using the Image Acquisition Toolbox for your case; it will definitely help you.
For more information on the Image Acquisition Toolbox, have a look at this website.
0 Comments
Hashem Burki
2011-5-25
4 Comments
arun joshi
2018-6-14
Will the output video be as smooth as a live stream? I tried capturing live-feed images from the camera, but the processed output seems to lag, as frames are being skipped during processing. By the time it gets to the next frame after processing one, a number of frames from the live feed have been missed, resulting in a jerky video output. Is there a way to solve this?
Walter Roberson
2018-6-20
"will the output video be as smooth as a live stream"
As we were discussing in https://www.mathworks.com/matlabcentral/answers/7202-processing-on-a-live-webcam-streaming-video#comment_579179, you are using the Automated Driving toolbox. You are limited by the functions in that toolbox that you call. This is probably not an issue of the webcam interface being slow: this is quite likely an issue that you are doing extensive computation with each frame.
No computer can respond instantly. If you want a frame to be ready to output within 1/30 of a second so that you can get 30 fps output, then you need to restrict yourself to computations that that computer can finish within 1/30 of a second. It does not matter whether the camera interface takes 30 microseconds per frame or 30 milliseconds per frame (33 1/3 fps) if the computation you are doing takes (say) 1/4 second. That is true whether that (say) 1/4 second per frame is because you are using slow hardware; or because you are trying to use more RAM than you have available and the program is swapping; or because you wrote inefficient code such as failing to pre-initialize arrays that you are storing into; or because you are just plain trying to compute things more complex than current computer speeds could reasonably get done in that length of time.
The Automated Driving Toolbox is a premium toolbox; very few of the regular volunteers here have access to even the documentation for it, so it is difficult for us to make suggestions (especially since you do not include any code.)
Florian Morsch
2018-6-15
Edited: Florian Morsch
2018-6-15
The way to do it would be:
1: Create a webcam object EDIT: or create a cam object
2: You can get screenshots/snapshots from the object
3: Create a videoplayer object
4: Now you run a while loop: take a snapshot, process it, and give it to the videoPlayer as output, which steps through the frames it gets. The FPS is limited by your code; if you need 1 second to work with a frame then you will get 1 FPS, and if your code is fast enough you might end up with 10-15 FPS or more.
Now it's up to you how fast your code works to get a smoother "video". Real time is mostly not possible, since you need to do some operations on the frame, which means you have processing time (unless your code is really, really good and you can manage to do the processing in under 20 ms, which would be 50 FPS).
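The steps above can be sketched as follows, assuming the MATLAB Support Package for USB Webcams and the Computer Vision Toolbox are installed (`rgb2gray` is just a placeholder for your own processing):

```matlab
% Sketch of steps 1-4: webcam object, snapshot, video player, processing loop
cam = webcam;                  % step 1: create the webcam object
player = vision.VideoPlayer;   % step 3: create a video player object
for k = 1:200                  % step 4: loop over frames (200 here, then stop)
    frame = snapshot(cam);     % step 2: grab one snapshot from the object
    out = rgb2gray(frame);     % placeholder for your own per-frame processing
    step(player, out);         % hand the processed frame to the video player
end
release(player);
clear cam;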
11 Comments
Walter Roberson
2018-6-20
For reasonable frame rates, you could probably store all of the frames in memory as they arrive -- at least until your RAM filled up.
However, storing the frames in memory is not the challenge for you. The challenge for you is getting the processing time for any one frame down to less than the interval between frames. If you were reading frames in at (say) 10 frames per second, but it took (say) 1/5 seconds per frame to do your computation, then if you were to process every frame you would consistently be processing at only half of real-time.
You need to put in a tic() call at the point where you have received a frame and start analyzing it, and a toc() at the point where you have finished analyzing the frame and are ready to display the output. If the time to process one frame exceeds the time interval between frames, then you will fall behind.
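A minimal sketch of that timing check, to run inside the frame loop (`skinSegment` is a hypothetical stand-in for the per-frame analysis):

```matlab
t = tic;                      % start the clock when the frame arrives
out = skinSegment(frame);     % hypothetical: your per-frame analysis
elapsed = toc(t);             % seconds spent analyzing this frame
fprintf('Frame took %.3f s (budget at 30 fps: %.3f s)\n', elapsed, 1/30);
if elapsed > 1/30
    warning('Processing is slower than the frame interval; output will fall behind.');
end
```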
The situation in which to worry about how slow the camera interface is, is the one where you can demonstrate that you can finish analyzing a frame faster than real time, but then find that the display process is holding you up. Ask yourself: "If the frame display process were reduced to zero by a really fast output routine, would I be able to process the frames in real time, or is my processing of the frames taking too long?"
Florian Morsch
2018-6-21
Referring to your comment: you could store the frames and then analyse every frame, but that would be video processing, not real-time processing, if your code is not fast enough for real time in the first place. That would be like taking a video, processing every frame, and then rewatching it. Sure, you would get no "lag" like in the stream processing, but it's not real time anymore; it's a video you watch.
Some things can help you make the code faster (preallocating can reduce the processing time by a lot); see: https://de.mathworks.com/help/matlab/matlab_prog/preallocating-arrays.html
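A small illustration of the preallocation point (the variable names are just for the example):

```matlab
n = 1e6;
% Without preallocation, MATLAB regrows the array on every iteration:
%   slow = []; for k = 1:n, slow(k) = k^2; end
% With preallocation, the array is sized once up front, which is much faster:
fast = zeros(1, n);        % preallocate the full array
for k = 1:n
    fast(k) = k^2;         % store into already-allocated memory
end
```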
See Also
Categories
Find more on MATLAB Support Package for IP Cameras in Help Center and File Exchange