Page 1 of results: 8410 digital items found in 0.029 seconds

Synthetic OCT Data for image processing performance testing

Serranho, Pedro; Maduro, Cristina; Santos, Torcato; Vaz, José Cunha; Bernardes, Rui
Source: Universidade Aberta de Portugal  Publisher: Universidade Aberta de Portugal
Type: Conference Paper or Conference Object
Published in 2011, ENG
Search relevance: 66.16%
The use of synthetic images is needed for testing the performance of image processing methods, in order to establish a ground truth against which performance metrics can be computed. However, such synthetic images often do not represent real applications. The aim of this paper is to build a mathematical model that produces a synthetic noise-free image mimicking a real Optical Coherence Tomography (OCT) B-scan or volume of the human retina, in order to establish a ground truth for filtering performance metrics in this context. Moreover, we suggest a method to add speckle noise to this image based on the speckle noise of a given OCT volume. In this way we establish a replicable method for obtaining a ground truth for image processing performance metrics that actually mimics a real case.
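As an illustrative sketch of the multiplicative speckle model (not the paper's own noise-estimation procedure, which derives its statistics from a real OCT volume), a unit-mean gamma field can corrupt a noise-free image; the `add_speckle` function and its `sigma` parameter below are hypothetical names:

```python
import numpy as np

def add_speckle(clean, sigma=0.5, rng=None):
    """Multiply a noise-free image by unit-mean gamma speckle.

    Speckle in coherent imaging such as OCT is commonly modelled as
    multiplicative noise; a unit-mean gamma field with relative standard
    deviation `sigma` is one standard choice (illustrative only).
    """
    rng = np.random.default_rng(rng)
    shape = 1.0 / sigma**2                      # gamma shape k; mean k*theta = 1
    speckle = rng.gamma(shape, 1.0 / shape, size=clean.shape)
    return clean * speckle

clean = np.full((8, 8), 100.0)                  # toy noise-free "B-scan"
noisy = add_speckle(clean, sigma=0.3, rng=0)
```

Because the clean image is known exactly, any filter's output can be scored against it, which is the point of the ground-truth construction described above.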

Medida da distância reflexo margem por meio de processamento computadorizado de imagens em usuários de lentes de contato rígidas; Margin reflex distance measure by computerized image processing in rigid contact lens wearers

BURMANN, Tiana Gabriela; VALIATTI, Fabiana Borba; CORREA, Zélia Maria; BAYER, Márcia; MARCON, Ítalo
Source: Conselho Brasileiro de Oftalmologia  Publisher: Conselho Brasileiro de Oftalmologia
Type: Scientific Journal Article
Language: POR
Search relevance: 66.2%
PURPOSE: To present a new method, based on computerized image processing, for quantifying the margin reflex distance (MRD). METHODS: We selected patients from the Contact Lens Sector of the Ophthalmology Service of Santa Casa de Porto Alegre, divided into two groups: the first comprised rigid contact lens wearers (63 eyes) and the second patients referred for contact lens fitting with no previous history of lens wear (30 eyes). All patients were photographed with a digital camera (Nikon Coolpix 4300). The margin reflex distance was measured by computerized image processing using the ImageJ program. Patients who had undergone intraocular or eyelid surgery, patients with congenital ptosis, and patients presenting giant papillary conjunctivitis on biomicroscopic examination were excluded from the study. RESULTS: The method used to quantify the margin reflex distance proved quite simple and apparently more sensitive and specific. The mean margin reflex distance was 2.46 mm in the case group and 2.72 mm in the control group. Thus, a trend toward decreasing margin reflex distance with rigid lens wear was observed...

Ambiente para avaliação de algoritmos de processamento de imagens médicas.; Environment for medical image processing algorithms assessment.

Santos, Marcelo dos
Source: Biblioteca Digitais de Teses e Dissertações da USP  Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Doctoral Thesis  Format: application/pdf
Published 20/12/2006, PT
Search relevance: 66.25%
A variety of new image processing methods is constantly being presented to the community, yet few have proved their usefulness in clinical routine. Analyzing and comparing different approaches under a single methodology is essential for qualifying an algorithm's design, but it is difficult to compare the performance and suitability of different algorithms in the same way. The main reason is the difficulty of exhaustively evaluating a piece of software, or at least testing it on a comprehensive and diverse set of clinical cases. Many areas, such as software development and medical training, need a diverse and comprehensive collection of images and associated information. Such collections can be used to develop, test and evaluate new clinical software using public data. This work proposes the development of an environment of medical image databases covering different modalities, freely usable for different purposes. This environment, implemented as a distributed image database architecture, stores medical images together with acquisition information, reports, image processing algorithms, gold standards and post-processed images. The environment also includes a document review model that ensures the quality of the data sets. As an example of its ease and practicality of use...

Pré-processamento digital de imagens obtidas na faixa espectral do infravermelho distante; Digital image processing in the longwave infrared spectral range

Bittencourt, Thiago de Morais Gonçalves
Source: Biblioteca Digitais de Teses e Dissertações da USP  Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's Dissertation  Format: application/pdf
Published 14/09/2012, PT
Search relevance: 66.16%
This work presents the research and development of digital image pre-processing algorithms for uncooled thermal cameras operating in the longwave infrared spectral range. The study of infrared cameras is a strategic matter, with military, civilian and scientific applications. This work covers the design and implementation of the image pre-processing algorithms required to obtain low-noise, high-contrast images, namely: non-uniformity correction, defective pixel replacement, histogram generation, contrast enhancement and pixel output processing, at a rate of 30 frames per second, using an uncooled detector with a 320 x 240 pixel focal plane array. All algorithms were implemented in software to obtain results quickly and thereby ease code validation. Electro-optical characterization results for the assembled system were produced, reporting the main figures of merit that guide the study of this technology: three-dimensional noise components, noise-equivalent power, responsivity and signal-to-noise ratio. The results indicate that the proposed image pre-processing algorithms increase the quality of the displayed image...
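Of the steps listed, non-uniformity correction is the most standard. A minimal two-point calibration sketch, assuming uniform blackbody reference frames (function and parameter names are illustrative, not from the thesis):

```python
import numpy as np

def two_point_nuc(raw, cold, hot, t_cold=0.0, t_hot=1.0):
    """Two-point non-uniformity correction for an uncooled FPA.

    `cold` and `hot` are mean frames captured against uniform blackbody
    sources at known levels; a per-pixel gain and offset map every pixel
    onto the same response. (Classic two-point NUC sketch; the thesis's
    full pipeline also handles dead pixels, histogramming and contrast.)
    """
    gain = (t_hot - t_cold) / np.maximum(hot - cold, 1e-12)
    offset = t_cold - gain * cold
    return gain * raw + offset
```

After correction, every pixel of a uniform scene reads the same value, which is exactly the property the fixed-pattern-noise figures of merit measure.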

Seleção de casos de teste para sistemas de processamento de imagens utilizando conceitos de CBIR; Test Case Selection For Image Processing Systems Using CBIR Concepts.

Narciso, Everton Note
Source: Biblioteca Digitais de Teses e Dissertações da USP  Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's Dissertation  Format: application/pdf
Published 29/10/2013, PT
Search relevance: 66.16%
Image processing systems play an important role in emulating human vision, since much of the information people obtain from the real world comes through images. Developing such systems is a complex task that requires rigorous testing to ensure reliability. In this scenario, test case selection is fundamental, as it helps eliminate redundant and unnecessary test data while seeking to maintain high error-detection rates. The literature offers several test case selection approaches focused on systems with alphanumeric inputs and outputs, but selection for complex systems (e.g. image processing) remains little explored. To contribute to this research field, this work presents a new method, called Tcs&CbIR, which selects and retrieves a subset of images from a large test set. Tests performed with two image processing programs show that the new approach can outperform random selection: in the evaluation context presented, the number of test cases needed to reveal the presence of errors was reduced by up to 87%. The results obtained reveal...
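One generic way CBIR concepts can drive test-case selection is greedy max-min picking over simple image descriptors, so the selected subset stays diverse. This is an illustration of the idea, not the actual Tcs&CbIR method; all names are hypothetical:

```python
import numpy as np

def grey_histogram(img, bins=16):
    """Normalised grey-level histogram used as a simple CBIR descriptor."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def select_diverse(images, k):
    """Greedy max-min selection: repeatedly pick the image farthest
    (in histogram L1 distance) from everything already selected."""
    feats = [grey_histogram(im) for im in images]
    chosen = [0]                                   # seed with the first image
    while len(chosen) < k:
        dists = [min(np.abs(f - feats[c]).sum() for c in chosen)
                 for f in feats]
        for c in chosen:
            dists[c] = -1.0                        # never re-pick an image
        chosen.append(int(np.argmax(dists)))
    return chosen
```

Redundant (near-duplicate) test images end up with small distances to the selected set and are skipped, which is the intuition behind dropping redundant test cases while keeping error-detection ability.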

Exploração do paralelismo em arquiteturas para processamento de imagens e vídeo; Parallelism exploration in architectures for video and image processing

Soares, Andre Borin
Source: Universidade Federal do Rio Grande do Sul  Publisher: Universidade Federal do Rio Grande do Sul
Type: Doctoral Thesis  Format: application/pdf
Language: POR
Search relevance: 66.23%
Video and image processing is currently a research area of great importance due to the increasing use of images in a wide variety of fields: entertainment, surveillance, supervision and control, medicine, and others. The algorithms used for image recognition, compression, decompression, filtering, restoration and enhancement frequently present a computational demand beyond what conventional processors can offer, often requiring the development of dedicated architectures. This document describes work on the design space exploration of architectures for image and video processing using parallel processing. Several characteristics particular to this type of architecture are pointed out. A new technique is presented, in which specialized Processing Elements (PEs) work cooperatively over a network-on-chip communication structure.

Mimetic finite difference methods in image processing

Bazan,C.; Abouali,M.; Castillo,J.; Blomgren,P.
Source: Sociedade Brasileira de Matemática Aplicada e Computacional  Publisher: Sociedade Brasileira de Matemática Aplicada e Computacional
Type: Scientific Journal Article  Format: text/html
Published 01/01/2011, EN
Search relevance: 66.2%
We introduce the use of mimetic methods to the imaging community, for the solution of the initial-value problems ubiquitous in the machine vision and image processing and analysis fields. PDE-based image processing and analysis techniques comprise a host of applications such as noise removal and restoration, deblurring and enhancement, segmentation, edge detection, inpainting, registration, motion analysis, etc. Because of their favorable stability and efficiency properties, semi-implicit finite difference and finite element schemes have been the methods of choice (in that order of preference). We propose a new approach for the numerical solution of these problems based on mimetic methods. The mimetic discretization scheme preserves the continuum properties of the mathematical operators often encountered in the image processing and analysis equations. This is the main contributing factor to the improved performance of the mimetic method approach, as compared to both of the aforementioned popular numerical solution techniques. To assess the performance of the proposed approach, we employ the Catté-Lions-Morel-Coll model to restore noisy images, by solving the PDE with the three numerical solution schemes. For all of the benchmark images employed in our experiments...
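The Catté-Lions-Morel-Coll model evaluates the diffusivity on a smoothed copy of the image. A minimal sketch, substituting a 3x3 box blur for the Gaussian pre-smoothing and an explicit Euler step for the semi-implicit and mimetic schemes the paper actually compares (illustrative only; `kappa` and `dt` are assumed parameters):

```python
import numpy as np

def catte_step(u, dt=0.1, kappa=10.0):
    """One explicit step of regularised Perona-Malik (Catte-type) diffusion."""
    # 3x3 box blur with edge padding stands in for Gaussian regularisation
    pad = np.pad(u, 1, mode="edge")
    smooth = sum(pad[i:i + u.shape[0], j:j + u.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0

    def grads(v):
        """One-sided differences toward the four neighbours."""
        vp = np.pad(v, 1, mode="edge")
        n = vp[:-2, 1:-1] - v
        s = vp[2:, 1:-1] - v
        e = vp[1:-1, 2:] - v
        w = vp[1:-1, :-2] - v
        return n, s, e, w

    c = lambda g: 1.0 / (1.0 + (g / kappa) ** 2)  # Perona-Malik diffusivity
    gn, gs, ge, gw = grads(smooth)                # diffusivity from smoothed image
    n, s, e, w = grads(u)                         # fluxes from the image itself
    return u + dt * (c(gn) * n + c(gs) * s + c(ge) * e + c(gw) * w)
```

Iterating this step denoises flat regions while the diffusivity `c` suppresses smoothing across strong (smoothed-image) edges; semi-implicit and mimetic schemes allow much larger stable time steps than this explicit version.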

Geological image processing of petrographic thin sections using the rotating polarizer stage

Goodchild, J. Scott.
Source: Brock University  Publisher: Brock University
Type: Electronic Thesis or Dissertation
Language: ENG
Search relevance: 66.24%
One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour / intensity) of a mineral grain will vary with the orientation of the crystal lattice relative to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data-gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axis measurements need to be taken) in both plane- and cross-polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets...

Image processing and analysis for autonomous grapevine pruning.

Gao, Ming
Source: University of Adelaide  Publisher: University of Adelaide
Type: Doctoral Thesis
Published in 2011
Search relevance: 66.19%
In recent years, efforts have been made to automate vineyard operations to cap ever-increasing labour costs. However, one operation that has not been completely automated is grapevine pruning. A robotic machine for grapevine pruning needs to respond to the changing physical characteristics of the environment, and to date no algorithm can accurately identify appropriate pruning positions in a variety of environmental conditions. The aim of this research was therefore to develop a new algorithm using image processing, image analysis and a stereo vision system to determine pruning positions and make automatic grapevine pruning possible. To obtain the pruning positions accurately and automatically, images taken from two cameras are processed and analysed. Utilizing the latest computer vision techniques, the algorithm takes three steps before the final cutting positions are derived. First, the uploaded images are pre-processed in the image processing phase, during which a binary image is obtained from the original image. Second, image analysis techniques are employed to identify the different parts of the grapevine and obtain the 2D positions of the cutting points. Novel algorithms are proposed to locate the cordon...

Dignitätskriterien der Mammasonographie unter Anwendung des Real-Time Compound-Images in Kombination mit dem XRES Adaptive Image Processing; Sonographic Criteria for the Differentiation of Benign and Malignant Breast Lesions using Real-Time Spatial Compound Imaging in Combination with XRES Adaptive Image Processing

Roessner, Lisa Charlotte
Source: University of Tübingen  Publisher: University of Tübingen
Type: Dissertation
Language: DE_DE
Search relevance: 66.2%
A total of 460 patients with a mean age of 50.9 years (range: 17.5-91.3 years) were included in this retrospective study. All patients underwent sonography using the image optimization techniques "SonoCT Real-Time Compound Imaging" and "XRES Adaptive Image Processing". The iU 22 and HD11 ultrasound systems from Philips (Hamburg) were used. All findings were assessed by 7 examiners experienced in ultrasound with respect to criteria for benign/malignant differentiation and classified according to BI-RADS. The criteria comprised shape, axis, margin, echogenicity and sound transmission. The distribution of BI-RADS categories was recorded and evaluated with respect to positive predictive value (PPV), negative predictive value (NPV), sensitivity and false positive rate (FPR). The individual criteria were examined and compared in a regression analysis with respect to their diagnostic value. For all patients, the pathological result of the tissue sample was available at the time of inclusion in the study. First, the frequency distribution of the histological results was determined: 269 benign and 191 malignant findings in total. Among the benign findings, fibrocystic or simple fibrous mastopathy was the most frequent, at 93/269 (35%)...

A recursive algorithm for digital Image Processing using Local Statistics

de Figueiredo, Rui J.P.
Source: Rice University  Publisher: Rice University
Type: Report
Language: ENG
Search relevance: 66.16%
Tech Report; An algorithm is presented for digital image processing based on local statistics. The algorithm constitutes a recursive implementation of an approach proposed and implemented nonrecursively by J.S. Lee (Naval Research Laboratory Report 8192, March 1978). Calculations show that the proposed recursion introduces considerable improvement in efficiency.
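Lee's nonrecursive local-statistics filter, the baseline that this report makes recursive, can be sketched as follows; window size and the noise-variance parameter are illustrative choices, and the report's recursive reformulation is not reproduced here:

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.01):
    """Nonrecursive local-statistics (Lee) filter.

    Each pixel is pulled toward its local mean; the gain approaches 0 in
    flat regions (local variance ~ noise variance, strong smoothing) and
    1 near edges (high local variance, detail preserved).
    """
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    # stack all win*win shifted views, then take per-pixel mean/variance
    windows = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(win) for j in range(win)])
    mean = windows.mean(axis=0)
    var = windows.var(axis=0)
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)
```

The recursion studied in the report reuses previously computed statistics as the window slides, which is where the claimed efficiency improvement comes from.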

Image processing for displacement measurements

Almeida, Maria da Graça Vieira de Brito
Source: Universidade Nova de Lisboa  Publisher: Universidade Nova de Lisboa
Type: Doctoral Thesis
Published 09/2014, ENG
Search relevance: 66.21%
Since the invention of photography, humans have been using images to capture, store and analyse the events they are interested in. With developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. Image processing's principal qualities are flexibility, adaptability and the ability to process a large amount of information easily and quickly. Successful examples of applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because traditionally these measurements require complex and expensive equipment plus time-consuming calibration. Image processing can be an inexpensive and effective tool for load displacement measurements. Using an adequate image acquisition system and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. There are already several commercial software packages on the market; however, they are sold at high cost. In this work, block-matching algorithms are used to compare the results from image processing with the data obtained from physical transducers during laboratory load tests. To test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
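The block-matching idea can be sketched as an exhaustive sum-of-absolute-differences (SAD) search; the function name, block size and search range below are illustrative, not the thesis's implementation:

```python
import numpy as np

def match_block(ref, cur, top, left, size=8, search=4):
    """Find the displacement of a reference block by exhaustive SAD search.

    Compares the block at (top, left) in `ref` against every candidate
    position within +-`search` pixels in `cur`; returns (dy, dx) of the
    best match. Sub-pixel refinement, used for high-precision displacement
    measurement, is omitted in this sketch.
    """
    block = ref[top:top + size, left:left + size]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue                     # candidate falls outside the image
            sad = np.abs(cur[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx
```

Tracking marked blocks across frames of a load test gives the displacement field that is then compared against the physical transducer readings.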

Algorithms for selecting parameters of combination of acyclic adjacency graphs in the problem of texture image processing

Viet Sang, Dinh
Source: Universitat Autònoma de Barcelona  Publisher: Universitat Autònoma de Barcelona
Type: Scientific Journal Article  Format: application/pdf
Published in 2014, ENG
Search relevance: 66.22%
Advisors: Prof. Sergey Dvoenko. Date and location of PhD thesis defense: 24 October 2013, Dorodnicyn Computing Centre of the Russian Academy of Sciences; Researchers retain great interest in the problem of processing interrelated data arrays, including images. In the modern theory of machine learning, the problem of image processing is often viewed as a problem in the field of graph models. Image pixels constitute an array of interrelated elements, with the interrelations between array elements represented by an adjacency graph. The problem of image processing is often solved by minimizing the Gibbs energy associated with the corresponding adjacency graphs. The crucial disadvantage of the Gibbs approach is that it requires empirically specifying appropriate energy functions on cliques. In the present work, we investigate a simpler, but not less effective, model that is an expansion of Markov chain theory. Our approach to image processing is based on the idea of replacing arbitrary adjacency graphs by tree-like (acyclic, in general) ones and linearly combining acyclic Markov models in order to get the best quality of restoration of hidden classes. In this work, we propose algorithms for tuning the combination of acyclic adjacency graphs.

A Window-Oriented User-Interface for Image Processing Systems on UNIX based workstations.

Mehta, Sandeep
Source: Rochester Institute of Technology  Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: EN_US
Search relevance: 66.25%
The advent of digital image processing has led to the availability of a very large number of software systems. However, there is an absence of a cohesive, general-purpose image processing environment isolated from the hardware or the underlying operating system. The trend in scientific and engineering computing is towards distributed workstations, especially with the availability of high-performance microprocessors. Hence there is a need for a unified software environment on workstations for use in scientific applications. This thesis describes the design and implementation of a window-oriented user interface. The interface runs on top of an image processing system running on workstations under the UNIX environment, and uses the network-transparent X Window System. The visual, shell-like environment is targeted at end users with a scientific background who need image processing capabilities, but not necessarily a computer background. The user interface is primarily a tool for a single user, although the underlying system operates in a multiuser, multitasking environment. The objective is to provide rapid, easy and visual image processing at a session level, integrating graphics capabilities with high-speed computing. All processing capabilities provided at the command line level...

Quadtree algorithms for image processing

Benjamin, Jim Isaac
Source: Rochester Institute of Technology  Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: EN_US
Search relevance: 66.22%
The issue of constructing a computer-searchable image encoding algorithm for complex images, and the effect of this encoding on image processing algorithms, are considered. A regular decomposition of the image (picture) area into successively smaller bounded homogeneous quadrants is defined. This hierarchical search is logarithmic, and the resulting picture representation is shown to enable rapid access to the image data, facilitating geometric image processing applications (e.g. scaling, rotation) and efficient storage. The approach is known as quadtree (Q-Tree) encoding. The applications in this thesis are primarily to grayscale pixel images as opposed to simple binary images.
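The regular decomposition into homogeneous quadrants can be sketched as follows, for power-of-two grayscale images; the homogeneity test (max-min range against a tolerance) and the leaf format are illustrative choices, not the thesis's encoding:

```python
import numpy as np

def quadtree(img, y=0, x=0, size=None, tol=0):
    """Recursively split a square image into homogeneous quadrants.

    A region is a leaf when it is a single pixel or its grey-level range
    is within `tol`; otherwise it splits into four half-size quadrants.
    Returns a flat list of leaves as (y, x, size, mean_value) tuples.
    """
    if size is None:
        size = img.shape[0]
    region = img[y:y + size, x:x + size]
    if size == 1 or region.max() - region.min() <= tol:
        return [(y, x, size, region.mean())]
    h = size // 2
    leaves = []
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += quadtree(img, y + dy, x + dx, h, tol)
    return leaves
```

Because the tree depth is logarithmic in image size, locating the leaf containing any pixel takes O(log n) steps, which is the "hierarchical search is logarithmic" property noted above.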

A digital image processing system for slow scan television

Schueckler, James
Source: Rochester Institute of Technology  Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: EN_US
Search relevance: 66.25%
This paper describes a low-cost but powerful digital image processing system. Although general purpose, it was designed for experimentation with amateur radio slow scan television (SSTV). The project involved conceptualizing, designing, building, programming, and operating the complete integrated system. The image format is the SSTV standard: 128 picture elements (pixels) per line, 128 lines, 4 bits per pixel. The IEEE Standard 696 (S-100) microcomputer bus was used for system interconnect to provide flexibility and expandability. Several microcomputer boards were purchased, and two image interface boards were designed and built by the author. One image interface board provides for transmitting and receiving images at SSTV rates. The other board can accept a single frame image from a standard TV camera, store it in memory, and continuously output an image from memory to a video monitor. Any image in memory can be analyzed or modified, pixel by pixel, by programs running on the system's Z80 microprocessor. Simple keyboard commands can initiate many digital image processing programs that were written for the system, including powerful point processing and two-dimensional convolution functions.

Image processing applied to small format displays

Lillie, Jeffrey; Oberoi, Anirudh
Source: SID  Publisher: SID
Type: Scientific Journal Article  Format: 173902 bytes; application/pdf
Language: EN
Search relevance: 66.22%
The application of TFT displays in handsets requires new display system architectures with on-chip image processing in order to accommodate a direct camera connection. Other image processing techniques, such as stochastic dithering, are required to reduce the size of the on-chip frame buffer, thereby reducing power and silicon real estate while preserving high image quality. The purpose of this paper is to explore the applicability of certain key image processing techniques that add value to display column drivers with respect to their typical usage. The benefits and caveats of these image processing methods with respect to power and system bandwidth are articulated, and their hardware requirements are discussed.; Final paper, presented at SID Conference, Seattle, WA, May 2004; National Semiconductor
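Stochastic dithering of the kind mentioned can be sketched by adding one quantisation step of uniform noise before truncating to fewer bits per pixel, trading banding artefacts for fine grain so a smaller frame buffer suffices. A generic software illustration, not the paper's hardware scheme:

```python
import numpy as np

def stochastic_dither(img, levels=8, rng=None):
    """Quantise an 8-bit-range image to `levels` grey levels with random
    thresholds: uniform noise spanning one quantisation step is added
    before truncation, so the average output tracks the input intensity.
    """
    rng = np.random.default_rng(rng)
    step = 256 / levels
    noisy = img + rng.uniform(0, step, size=img.shape)
    return np.clip((noisy // step).astype(int), 0, levels - 1)
```

Storing 3 bits per pixel instead of 8 cuts the frame buffer to under half the size, which is the power and silicon-area saving the paper targets.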

Image transformation into device dependent color printer description using 4th-order polynomial regression and LabView object oriented programming development of image processing modules

Mongeon, Michael
Source: Rochester Institute of Technology  Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: EN_US
Search relevance: 66.27%
This thesis investigates the development of printer device profiles used in color document printing environments where devices with intrinsically different gamut capabilities communicate with one another in a common (CIELAB) color space. While the main thrust of this activity focuses on the output printer, namely the Xerox 5760, and its rendition of a device-independent image description, characterizations are provided that compare the relative areas of photographic, monitor, and printer gamuts using a visual hue-leaf comparison between devices at standard hue angles determined from a Kodak photographic Q60C target. The printer is modeled using 4th-order polynomial regression, which maps the device-independent CIELAB image representation into device-dependent printer CMYK. This technique yields 1.57 ΔEavg over the 360-sample training data set. Some key properties of the proposed calibration method are as follows: linearized CMYK tone reproduction curves with respect to ΔEpaper, to improve the distribution of calibration data in color space; application of the GCR strategy and linearization to the calibration target, while performing the regression on the measured CIELAB and original CMY values; and GCR strategies that do not remove CMY, so that printer gamut is maximized and the regression determines the appropriate CMY removal. Four GCR strategies were explored. Colorimetric analysis indicates that the 4th-order correction plus printer stability accounts for 2.5 to 3.2 ΔEavg over 200 "in gamut" colors in the Q60 across the four GCR strategies. Approximately 1.0 ΔEavg was attributed to scanner calibration, and an additional 1.0 ΔEavg to out-of-gamut colors. A library of image processing algorithms is included...
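The regression itself can be sketched as ordinary least squares over 4th-order monomial features of the CIELAB coordinates. This is a generic illustration with hypothetical function names, not the thesis's calibration pipeline (which fits measured printer patches, applies GCR, and linearizes tone curves):

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(lab, order=4):
    """Monomial features of (L*, a*, b*) up to the given total order."""
    cols = [np.ones(len(lab))]
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(3), d):
            cols.append(np.prod(lab[:, idx], axis=1))
    return np.column_stack(cols)

def fit_printer_model(lab, cmyk, order=4):
    """Least-squares fit mapping CIELAB to printer CMYK.

    `lab` is (N, 3) device-independent input, `cmyk` is (N, 4) measured
    device values; in the thesis the training set is the 360 measured
    calibration patches.
    """
    A = poly_features(lab, order)
    coeffs, *_ = np.linalg.lstsq(A, cmyk, rcond=None)
    return coeffs
```

With order 4 there are 35 monomial terms per output channel, so a few hundred well-distributed training patches (hence the linearized tone curves) keep the fit well conditioned.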

Discrete wavelet transform core for image processing applications

Savakis, Andreas; Carbone, Richard
Source: The International Society for Optical Engineering (SPIE)  Publisher: The International Society for Optical Engineering (SPIE)
Type: Scientific Journal Article
Language: EN_US
Search relevance: 66.21%
This paper presents a flexible hardware architecture for performing the Discrete Wavelet Transform (DWT) on a digital image. The proposed architecture uses a variation of the lifting scheme technique and provides advantages that include small memory requirements, a fixed-point arithmetic implementation, and a small number of arithmetic computations. The DWT core may be used for image processing operations such as denoising and image compression. For example, the JPEG2000 still image compression standard uses the Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWTs for lossless and lossy image compression respectively. Simple wavelet image denoising techniques yielded improved images of up to 27 dB PSNR. The DWT core is modeled using MATLAB and VHDL, and the VHDL model is synthesized to a Xilinx FPGA to demonstrate hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons. The execution time for performing both DWTs is nearly identical, at approximately 14 clock cycles per image pixel for one level of DWT decomposition. The hardware generated for the CDF 5/3 is around 15,000 gates, using only 5% of the Xilinx FPGA hardware area, at a 2.185 MHz maximum clock speed and 24 mW power consumption.
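The reversible integer CDF 5/3 lifting steps used for JPEG2000 lossless coding can be sketched in one dimension (even-length signals, symmetric boundary extension); this is a software model for illustration, not the paper's VHDL core:

```python
import numpy as np

def cdf53_forward(x):
    """One level of the integer CDF 5/3 DWT via lifting (1-D, even length).

    predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
    update:  s[n] = x[2n]   + floor((d[n-1] + d[n] + 2) / 4)
    with symmetric extension at the boundaries; fully reversible in
    integer arithmetic, hence suitable for lossless compression.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])          # symmetric extension
    d = odd - (even + right) // 2                  # predict (high-pass)
    left = np.insert(d[:-1], 0, d[0])              # symmetric extension
    s = even + (left + d + 2) // 4                 # update (low-pass)
    return s, d

def cdf53_inverse(s, d):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    left = np.insert(d[:-1], 0, d[0])
    even = s - (left + d + 2) // 4
    right = np.append(even[1:], even[-1])
    odd = d + (even + right) // 2
    out = np.empty(s.size + d.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out
```

Because each lifting step only adds or subtracts a rounded function of the other band, the inverse simply repeats the same computations with the signs flipped, which is also why lifting maps so cleanly onto small fixed-point hardware.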

A Study of the use of SIMD instructions for two image processing algorithms

Welch, Eric
Source: Rochester Institute of Technology  Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Language: EN_US
Search relevance: 66.17%
Many media processing algorithms suffer from long execution times, which are most often not acceptable from an end-user point of view. Recently, this problem has been exacerbated by higher-resolution media. One possible solution is the use of Single Instruction Multiple Data (SIMD) architectures, such as ARM's NEON. These architectures exploit the parallelism in media processing algorithms by operating on multiple pieces of data with just one instruction. SIMD instructions can significantly decrease the execution time of an algorithm, but require more time to implement. This thesis studies the use of SIMD instructions on a Cortex-A8 processor with a NEON SIMD coprocessor. Two image processing algorithms, bilinear interpolation and distortion, are altered to process multiple pixels or colors simultaneously using the NEON coprocessor's instruction set. The distortion algorithm is also altered at the assembly level by removing memory accesses and branches, adding data prefetch instructions, and interlacing ARM and NEON instructions. Altering the assembly code requires a deeper understanding of the code and more time, but allows for more control and higher speedups. The theoretical speedup for the bilinear interpolation and distortion algorithms is three and four times respectively. The actual measured speedup for the bilinear interpolation algorithm is more than two times...
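The bilinear interpolation kernel being vectorised can be sketched in NumPy, where whole-array operations play the role of NEON's one-instruction-many-pixels execution (an illustration of the data-parallel structure, not the thesis's assembly):

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample `img` at fractional (row, col) coordinates, vectorised
    over entire coordinate arrays.

    Every arithmetic line below operates on all sample points at once,
    mirroring how a SIMD lane processes several pixels per instruction.
    """
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    fy, fx = ys - y0, xs - x0                   # fractional offsets
    tl = img[y0, x0]                            # four neighbouring pixels
    tr = img[y0, x0 + 1]
    bl = img[y0 + 1, x0]
    br = img[y0 + 1, x0 + 1]
    top = tl + fx * (tr - tl)                   # blend along x
    bot = bl + fx * (br - bl)
    return top + fy * (bot - top)               # blend along y
```

The per-sample work is a fixed sequence of multiplies and adds with no data-dependent branching, which is exactly what makes the kernel a good fit for SIMD lanes.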