Search results for: image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5835


4845 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed from the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired at the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, such a comparison is only possible when the image is ortho-rectified in the same co-ordinate system as the existing map. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decomposition of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters for each video frame with respect to the others; (4) finding the absolute exterior orientation parameters, using self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
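
As a minimal illustration of step (1) and of the geometric warping that ortho-rectification ultimately performs, the sketch below decomposes a video file into frames with OpenCV and warps one frame with a homography. The file name and the homography values are placeholders; in the paper the transformation would follow from the interior and exterior orientation parameters recovered by self-calibration, which is not reproduced here.

```python
import cv2
import numpy as np

def decompose_video(path, step=1):
    """Step (1): split a video stream into individual frames."""
    cap = cv2.VideoCapture(path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

# Hypothetical 3x3 homography mapping image coordinates to map coordinates;
# in practice it would be derived from the camera orientation parameters.
H = np.array([[1.0, 0.02, 5.0],
              [0.01, 1.0, -3.0],
              [0.0, 0.0, 1.0]])

frames = decompose_video("uav_stream.avi")
if frames:
    h, w = frames[0].shape[:2]
    ortho_frame = cv2.warpPerspective(frames[0], H, (w, h))
```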

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 478
4844 View Synthesis of Kinetic Depth Imagery for 3D Security X-Ray Imaging

Authors: O. Abusaeeda, J. P. O. Evans, D. Downes

Abstract:

We demonstrate the synthesis of intermediate views within a sequence of X-ray images that exhibit depth from motion, or the kinetic depth effect, in a visual display. Each synthetic image replaces the requirement for a linear X-ray detector array during the image acquisition process. The scale invariant feature transform (SIFT), in combination with epipolar morphing, is employed to produce the synthetic imagery. A comparison between synthetic and ground truth images is reported to quantify the performance of the approach. Our work is a key aspect in the development of a 3D imaging modality for the screening of luggage at airport checkpoints. This programme of research is in collaboration with the UK Home Office and the US Dept. of Homeland Security.
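
The correspondence step described above can be illustrated with standard tools. The sketch below (file names are assumptions, and it requires an OpenCV build with SIFT support) matches SIFT keypoints between two adjacent X-ray views; the resulting correspondences are the kind of input an epipolar-morphing stage would interpolate between.

```python
import cv2

left = cv2.imread("xray_view_0.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("xray_view_1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)

matcher = cv2.BFMatcher()
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]  # Lowe ratio test
print(f"{len(good)} putative correspondences available for epipolar morphing")
```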

Keywords: X-ray, kinetic depth, KDE, view synthesis

Procedia PDF Downloads 265
4843 Analysis of Two Phase Hydrodynamics in a Column Flotation by Particle Image Velocimetry

Authors: Balraju Vadlakonda, Narasimha Mangadoddy

Abstract:

The hydrodynamic behavior in a laboratory flotation column was analyzed using particle image velocimetry. For a complete characterization of column flotation, it is necessary to determine the flow velocity induced by the bubbles in the liquid phase, the bubble velocity, and the bubble characteristics: diameter, shape, and bubble size distribution. An experimental procedure for analyzing simultaneous, phase-separated velocity measurements in two-phase flows was introduced. The non-invasive PIV technique was used to quantify the instantaneous flow field, as well as the time-averaged flow patterns, in selected planes of the column. The particle image velocimetry (PIV) technique, combining fluorescent tracer particles, shadowgraphy, and digital phase separation with a masking technique, was used to measure the bubble velocity as well as the Reynolds stresses in the column. Axial and radial mean velocities as well as fluctuating components were determined for both phases by averaging a sufficient number of double images. The bubble size distribution was cross-validated with a high-speed video camera. The average turbulent kinetic energy of the bubbles was analyzed. Different air flow rates were considered in the experiments.
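
For context, the core operation behind any PIV measurement is the cross-correlation of interrogation windows between two consecutive (or double-frame) images; the displacement of the correlation peak gives the local velocity once divided by the time separation. The sketch below shows only this generic principle, not the authors' full phase-separation pipeline.

```python
import numpy as np
from scipy.signal import correlate2d

def window_displacement(win_a, win_b):
    """Displacement (dx, dy) of the cross-correlation peak between two
    interrogation windows taken from a PIV image pair."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = correlate2d(b, a, mode="full")
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak_y - (win_a.shape[0] - 1)
    dx = peak_x - (win_a.shape[1] - 1)
    return dx, dy

# Velocity follows as displacement * pixel_size / dt for each window.
```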

Keywords: particle image velocimetry (PIV), bubble velocity, bubble diameter, turbulent kinetic energy

Procedia PDF Downloads 510
4842 A Framework of Product Information Service System Using Mobile Image Retrieval and Text Mining Techniques

Authors: Mei-Yi Wu, Shang-Ming Huang

Abstract:

Online shoppers nowadays often search for product information on the Internet using product keywords. To use this kind of information searching model, shoppers should have a preliminary understanding of the products they are interested in and choose the correct keywords. However, if a product is encountered for the first time (for example, the clothes or backpack worn by a passer-by, whose brand is unknown), it cannot be retrieved due to insufficient information. In this paper, we discuss and study E-commerce applications using image retrieval and text mining techniques. We design an E-commerce application system with a three-layer architecture to provide users with product information. The system can automatically search for and retrieve similar images and their corresponding web pages on the Internet according to target pictures taken by users. Text mining techniques are then applied to extract important keywords from these retrieved web pages, and a web crawler uses these keywords to search for prices in different online shopping stores. Finally, the users can obtain product information, including photos and prices, for their favorite products. The experiments show the efficiency of the proposed system.
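
The keyword-extraction layer can be illustrated with a standard TF-IDF weighting of the retrieved page texts; the top-weighted terms then become queries for the price crawler. The page texts below are hypothetical placeholders for pages returned by the image-retrieval layer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "lightweight travel backpack 30L water resistant nylon daypack",
    "hiking backpack 30L with laptop sleeve, nylon, water resistant",
    "classic leather handbag, brown, adjustable shoulder strap",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(pages)
terms = vectorizer.get_feature_names_out()

# Top-weighted terms of the first retrieved page become crawler keywords.
weights = tfidf[0].toarray().ravel()
keywords = [terms[i] for i in weights.argsort()[::-1][:5]]
print(keywords)
```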

Keywords: mobile image retrieval, text mining, product information service system, online marketing

Procedia PDF Downloads 359
4841 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three neural networks, one for number segmentation, one for number detection, and one for number recognition, all of which are coupled to one another. All networks were convolutional and were trained on the MNIST dataset. It was assumed that the images had a lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight neighborhoods of the focus were checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; to this end it had 16 outputs, arranged as "go east", "don't go east", "go south east", "don't go south east", "go south", "don't go south", and so on with respect to the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered dark pixels. During testing the network scans and looks for the first dark pixel. From there on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground-truth bounds of a number were known during training, the detection network was trained to output "number not found" until the bounds were met, and "number found" once they were. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognizing the numbers 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters.
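
The recognition unit is described as a standard CNN with ten outputs over 28x28 inputs. A minimal Keras sketch of such a network is shown below; the layer sizes are illustrative assumptions, since the paper does not specify the architecture, and the segmentation and detection networks are not reproduced.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A standard CNN taking 28x28 grayscale images and producing 10 class
# scores (digits 0-9), as used for the recognition stage.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```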

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 161
4840 The Influence of Different Technologies on the Infiltration Properties and Soil Surface Crusting Processing in the North Bohemia Region

Authors: Miroslav Dumbrovsky, Lucie Larisova

Abstract:

The infiltration characteristic of the soil surface is one of the major factors that determine the potential soil degradation risk. The physical, chemical, and biological characteristics of soil are changed by soil processing. The infiltration ability of soil plays an important role in soil and water conservation. The subject of this contribution is the evaluation of the influence of conventional tillage and reduced tillage technology on the soil surface crusting process and the infiltration properties of the soil in the North Bohemia region. Field experimental work was carried out in the years 2013-2016 on a Cambisol with medium-heavy clayey soil. The research was conducted on sloping, erosion-endangered blocks of compacted arable land. The areas were chosen each year such that one of the experimental areas was managed with conventional tillage technologies and the other with reduced tillage technologies. Intact soil samples were taken into Kopecký´s cylinders at three landscape positions, at depths of 10 cm (representing topsoil) and 30 cm (representing subsoil). The cumulative infiltration was measured using a mini-disc infiltrometer near the sampling points. The Zhang method (1997), which provides an estimate of the unsaturated hydraulic conductivity K(h), was used for the evaluation of the mini-disc infiltrometer infiltration tests. The soil profile processed by conventional tillage showed a higher degree of compaction and soil surface crusting. The bulk density was between 1.10–1.67 g.cm⁻³, compared to the land processed by the reduced tillage technology, where the values were between 0.80–1.29 g.cm⁻³. Unsaturated hydraulic conductivity values were about one-third higher under the reduced tillage technology.
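
For readers unfamiliar with the Zhang (1997) evaluation, the cumulative infiltration I(t) recorded with a mini-disc infiltrometer is fitted to I = C1·t + C2·√t, and the unsaturated hydraulic conductivity is then K = C1/A, where A depends on the soil's van Genuchten parameters, the disc radius, and the applied suction. The sketch below fits that relation to illustrative data; the measurement values and the coefficient A are hypothetical, not those of the study.

```python
import numpy as np

t = np.array([30, 60, 120, 180, 240, 300], dtype=float)   # time, s (example data)
I = np.array([0.15, 0.26, 0.45, 0.61, 0.76, 0.90])        # cumulative infiltration, cm (example data)

x = np.sqrt(t)
X = np.column_stack([x**2, x])                 # I = C1*x^2 + C2*x, no intercept
(C1, C2), *_ = np.linalg.lstsq(X, I, rcond=None)

A = 2.4   # hypothetical van Genuchten-based coefficient for the tested soil and suction
K = C1 / A
print(f"C1 = {C1:.4g} cm/s, K(h) = {K:.4g} cm/s")
```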

Keywords: soil crusting processing, unsaturated hydraulic conductivity, cumulative infiltration, bulk density, porosity

Procedia PDF Downloads 247
4839 Depth Estimation in DNN Using Stereo Thermal Image Pairs

Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge

Abstract:

Depth estimation using stereo images is a challenging problem in computer vision, and many different studies have been carried out to solve it. With advances in machine learning, the problem is now often tackled with neural network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged because it gives better results than the visible spectrum under some conditions. At this point, we recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.
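
Independently of the network used, the disparity map it predicts converts to metric depth through the standard stereo relation depth = f·B/d, with focal length f and baseline B of the calibrated thermal pair. The values in the sketch below are placeholders.

```python
import numpy as np

f_px = 640.0        # focal length in pixels (assumed calibration value)
baseline_m = 0.25   # baseline of the thermal camera pair in metres (assumed)

# Stand-in for a disparity map predicted by a network such as PSMNet or FADNet (pixels).
disparity = np.random.uniform(1.0, 64.0, size=(480, 640))

depth_m = f_px * baseline_m / np.maximum(disparity, 1e-6)
```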

Keywords: thermal stereo matching, deep neural networks, CNN, depth estimation

Procedia PDF Downloads 279
4838 Dynamic Foot Pressure Measurement System Using Optical Sensors

Authors: Tanapon Keatsamarn, Chuchart Pintavirooj

Abstract:

Foot pressure measurement provides necessary information for diagnosing diseases, foot insole design, disorder prevention, and other applications. In this paper, a dynamic foot pressure measurement system is presented for measuring pressure with high resolution and accuracy. The system consists of a hardware system and a software system. The hardware system uses a transparent acrylic plate on a steel base. Glossy white paper is placed on top of the transparent acrylic plate, and the system is covered with black acrylic to block external light. Light from an LED strip enters around the edges of the transparent acrylic plate. The optical sensors, digital cameras, are placed underneath the acrylic plate facing upwards. They are connected to the software system, which processes the images and records the foot pressure video as an AVI file. The software system is developed in Visual Studio 2017 using the OpenCV library.
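
A minimal acquisition loop of the kind described, grabbing frames from a camera beneath the plate and writing them to an AVI file with OpenCV, might look like the sketch below; the device index, resolution, codec, and the brightness threshold used as a crude contact proxy are all assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)                           # camera under the acrylic plate
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter("foot_pressure.avi", fourcc, 30.0, (640, 480))

for _ in range(300):                                # roughly 10 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Brighter contact regions serve here as a rough proxy for higher pressure.
    _, contact = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    writer.write(frame)

cap.release()
writer.release()
```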

Keywords: foot, foot pressure, image processing, optical sensors

Procedia PDF Downloads 247
4837 Edge Detection Using Multi-Agent System: Evaluation on Synthetic and Medical MR Images

Authors: A. Nachour, L. Ouzizi, Y. Aoura

Abstract:

Recent developments in multi-agent systems have opened a new research field in image processing. Several algorithms are used simultaneously and improved in different applications, while new methods are investigated. This paper presents a new automatic method for edge detection using several agents and many different actions. The proposed multi-agent system is based on parallel agents that locally perceive their environment, that is to say, pixels and additional environmental information. This environment is built using Vector Field Convolution, which attracts free agents to the edges. Problems of partial or hidden edges and of edge linking are solved through cooperation between agents. The presented method was implemented and evaluated using several examples on different synthetic and medical images. The obtained experimental results suggest that this approach detects edges efficiently and accurately.

Keywords: edge detection, medical MR images, multi-agent systems, vector field convolution

Procedia PDF Downloads 391
4836 Visual Search Based Indoor Localization in Low Light via RGB-D Camera

Authors: Yali Zheng, Peipei Luo, Shinan Chen, Jiasheng Hao, Hong Cheng

Abstract:

Most traditional visual indoor navigation algorithms and methods only consider localization in ordinary daylight, while in this paper we focus on indoor re-localization in low light. As RGB images are degraded in low light, the less discriminative infrared and depth image pairs captured by an RGB-D camera are taken as the input, and the most similar candidates are searched, as the output, from a database built in the bag-of-words framework. Epipolar constraints can then be used to re-localize the query infrared and depth image sequence. We evaluate our method on two datasets captured by Kinect2. The results demonstrate very promising re-localization performance for indoor navigation systems in low-light environments.

Keywords: indoor navigation, low light, RGB-D camera, vision based

Procedia PDF Downloads 460
4835 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam

Authors: Van Chung Nguyen

Abstract:

The purpose of this study is to examine the current status of the value chain for fish distribution at Quang Binh Fishing Port, using 360 research samples in which the research subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and the actors participating in the value chain, and to analyze the value-added process of these actors according to market channels. The analysis results show that fishermen directly catch fish with high economic efficiency, but processing enterprises and, especially, retailers are the agents that obtain higher added value. Processing enterprises play a role that is not really clear due to outdated processing technology; in contrast, retailers obtain the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. Therefore, the selling price of fish to the market is still high compared to the abundant fish resources, leading to low consumption and limiting exports due to the quality of processing enterprises. This reduces demand and fishing capacity, and productivity is lower than its potential. To improve the fish value chain at fishing ports, it is necessary to focus on improving product quality, strengthening linkages between actors, building brands and product consumption markets, and, at the same time, improving the capacity of export processing enterprises.

Keywords: Quang Binh fishing port, value chain, market, distribution channel

Procedia PDF Downloads 73
4834 Recent Development on Application of Microwave Energy on Process Metallurgy

Authors: Mamdouh Omran, Timo Fabritius

Abstract:

A growing interest in microwave heating has emerged recently. Many researchers have begun to pay attention to microwave energy as an alternative technique for processing various primary and secondary raw materials. Compared to conventional methods, microwave processing offers several advantages, such as selective heating, rapid heating, and volumetric heating. The present study gives a summary of our recent work related to the use of microwave energy for the recovery of valuable metals from primary and secondary raw materials. The research mainly focuses on: the application of microwaves for the recovery and recycling of metals from different metallurgical industry wastes (i.e., electric arc furnace (EAF) dust, blast furnace (BF) and basic oxygen furnace (BOF) sludge), and the application of microwaves for upgrading and recovering valuable metals from primary raw materials (i.e., iron ore). The results indicated that microwave heating is a promising and effective technique for processing primary and secondary steelmaking wastes. After microwave treatment of iron ore for 60 s at 900 W, grindability increased by about 28.30%. Wet high-intensity magnetic separation (WHIMS) indicated that the magnetic separation increased from 34% to 98% after microwave treatment for 90 s at 900 W. In the case of EAF dust, after microwave processing at 1100 W for 20 min, zinc removal increased from 64% to ~97%, depending on the mixture ratio and treatment time.

Keywords: dielectric properties, microwave heating, raw materials, secondary raw materials

Procedia PDF Downloads 95
4833 Tumor Detection of Cerebral MRI by Multifractal Analysis

Authors: S. Oudjemia, F. Alim, S. Seddiki

Abstract:

This paper shows the application of multifractal analysis as an additional aid in cancer diagnosis. Medical image processing is a very important discipline in which many existing methods are in search of solutions to real problems of medicine. In this work, we present results of multifractal analysis of brain MRI images. The purpose of this analysis was to separate healthy from cancerous tissue of the brain. A nonlinear method based on the multifractal detrended moving average (MFDMA), which is a generalization of detrended fluctuation analysis (DFA), is used for the detection of abnormalities in these images. The proposed method successfully separated the two types of brain tissue. It is important to note that this non-linear method was chosen because of the complexity and irregularity of tumor tissue, which linear and classical nonlinear methods seem unable to characterize completely. In order to show the performance of this method, we compared its results with those of the conventional box-counting method.
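
As a point of reference for the comparison mentioned above, the conventional box-counting estimate of fractal dimension can be computed as in the sketch below (a generic implementation, not the authors' code): count the occupied boxes N(s) of a binarized image at several box sizes s and take the slope of log N(s) versus log(1/s).

```python
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        h, w = binary_img.shape
        hs, ws = h // s * s, w // s * s                  # crop to a multiple of s
        blocks = binary_img[:hs, :ws].reshape(hs // s, s, ws // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()         # boxes containing tissue pixels
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```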

Keywords: irregularity, nonlinearity, MRI brain images, multifractal analysis, brain tumor

Procedia PDF Downloads 443
4832 A Multi-Criteria Model for Scheduling of Stochastic Single Machine Problem with Outsourcing and Solving It through Application of Chance Constrained

Authors: Homa Ghave, Parmis Shahmaleki

Abstract:

This paper presents a new multi-criteria stochastic mathematical model for single machine scheduling with outsourcing allowed. Multiple jobs are processed in batches, and for each batch, all of the jobs or a portion of them can be outsourced. The jobs have stochastic processing times and lead times, have deterministic due dates, and arrive randomly. Because of the stochastic nature of the processing times and lead times, we use chance constrained programming to model the problem. First, the problem is formulated as a stochastic program and then converted into a deterministic mixed-integer linear program. The objectives considered in the model are to minimize the maximum tardiness and the outsourcing cost simultaneously. Several procedures have been developed to deal with the multi-criteria problem. In this paper, we utilize the concept of satisfaction functions to incorporate the manager’s preferences. The proposed approach is tested on instances where the random variables are normally distributed.
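
As a generic illustration of the deterministic conversion used in chance constrained programming (not the authors' exact model), a tardiness-type constraint on independent, normally distributed processing times p_j ~ N(μ_j, σ_j²) can be rewritten with the standard normal quantile z_{1-α}:

```latex
\Pr\!\Big(\sum_j p_j x_j \le T\Big) \ge 1-\alpha
\quad\Longleftrightarrow\quad
\sum_j \mu_j x_j \;+\; z_{1-\alpha}\sqrt{\sum_j \sigma_j^2 x_j^2} \;\le\; T
```

With binary decision variables x_j the right-hand form is deterministic; the square-root term is then handled by a conic or piecewise-linear reformulation before entering a mixed-integer program.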

Keywords: single machine scheduling, multi-criteria mathematical model, outsourcing strategy, uncertain lead times and processing times, chance constrained programming, satisfaction function

Procedia PDF Downloads 264
4831 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks

Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin

Abstract:

Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice, radiologists consider multiple views of the mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images from three views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images – a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted using the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the method can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help to alleviate the burden on radiologists and serve as an additional layer of verification.
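
The core idea, one weight-shared encoder applied to every view with the embeddings fused for a single decision, can be sketched in PyTorch as below. The layer sizes and fusion head are illustrative assumptions; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class MultiViewSiamese(nn.Module):
    """Weight-shared CNN encoder applied to the main view, the ipsilateral
    view and the contralateral view; embeddings are concatenated and scored."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, emb_dim),
        )
        self.head = nn.Linear(3 * emb_dim, 1)

    def forward(self, main_view, ipsilateral, contralateral):
        z = [self.encoder(v) for v in (main_view, ipsilateral, contralateral)]
        return torch.sigmoid(self.head(torch.cat(z, dim=1)))

model = MultiViewSiamese()
views = [torch.randn(2, 1, 224, 224) for _ in range(3)]  # dummy batch of 3 views
scores = model(*views)                                    # malignancy probability per study
```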

Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network

Procedia PDF Downloads 138
4830 The Residual Effects of Special Merchandising Sections on Consumers' Shopping Behavior

Authors: Shih-Ching Wang, Mark Lang

Abstract:

This paper examines the secondary effects and consequences of special displays on subsequent shopping behavior. Special displays are studied as a prominent form of in-store or shopper marketing activity. Two experiments are performed using special value and special quality-oriented displays in an online simulated store environment. The impact of exposure to special displays on mindsets and resulting product choices is tested in a shopping task. Impact on store image is also tested. The experiments find that special displays do trigger shopping mindsets that affect product choices and shopping basket composition and value. Both intended and unintended positive and negative effects are found. Special value displays improve store price image but trigger a price-sensitive shopping mindset that causes more lower-priced items to be purchased, lowering total basket dollar value. Special natural food displays improve store quality image and trigger a quality-oriented mindset that causes fewer lower-priced items to be purchased, increasing total basket dollar value. These findings extend the theories of product categorization, mind-sets, and price sensitivity found in communication research into the retail store environment. Findings also warn retailers to consider the total effects and consequences of special displays when designing and executing in-store or shopper marketing activity.

Keywords: special displays, mindset, shopping behavior, price consciousness, product categorization, store image

Procedia PDF Downloads 283
4829 Translation Directionality: An Eye Tracking Study

Authors: Elahe Kamari

Abstract:

Research on the translation process has been conducted for more than 20 years, investigating various issues and using different research methodologies. Most recently, researchers have started to use eye tracking to study translation processes. They believed that the observable, measurable data that can be gained from eye tracking are indicators of unobservable cognitive processes happening in the translators’ minds during translation tasks. The aim of this study was to investigate directionality in translation processes through eye tracking. The following hypotheses were tested: 1) processing the target text requires more cognitive effort than processing the source text, in both directions of translation; 2) L2 translation tasks on the whole require more cognitive effort than L1 tasks; 3) cognitive resources allocated to the processing of the source text are higher in L1 translation than in L2 translation; 4) cognitive resources allocated to the processing of the target text are higher in L2 translation than in L1 translation; and 5) in both directions non-professional translators invest more cognitive effort in translation tasks than do professional translators. The performance of a group of 30 male professional translators was compared with that of a group of 30 male non-professional translators. All the participants translated two comparable texts, one into their L1 (Persian) and the other into their L2 (English). The eye tracker measured gaze time, average fixation duration, total task length, and pupil dilation. These variables are assumed to measure the cognitive effort allocated to the translation task. The data derived from eye tracking only confirmed the first hypothesis, which was confirmed by all the relevant indicators: gaze time, average fixation duration, and pupil dilation. The second hypothesis, that L2 translation tasks require the allocation of more cognitive resources than L1 translation tasks, was not confirmed by all four indicators. The third hypothesis, that source text processing requires more cognitive resources in L1 translation than in L2 translation, and the fourth hypothesis, that target text processing requires more cognitive effort in L2 translation than in L1 translation, were not confirmed. It seems that source text processing in L2 translation can be just as demanding as in L1 translation. The final hypothesis, that non-professional translators allocate more cognitive resources to the same translation tasks than do professionals, was partially confirmed. One of the indicators, average fixation duration, indicated higher cognitive effort-related values for professionals.

Keywords: translation processes, eye tracking, cognitive resources, directionality

Procedia PDF Downloads 463
4828 Video Processing of a Football Game: Detecting Features of a Football Match for Automated Calculation of Statistics

Authors: Rishabh Beri, Sahil Shah

Abstract:

We applied a range of filters and processing steps in order to extract the various features of a football game, such as the field lines of the football pitch. Another important aspect was the detection of the players on the field and tagging them according to their teams, distinguished by their jersey colours. This extracted information about the players and the field allowed us to create a virtual field that consists of the playing field and the players mapped to their locations in it.
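
The team-tagging step can be illustrated with simple HSV colour thresholding in OpenCV; the jersey colour ranges and the minimum blob area below are hypothetical values that would be tuned per match.

```python
import cv2
import numpy as np

frame = cv2.imread("match_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

team_ranges = {
    "team_red":  ((0, 120, 70), (10, 255, 255)),
    "team_blue": ((100, 120, 70), (130, 255, 255)),
}

detections = {}
for team, (lo, hi) in team_ranges.items():
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be players, as bounding boxes.
    detections[team] = [cv2.boundingRect(c) for c in contours
                        if cv2.contourArea(c) > 200]
```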

Keywords: detect, football, players, virtual

Procedia PDF Downloads 331
4827 Changing Body Ideals of Ethnically Diverse Gay and Heterosexual Men and the Proliferation of Social and Entertainment Media

Authors: Cristina Azocar, Ivana Markova

Abstract:

A survey of 565 male undergraduates examined the effects of exposure to social networking sites and entertainment media on young men’s body image. Exposure to social and entertainment media was found to have negative effects on men’s body satisfaction, social comparison, and thin-ideal internalization. Findings indicated significant differences between men who were more exposed to social and entertainment media and those who were not as exposed. Consistent with past studies, gay men were found to be more dissatisfied with their bodies than straight men. Gay men compared themselves to other better-looking individuals and internalized ideal body types seen in media significantly more than their straight counterparts. Surprisingly, straight men seem to care as much about their physical attractiveness/appearance as gay men do, but only in public settings such as at the beach, at athletic events (including gyms), and at social events. Although on average the ethnic groups were more similar than different, small but significant differences occurred, with Asian men indicating significantly higher body dissatisfaction than their White/European and Middle Eastern/Arab counterparts. The study increases our knowledge about SNS and entertainment media use and its associated body image and body satisfaction effects among low-income ethnic minority men.

Keywords: body dissatisfaction, body image, entertainment media, gay men, race and ethnicity, social economic status, social comparison, social media

Procedia PDF Downloads 133
4826 DeepNIC, a Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? Current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison on several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 125
4825 Relationship among Teams' Information Processing Capacity and Performance in Information System Projects: The Effects of Uncertainty and Equivocality

Authors: Ouafa Sakka, Henri Barki, Louise Cote

Abstract:

Uncertainty and equivocality are defined in the information processing literature as two task characteristics that require different information processing responses from managers. As uncertainty often stems from a lack of information, addressing it is thought to require the collection of additional data. On the other hand, as equivocality stems from ambiguity and a lack of understanding of the task at hand, addressing it is thought to require rich communication between those involved. Past research has provided weak to moderate empirical support for these hypotheses. The present study contributes to this literature by defining uncertainty and equivocality at the project level and investigating their moderating effects on the association between several project information processing constructs and project performance. The information processing constructs considered are the amount of information collected by the project team, and the richness and frequency of formal communications among the team members to discuss the project’s follow-up reports. Data on 93 information system development (ISD) project managers were collected in a questionnaire survey and analyzed via the Fisher test for correlation differences. The results indicate that the highest project performance levels were observed in projects characterized by high uncertainty and low equivocality, in which project managers were provided with detailed and updated information on project costs and schedules. In addition, our findings show that information about user needs and technical aspects of the project is less useful for managing projects where uncertainty and equivocality are high. Further, while the strongest positive effect of interactive use of follow-up reports on performance occurred in projects where both uncertainty and equivocality levels were high, its weakest effect occurred when both of these were low.

Keywords: uncertainty, equivocality, information processing model, management control systems, project control, interactive use, diagnostic use, information system development

Procedia PDF Downloads 294
4824 Nostalgia in Photographed Books for Children – the Case of Photography Books of Children in the Kibbutz

Authors: Ayala Amir

Abstract:

The paper presents interdisciplinary research which draws on the literary study and the cultural study of photography to explore a literary genre defined by nostalgia – the photographed book for children. This genre, which was popular in the second half of the 20th century, presents the romantic, nostalgic image of childhood created in the visual arts in the 18th century (as suggested by Ann Higonnet). At the same time, it capitalizes on the nostalgia inherent in the event of photography as formulated by Jennifer Green-Lewis: photography frames a moment in the present while transforming it into a past longed for in the future. Unlike Freudian melancholy, nostalgia is an affect that enables representation by acknowledging the loss and containing it in the very experience of the object. The representation and preservation of the lost object (nature, childhood, innocence) are at the center of the genre of children's photography books – a modern version of the ancient pastoral. In it, the unique synergy of word and image results in a nostalgic image of childhood in an era already conquered by modernization. The nostalgic effect works both in the representation of space – an Edenic image of nature already shadowed by its demise – and of time – an image of childhood imbued with what Gill Bartholnyes calls the "looking backward aesthetics" – under the sign of loss. Little critical attention has been devoted to this genre, with the exception of the work of Bettina Kümmerling-Meibauer, who noted the nostalgic effect of the well-known series of photography books by Astrid Lindgren and Anna Riwkin-Brick. This research aims to elaborate Kümmerling-Meibauer's approach using theories from the study of photography, word-image studies, as well as current studies of childhood. These theoretical perspectives are implemented in a case study of photography books created in one of the most innovative social structures of our time – the Israeli kibbutz. This communal way of life designed a society in which children would experience their childhood in a parentless rural environment that would save them from the fate of the Oedipal fall. It is suggested that in documenting these children in a fictional format, photographers and writers, images and words, cooperated in creating nostalgic works situated on the border between nature and culture, imagination and reality, utopia and its realization in history.

Keywords: nostalgia, photography, childhood, children's books, kibbutz

Procedia PDF Downloads 142
4823 Application of Hyperspectral Remote Sensing in Sambhar Salt Lake, A Ramsar Site of Rajasthan, India

Authors: Rajashree Naik, Laxmi Kant Sharma

Abstract:

Sambhar Lake is the largest inland salt lake in India, declared a Ramsar site on 23 March 1990. Owing to its high salinity and alkalinity, its biodiversity richness is contributed by haloalkaliphilic flora and fauna, along with diverse land cover including waterbody, wetland, salt crust, saline soil, vegetation, scrubland, and barren land, which welcomes large numbers of flamingos and other migratory birds for winter harboring. But with the gradual increase in irrational salt extraction activities, this ecological diversity is at stake. There is an urgent need to assess the ecosystem. Advanced technologies like remote sensing and GIS have made it possible to look into the past and compare it with the present for the judicious planning and management of natural resources in the future. This paper presents the vegetation of the typical inland lake environment of the Sambhar wetland using satellite data from NASA's EO-1 Hyperion sensor, launched in November 2000. The sensor, with a spectral range of 0.4 to 2.5 micrometres at approximately 10 nm spectral resolution across 242 bands, 30 m spatial resolution, and a 705 km orbit, was used to produce a vegetation map for a portion of the wetland. The vegetation map was tested for classification accuracy against a pre-existing detailed GIS wetland vegetation database. Though the accuracy varied greatly for the different classes, the algal communities, which are the major source of food for flamingos, were successfully identified. The results from this study have practical implications for the use of spaceborne hyperspectral image data that are now becoming available. Practical limitations of using these satellite data for wetland vegetation mapping include inadequate spatial resolution, the complexity of image processing procedures, and the lack of stereo viewing.

Keywords: Algal community, NASA’s EO-1 Hyperion, salt-tolerant species, wetland vegetation mapping

Procedia PDF Downloads 135
4822 An Efficient Discrete Chaos in Generalized Logistic Maps with Applications in Image Encryption

Authors: Ashish Ashish

Abstract:

In the last few decades, the discrete chaos of difference equations has gained massive attention from academicians and scholars due to its tremendous applications in many branches of science, such as cryptography, traffic control models, secure communications, weather forecasting, and engineering. In this article, a generalized logistic discrete map is established and discrete chaos is reported through period-doubling bifurcation, a period-three orbit, and the Lyapunov exponent. It is interesting to see that the generalized logistic map exhibits superior chaos due to the presence of an extra degree of freedom provided by an order parameter. The period-doubling bifurcation and Lyapunov exponent are demonstrated for some particular values of the parameter, and the discrete chaos is determined in the sense of Devaney's definition of chaos, theoretically as well as numerically. Moreover, the study discusses an extended chaos-based image encryption and decryption scheme in cryptography using this novel system. Surprisingly, a larger key space for coding and more sensitive dependence on initial conditions are examined for the encryption and decryption of text messages, images, and videos, which secure the system strongly against external cyber attacks, coding attacks, statistical attacks, and differential attacks.
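
For orientation, the classical logistic map x_{n+1} = r·x_n(1 − x_n) already illustrates the quantities discussed above; its Lyapunov exponent is the long-run average of ln|r(1 − 2x_n)| and turns positive when the dynamics become chaotic. The sketch below computes it for the standard map only; the paper's generalized map adds an extra order parameter that is not reproduced here.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.4, n_transient=500, n_iter=5000):
    """Lyapunov exponent of the classical logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(n_transient):          # discard transient behaviour
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_iter

print(lyapunov_logistic(3.5))   # negative: periodic (period-4) regime
print(lyapunov_logistic(4.0))   # positive (~ln 2): chaotic regime
```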

Keywords: chaos, period-doubling, logistic map, Lyapunov exponent, image encryption

Procedia PDF Downloads 151
4821 The Artificial Intelligence Technologies Used in PhotoMath Application

Authors: Tala Toonsi, Marah Alagha, Lina Alnowaiser, Hala Rajab

Abstract:

This report is about the PhotoMath app, an AI application that uses image recognition technology, specifically optical character recognition (OCR) algorithms. The OCR algorithm translates the image into a mathematical equation, and the app automatically provides a step-by-step solution. The application supports decimals, basic arithmetic, fractions, linear equations, and several functions such as logarithms. Testing was conducted to examine the usage of this app; results were collected by surveying ten participants and then analyzed. This paper seeks to answer the question: to what extent are the artificial intelligence features accurate, and how fast is the processing in this app. It is hoped this study will inform users about the efficiency of AI in PhotoMath.
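
The generic OCR-then-solve pipeline the report describes can be imitated with open-source tools, as in the sketch below; PhotoMath's own recognizer and solver are proprietary, so pytesseract and SymPy merely stand in for them, and the file name is a placeholder.

```python
from PIL import Image
import pytesseract
import sympy

# Read the equation text from a photo, e.g. "3*x + 5 = 20".
text = pytesseract.image_to_string(Image.open("equation.png")).strip()

lhs, rhs = text.split("=")
x = sympy.symbols("x")
solution = sympy.solve(sympy.Eq(sympy.sympify(lhs), sympy.sympify(rhs)), x)
print(solution)   # -> [5] for the example above
```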

Keywords: photomath, image recognition, app, OCR, artificial intelligence, mathematical equations

Procedia PDF Downloads 171
4820 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP

Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas

Abstract:

In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is to find an effective shape descriptor which can discriminate between similar shapes adequately. To address this problem, we propose a new shape descriptor for 3D shape models by combining multi-resolution analysis with a multi-block center-symmetric local binary pattern (CS-LBP) operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-viewed 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several image levels from each 2D rendered image. Then, overlapped sub-images are computed for each image level of a multi-resolution image. Our unique multi-block CS-LBP comes next: it allows the center to be composed of an m-by-n block of pixels instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both ‘depth-buffer’ and ‘silhouette’ rendering. Finally, we concatenate all the feature vectors into a one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments, we demonstrate that our proposed 3D shape descriptor outperforms previous methods on a benchmark dataset.
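
For reference, the underlying CS-LBP operator codes each pixel by thresholding the differences of its four centre-symmetric neighbour pairs, giving 16 possible codes instead of the 256 of plain LBP. The sketch below implements the single-pixel-centre form at radius 1; the multi-block extension described in the abstract (centres of m-by-n pixel blocks) is not shown.

```python
import numpy as np

def cs_lbp(img, threshold=0.01):
    """CS-LBP codes for an image, using the 8-neighbourhood at radius 1."""
    img = img.astype(np.float64)
    # Offsets ordered so that pairs (i, i+4) are centre-symmetric.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint8)
    for i in range(4):
        dy1, dx1 = offs[i]
        dy2, dx2 = offs[i + 4]
        a = pad[1 + dy1:1 + dy1 + h, 1 + dx1:1 + dx1 + w]
        b = pad[1 + dy2:1 + dy2 + h, 1 + dx2:1 + dx2 + w]
        code |= ((a - b) > threshold).astype(np.uint8) << i
    return code   # a 16-bin histogram of these codes describes a sub-image
```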

Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images

Procedia PDF Downloads 445
4819 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit

Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira

Abstract:

Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, an operation with highly variable quality and performance is expected even when good homogenization work has been carried out before feeding the processing plants. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw material grouped by them, finally providing a reference of operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even with an optimal reference of metallurgical recovery and product quality, results in lower production costs, better use of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable alternative for the standardization and improvement of mineral processing units. Clustering methods based on decision trees and K-Means were employed, together with algorithms based on benchmarking theory and criteria defined by the process team, in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the resulting algorithm. The results were measured through the average time of adjustment and stabilization of the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as in the time to reach the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and life optimization of the mineral deposit.
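
A minimal sketch of the grouping step is shown below using scikit-learn's K-Means on scaled blend-composition features; the feature names and values are hypothetical, and in the described workflow each resulting cluster would be linked to its benchmark operating settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative ore-pile features: %Nb2O5, %P2O5, %Fe2O3, %BaO (made-up values).
X = np.array([
    [0.85, 2.1, 28.0, 1.4],
    [0.92, 1.8, 25.5, 1.1],
    [0.60, 3.4, 31.2, 2.0],
    [0.58, 3.1, 30.8, 2.2],
    [1.05, 1.5, 22.0, 0.9],
])

scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)   # piles sharing a label would share the same reference settings
```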

Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing

Procedia PDF Downloads 143
4818 Nutritional Potential and Functionality of Whey Powder Influenced by Different Processing Temperature and Storage

Authors: Zarmina Gillani, Nuzhat Huma, Aysha Sameen, Mulazim Hussain Bukhari

Abstract:

Whey is an excellent food ingredient owing to its high nutritive value and its functional properties. However, the composition of whey varies depending on the composition of the milk, the processing conditions, the processing method, and its whey protein content. The aim of this study was to prepare a whey powder from raw whey and to determine the influence of different processing temperatures (160 and 180 °C) on its physicochemical and functional properties during 180 days of storage, and on whey protein denaturation. Results showed that temperature significantly (P < 0.05) affects the pH, acidity, non-protein nitrogen (NPN), protein, total soluble solids, fat, and lactose contents. Significantly (p < 0.05) higher foaming capacity (FC), foam stability (FS), and whey protein nitrogen index (WPNI), and a lower turbidity and solubility index (SI), were observed in whey powder processed at 160 °C compared to whey powder processed at 180 °C. During the 180 days of storage, slow but progressive changes were noticed in the physicochemical and functional properties of the whey powder. Reverse phase-HPLC analysis revealed a significant (P < 0.05) effect of temperature on the whey protein contents. Denaturation of β-lactoglobulin is followed by α-lactalbumin, casein glycomacropeptide (CMP/GMP), and bovine serum albumin (BSA).

Keywords: whey powder, temperature, denaturation, reverse phase, HPLC

Procedia PDF Downloads 299
4817 Predictive Modelling Approaches in Food Processing and Safety

Authors: Amandeep Sharma, Digvaijay Verma, Ruplal Choudhary

Abstract:

Food processing is an activity across the globe that helps in the better handling of agricultural produce, including dairy, meat, and fish. The operations carried out in the food industry include checking raw material quality and authenticity; sorting and grading; processing into various products using thermal treatments – heating, freezing, and chilling; packaging; and storage at the appropriate temperature to maximize the shelf life of the products. All this is done to safeguard the food products and to ensure their distribution up to the consumer. Approaches to developing predictive models based on mathematical or statistical tools, as well as empirical models, have been reported for various milk processing activities, including plant maintenance and wastage. Recently, AI has become the key factor of the fourth industrial revolution. AI plays a vital role in the food industry, not only in quality and food security but also in different areas such as manufacturing, packaging, and cleaning. A new conceptual model was developed, which shows that a smaller sample size would be required, as only spectra would be needed to predict the other values; this leads to savings on raw materials and chemicals otherwise used for experimentation during research and new product development. It would be a futuristic approach if these tools could be further combined with mobile phones through software development for real-time application in the field for quality checks and traceability of the product.

Keywords: predictive modelling, ANN, AI, food

Procedia PDF Downloads 82
4816 Correlation Mapping for Measuring Platelet Adhesion

Authors: Eunseop Yeom

Abstract:

Platelets can be activated by the surrounding blood flow where a blood vessel is narrowed as a result of atherosclerosis. Numerous studies have been conducted to identify the relation between platelet activation and thrombus formation. To measure platelet adhesion, this study proposes an image analysis technique. Blood samples are delivered into the microfluidic channel, and platelets are then activated by a stenotic micro-channel with 90% severity. By applying the proposed correlation mapping, which visualizes the decorrelation of the streaming blood flow, the area of adhered platelets (APlatelet) was estimated without labeling the platelets. In order to evaluate the performance of correlation mapping in detecting platelet adhesion, the effect of tile size was investigated by calculating 2D correlation coefficients between binary images obtained by manual labeling and by the correlation mapping method, for square tiles of different sizes ranging from 3 to 50 pixels. The maximum 2D correlation coefficient is observed at the optimum tile size of 5×5 pixels. As the area of platelet adhesion increases, the platelets plug the channel and there is only a small amount of blood flow. This image analysis could provide new insights for a better understanding of the interactions between platelet aggregation and blood flow under various physiological conditions.
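
The evaluation metric mentioned above, the 2D correlation coefficient between the correlation-mapping result and the manually labeled binary mask, can be computed as in the sketch below (a generic corr2-style implementation, not the authors' code).

```python
import numpy as np

def corr2(a, b):
    """2D correlation coefficient between two equally sized images/masks."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

# Example: compare an automatic adhesion mask against a manual one.
auto_mask = np.random.rand(256, 256) > 0.5
manual_mask = auto_mask.copy()
print(corr2(auto_mask, manual_mask))   # 1.0 for identical masks
```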

Keywords: platelet activation, correlation coefficient, image analysis, shear rate

Procedia PDF Downloads 335