Search results for: extracting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 397

157 The Communication Between Visual Aesthetic Criteria of Products, User Experience, and Social Sustainability: A Study of Street Furniture

Authors: Hassan Sadeghi Naeini, Mozhgan Sabzehparvar, Mahdiye Jafarnezhad, Neda Brumandi, Mohammad Parsa Sabzehparvar

Abstract:

This study aims to discover the relationship between the factors of aesthetics, user experience, and social sustainability in the design of street furniture, and the impact of these factors on citizens’ emotional arousal, encouraging them to prefer and use street furniture. The method used in this research involved extracting indicators related to each of the factors of aesthetics, user experience, and social sustainability from the literature, and then selecting the indicators relevant to the purpose of the research in consultation with industrial design experts and architects. Finally, 9 variables for aesthetics, 7 variables for user experience, and 5 variables for evaluating social sustainability were selected. To identify the effect of each of these factors on street furniture and to recognize their relationships with each other, a 10-point prioritization questionnaire, from 1 (least important) to 10 (most important), was answered by architects and industrial designers on the “Pors Line” online platform for three consecutive weeks; a total of 82 people answered the questionnaire. The results showed that by using aesthetic factors in the design of street furniture and thereby positively affecting users’ experience of the product, we can expect behavioral outcomes such as constructive interaction and product acceptance, so that user satisfaction with street furniture and optimal interaction in the urban environment are achieved and, in turn, the requirements of social sustainability are met.

Keywords: visual aesthetic, user experience, social sustainability, street furniture

Procedia PDF Downloads 92
156 Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Source Information Fusion

Authors: Rui Liu, Klaus Greve

Abstract:

Navigation information is a kind of dynamic geographic information, and a navigation information system is a special kind of geographic information system. At present, there is much research on the centralized management and cross-integration of basic geographic information. However, the idea of information integration and sharing has not been deeply applied to research on navigation information services, and the lack of mature mechanisms for navigation target coordination and navigation information sharing under given navigation tasks has greatly affected the reliability and scientific rigor of navigation services such as path planning. Considering this, the project intends to study multi-source information fusion and a multi-objective united navigation information interaction mechanism: first, investigate the actual needs of navigation users in different areas and establish a preliminary model of navigation information classification and importance levels; then analyze the characteristics of remote sensing and GIS vector data and design a fusion algorithm aimed at improving positioning accuracy and extracting navigation environment data. Finally, the project intends to analyze the features of the navigation information of land, sea, and air navigation targets, design a united navigation data standard and a navigation information sharing model for given navigation tasks, and establish a test navigation system for united navigation simulation experiments. The aim of this study is to explore the theory of united navigation services and optimize the navigation information service model, which will lay the theoretical and technological foundation for the united navigation of land, sea, and air targets.

Keywords: information fusion, united navigation, dynamic path planning, navigation information visualization

Procedia PDF Downloads 282
155 Evaluation of Residual Stresses in Human Face as a Function of Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design and computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting an organ’s shape from medical imaging does not produce any information regarding the residual stresses existing in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses may cause erroneous results in numerical simulations, and accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed; these stresses can counteract the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons estimate tissue changes after surgery.

Keywords: finite element method, growth, residual stress, soft tissue

Procedia PDF Downloads 266
154 Contemporary Vision of Islamic Motifs in Decorating Products

Authors: Shuruq Ghazi Nahhas

Abstract:

Islamic art is a decorative art that relies on repeating motifs in various shapes to cover different surfaces. Each motif has its own characteristics and style that may reflect different Islamic periods, such as the Umayyad, Abbasid, Fatimid, Seljuk, Nasrid, Ottoman, and Safavid periods. These were the most powerful periods and played an important role in developing Islamic motifs, yet most of the motifs of this Islamic heritage have not been used in new applications. This research focused on reviving the vegetal Islamic motifs found in Islamic heritage and redesigning them in a new format to decorate various products, including scarfs, cushions, coasters, wallpaper, wall art, and boxes. The scarf was chosen as one of these decorative products because it is used as an accessory that adds aesthetic value to fashion. A descriptive-analytical method was used for this research. The process started with extracting and analyzing the original motifs; new motifs were then created by simplifying, deleting, or adding elements based on the original structure, and repeated patterns were created and applied to decorative products. The findings indicated that repeating patterns based on different structures creates unlimited patterns; that changing the elements of the motifs of a pattern adds new characteristics to it; that creating frames from elements of the repeated motifs adds aesthetic and contemporary value to decorative products; and that varying the combinations of colors creates unlimited variations of each pattern. In the end, reviving Islamic motifs in a contemporary vision enriches decorative products with the aesthetic, artistic, and historical values of different Islamic periods, making these products valuable items that add uniqueness to their surroundings.

Keywords: Islamic motifs, contemporary patterns, scarfs, decorative products

Procedia PDF Downloads 157
153 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work, we use the discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes); therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and of the cross-sections in particular, it is necessary to involve computational mechanics to simulate the dynamics numerically. When numerical computational techniques are used, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. To extract the system’s dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-domain Proper Orthogonal Decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of thin-walled basic structures, which are ideal for forming the basis of a systematic study of coupled dynamics in structures of complex geometry.
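
For readers unfamiliar with the technique, POD modes and their energies can be computed from a snapshot matrix via the singular value decomposition. A minimal Python sketch, with synthetic snapshot data standing in for the paper's finite element databases (all names and values are illustrative, not the authors' implementation):

```python
import numpy as np

# Synthetic stand-in for a finite element database: each column is a
# snapshot of a displacement field at one time instant.
n_dof, n_snapshots = 200, 80
t = np.linspace(0.0, 1.0, n_snapshots)
x = np.linspace(0.0, 1.0, n_dof)[:, None]
snapshots = (np.sin(np.pi * x) @ np.cos(2 * np.pi * t)[None, :]
             + 0.1 * np.sin(3 * np.pi * x) @ np.sin(6 * np.pi * t)[None, :])

# POD via SVD of the mean-centred snapshot matrix: columns of U are the
# spatial POD modes, singular values give each mode's energy content.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# The energy fraction per mode indicates how strongly the motion fields couple.
energy = s**2 / np.sum(s**2)
print("Leading POD mode energies:", np.round(energy[:4], 4))
```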

Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin-walled beams

Procedia PDF Downloads 416
152 The Effects of Extraction Methods on Fat Content and Fatty Acid Profiles of Marine Fish Species

Authors: Yesim Özogul, Fethiye Takadaş, Mustafa Durmus, Yılmaz Ucar, Ali Rıza Köşker, Gulsun Özyurt, Fatih Özogul

Abstract:

It has been well documented that polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), have beneficial effects on health regarding the prevention of cardiovascular diseases, cancer, and autoimmune disorders, the development of the brain and retina, and the treatment of major depressive disorder. An adequate intake of omega PUFAs is therefore essential, and marine fish are generally the richest sources of PUFAs in the human diet. Thus, this study was conducted to evaluate the efficiency of different extraction methods (Bligh and Dyer, Soxhlet, microwave, and ultrasonic) on the fat content and fatty acid profiles of marine fish species (Mullus barbatus, Upeneus moluccensis, Mullus surmuletus, Anguilla anguilla, Pagellus erythrinus, and Saurida undosquamis). The fish were caught by trawl in the Mediterranean Sea and immediately iced, then transported to the laboratory on ice and stored at -18 °C in a freezer until the day of analysis. After lipid extraction by the different methods, the lipid samples were converted to their constituent fatty acid methyl esters. The fatty acid composition was analysed on a GC Clarus 500 with an autosampler (Perkin Elmer, Shelton, CT, USA) equipped with a flame ionization detector and a fused silica capillary SGE column (30 m x 0.32 mm ID, BP20, 0.25 µm film thickness; USA). The results showed that there were significant differences (P < 0.05) in the fatty acids of all species and that the extraction methods affected the fat contents and fatty acid profiles of the fish species.

Keywords: extraction methods, fatty acids, marine fish, PUFA

Procedia PDF Downloads 264
151 Iris Cancer Detection System Using Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer. Image processing techniques improve diagnosis by enhancing the quality of the images so that physicians can diagnose properly, while neural networks can help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma. The suggested system combines both image processing techniques and neural networks: the images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris, the sclera circles, and the cancer. Principal component analysis is used to reduce the image size and to extract features. These features are then used as inputs to a neural network, which decides whether the eye is cancerous or not based on the experience it acquires over many training iterations on normal and abnormal eye images during the training phase. Normal images were obtained from a public database available on the internet, “Mile Research”, while the abnormal ones were obtained from another database, “eyecancer”. The experimental results for the proposed system show a high accuracy of 100% for detecting cancer and making the right decision.
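
A minimal sketch of the Prewitt edge detection and PCA feature-reduction steps described above, using random stand-in images rather than the paper's eye databases (SciPy and scikit-learn are assumed; this is not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA

def prewitt_edges(gray):
    """Prewitt edge magnitude of a 2-D grayscale image."""
    kx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)
    ky = kx.T
    gx = convolve(gray, kx)   # horizontal gradient
    gy = convolve(gray, ky)   # vertical gradient
    return np.hypot(gx, gy)

# Stand-in data: 40 random 64x64 "eye images" (replace with real scans).
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
edges = np.stack([prewitt_edges(im) for im in images])

# PCA reduces each edge map to a short feature vector for the neural classifier.
features = PCA(n_components=10).fit_transform(edges.reshape(len(edges), -1))
print(features.shape)  # (40, 10)
```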

Keywords: iris cancer, intraocular melanoma, cancerous, Prewitt edge detection algorithm, sclera

Procedia PDF Downloads 501
150 An Online Corpus-Based Bilingual Collocations Dictionary for Second/Foreign Language Learners

Authors: Adriane Orenha-Ottaiano

Abstract:

Collocations are conventionalized, recurrent, and arbitrary lexical combinations. Because they are highly specific to a particular language and may be contextually restricted, collocations pose a problem to EFL/ESL learners with regard to production or encoding. Taking that into account, the compilation of monolingual and bilingual collocations dictionaries for this audience is highly significant. Thus, the aim of this paper is to discuss the compilation of an online corpus-based bilingual collocations dictionary in the English-Portuguese and Portuguese-English directions. In the first phase, with the use of WordSmith Tools, collocations were extracted from a Translation Learner Corpus (TLC), a parallel corpus made up of university students’ translations in the Portuguese-English direction, with approximately 100,000 words. In the second stage, based on the keywords analyzed in the TLC, more collocational patterns were extracted using the Sketch Engine. In order to include more collocations and to ensure that dictionary users have access to frequent and recurrent collocations, we also use the frequency list from the Corpus of Contemporary American English to extract further patterns. The dictionary covers all types of collocations (verbal, noun, adjectival, and adverbial) in order to help the target audience use them more accurately and productively; so far the dictionary has more than 330 entries and more than 3,500 extracted collocations. The online format makes it possible to incorporate more collocational information, both qualitatively and quantitatively, and to include more examples than conventional printed collocations dictionaries. As the first bilingual collocations dictionary in these directions, it is hoped to meet learners’ collocational needs, since the collocations have been selected according to learners’ difficulties with their use.
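
To make the extraction idea concrete, here is a toy sketch that scores adjacent word pairs by pointwise mutual information (PMI), a common collocation measure; it is illustrative only, since the paper itself relies on WordSmith Tools and the Sketch Engine:

```python
import math
from collections import Counter
from itertools import islice

corpus = ("the learners made a serious mistake and the teacher "
          "made a decision to give strong advice").split()

# Frequency counts for words and for adjacent word pairs (bigrams).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, islice(corpus, 1, None)))
n = len(corpus)

def pmi(pair):
    """High PMI suggests a conventionalized, recurrent combination."""
    w1, w2 = pair
    return math.log2((bigrams[pair] / (n - 1)) /
                     ((unigrams[w1] / n) * (unigrams[w2] / n)))

for pair in sorted(bigrams, key=pmi, reverse=True)[:5]:
    print(pair, round(pmi(pair), 2))
```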

Keywords: corpus-based collocations dictionary, collocations, bilingual collocations dictionary, collocational patterns

Procedia PDF Downloads 308
149 Optimization of Extraction Conditions and Characteristics of Scale Collagen from Sardine: Sardina pilchardus

Authors: F. Bellali, M. Kharroubi, M. Loutfi, N. Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates a large amount of byproducts, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Scales from Sardina pilchardus resulting from processing operations have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, upgrading sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source of collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of sardine, Sardina pilchardus. An experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM); the second part focuses on demineralization with HCl solution or EDTA; and the last establishes the optimal conditions for isolating collagen from fish scales by solvent extraction. The basic principle of RSM is to determine model equations that describe the interrelations between the independent and dependent variables.
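
A minimal sketch of the RSM principle stated in the closing sentence: fitting a second-order model to experimental runs by least squares. The run data below (NaOH concentration, treatment time, protein removal) are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Hypothetical deproteinization runs: x1 = NaOH concentration (M),
# x2 = treatment time (h), y = protein removal (%).
x1 = np.array([0.5, 0.5, 1.0, 1.0, 0.75, 0.75, 0.75])
x2 = np.array([1.0, 3.0, 1.0, 3.0, 2.0, 2.0, 2.0])
y = np.array([62.0, 70.0, 74.0, 78.0, 80.0, 79.5, 80.5])

# Second-order RSM model:
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("Fitted RSM coefficients:", np.round(coef, 2))
```

The fitted quadratic surface can then be maximized (analytically or on a grid) to locate the optimal extraction conditions.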

Keywords: Sardina pilchardus, scales, valorization, collagen extraction, response surface methodology

Procedia PDF Downloads 410
148 Investigation of Type and Concentration Effects of Solvent on Chemical Properties of Saffron Edible Extract

Authors: Sharareh Mohseni

Abstract:

Purpose: The objective of this study was to find a suitable solvent to produce saffron edible extract with improved chemical properties. Design/methodology/approach: Dried and pulverized stigmas of C. sativus L. (10 g) were extracted with 300 ml of solvents, including distilled water (DW), ethanol/DW, methanol/DW, propylene glycol/DW, heptane/DW, and hexane/DW, for 3 days at 25 °C and then centrifuged at 3000 rpm. The extracts were evaporated using a rotary evaporator at 40 °C. The fiber- and solvent-free extracts were then analyzed by UV spectrophotometer to determine the saffron quality parameters crocin, picrocrocin, and safranal. Findings: A distilled water/ethanol mixture as the extraction solvent caused larger amounts of the plant constituents to diffuse into the extract compared to the other treatments and the control. Polar solvents, including distilled water, ethanol, and propylene glycol (but not methanol), were more effective in extracting crocin, picrocrocin, and safranal than non-polar solvents. Social implications: Due to its enhanced color and flavor, saffron extract is economical compared to natural saffron. Saffron extract saves preparation time and reduces the amount of saffron required to impart the same flavor as dry saffron. A liquid extract is easier to use and standardize in food preparations than dried stigmas and can be dosed precisely. Originality/value: No research has been done on the production of saffron edible extract using the solvents studied in this survey; the novelty of this research is high, and the results can be applied industrially.

Keywords: Crocus sativus L., saffron extract, solvent extraction, distilled water

Procedia PDF Downloads 444
147 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, and spatial analysis. A laser scanning system generates an irregularly spaced three-dimensional point cloud. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.), and removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data, and the presented filter utilizes a weight function to allocate a weight to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples; the average root mean square error (RMSE) is 0.35 m.
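
A minimal sketch of the spline-plus-weights idea on a synthetic 1-D profile; the paper works on full 3-D point clouds, and the weighting rule and parameter values below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic ALS profile: smooth terrain plus high canopy returns.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 100, 300))
terrain = 0.05 * x + 2 * np.sin(x / 15)
z = terrain + rng.normal(0, 0.1, x.size)
canopy = rng.choice(x.size, 60, replace=False)
z[canopy] += rng.uniform(3, 20, 60)   # vegetation returns sit above the ground

# Iteratively refit a weighted smoothing spline, down-weighting points that
# sit well above the current surface (likely canopy, not terrain).
weights = np.ones_like(z)
for _ in range(5):
    spline = UnivariateSpline(x, z, w=weights, s=4.0 * x.size)
    residual = z - spline(x)
    weights = np.where(residual > 0.3, 1e-3, 1.0)

ground = np.abs(z - spline(x)) < 0.5   # points near the final surface = terrain
print("points kept as terrain:", int(ground.sum()), "of", x.size)
print("RMSE vs true terrain (m):",
      round(float(np.sqrt(np.mean((spline(x) - terrain) ** 2))), 3))
```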

Keywords: airborne laser scanning, digital terrain models, filtering, forested areas

Procedia PDF Downloads 135
146 Ultrasonic Extraction of Phenolics from Leaves of Shallots and Peels of Potatoes for Biofortification of Cheese

Authors: Lila Boulekbache-Makhlouf, Fatiha Brahmi

Abstract:

This study was carried out with the aim of enriching fresh cheese with two food by-products: shallot leaves and potato peels. First, the conditions for extracting the total polyphenols using ultrasound were optimized; then, the total polyphenol (TPP) and flavonoid contents and the antioxidant activity were evaluated for the extracts obtained under the optimal parameters. We also carried out physicochemical, microbiological, and sensory analyses of the cheese produced. The maximum TPP value of shallot leaves, 70.44 mg gallic acid equivalent (GAE)/g of dry matter (DM), was reached with 40% (v/v) ethanol, an extraction time of 90 min, and a temperature of 10 °C, while the maximum TPP content of potato peels, 45.03 ± 4.16 mg GAE/g DM, was obtained using an ethanol/water mixture (40%, v/v), a time of 30 min, and a temperature of 60 °C; the flavonoid contents were 13.99 and 7.52 mg quercetin equivalent (QE)/g DM, respectively. From the antioxidant tests, we deduced that potato peels present the higher antioxidant power, with IC50 values (the extract concentration causing 50% inhibition) of 125.42 ± 2.78 μg/mL for 2,2-diphenyl-1-picrylhydrazyl (DPPH), 87.21 ± 7.72 μg/mL for phosphomolybdate, and 200.77 ± 13.38 μg/mL for iron chelation, compared with 204.29 ± 0.09, 45.85 ± 3.46, and 1004.10 ± 145.73 μg/mL, respectively, for shallot leaves. The physicochemical analyses showed that the formulated cheese complied with standards, the microbiological analyses showed that its hygienic quality was satisfactory, and according to the sensory analysis, the experts liked the cheese enriched with the powder and pieces of shallot leaves.
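
As an illustration of how an IC50 such as those reported above is obtained, a short sketch fitting a four-parameter logistic dose-response curve to invented assay readings (not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical DPPH assay: extract concentration (ug/mL) vs % inhibition.
conc = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
inhibition = np.array([14.0, 27.0, 44.0, 66.0, 83.0])

def logistic4(x, bottom, top, ic50, hill):
    """Four-parameter logistic curve; IC50 is the half-maximal concentration."""
    return bottom + (top - bottom) / (1 + (ic50 / x) ** hill)

popt, _ = curve_fit(logistic4, conc, inhibition, p0=[0.0, 100.0, 120.0, 1.0])
print(f"Estimated IC50 = {popt[2]:.1f} ug/mL")
```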

Keywords: shallot leaves, potato peels, ultrasound extraction, phenolics, cheese

Procedia PDF Downloads 84
145 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on business, sustainability has evolved from a social expectation to an economic imperative, and finding methods to reduce cost has become a critical directive for industry leaders. Effective energy management is the only way to cut costs, but it has been a challenge because it requires a change in old habits and legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence; for quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets but is also instrumental in the transformation from old approaches to energy management to new ones, which in turn assists in effective decision-making for implementation. Organizations need an established corporate strategy for reducing operational costs through visibility and optimization of energy usage, and energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is used today in scenarios such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, improving customer insights, and device data insights. The paper also highlights how analytics helps transform the insights obtained from energy data into sustainable solutions, drawing on data from an array of segments such as the retail, transportation, and water sectors.

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 361
144 The Analysis of Deceptive and Truthful Speech: A Computational Linguistic Based Method

Authors: Seham El Kareh, Miramar Etman

Abstract:

Recently, detecting liars and extracting features that distinguish them from truth-tellers have been the focus of a wide range of disciplines. To the authors’ best knowledge, most of the work has been done on facial expressions and body gestures, and only a few works have addressed the language used by liars and truth-tellers. This paper sheds light on four axes. The first concerns building an audio corpus of deceptive and truthful speech by Egyptian Arabic speakers. The second focuses on examining human perception of lies, demonstrating the need for computational linguistic methods to extract the features that characterize truthful and deceptive speech. The third is concerned with building a linguistic analysis program that can extract from the corpus the inter- and intra-linguistic cues of deceptive and truthful speech; the program built here is based on selected categories from the Linguistic Inquiry and Word Count program. Our results demonstrated that, when lying, Egyptian Arabic speakers preferred first-person pronouns and the present tense over the past tense, and their lies lacked second-person pronouns; when telling the truth, on the other hand, they preferred verbs related to motion and nouns related to time. The results also showed that more data are needed to establish the significance of words related to emotions and numbers.
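
A toy sketch of the category-counting idea behind such an analysis, with invented English word lists standing in for the Arabic LIWC-style categories used in the paper:

```python
# Illustrative word-category lexicon in the spirit of the Linguistic Inquiry
# and Word Count (LIWC) approach; these lists are placeholders, not LIWC's.
categories = {
    "first_person": {"i", "me", "my", "we", "our"},
    "second_person": {"you", "your"},
    "motion_verbs": {"go", "went", "run", "move", "walk"},
    "time_nouns": {"day", "hour", "yesterday", "morning"},
}

def category_profile(utterance):
    """Fraction of tokens in each category: a simple cue vector per utterance."""
    tokens = utterance.lower().split()
    return {name: sum(t in words for t in tokens) / max(len(tokens), 1)
            for name, words in categories.items()}

print(category_profile("I went to my office yesterday morning"))
```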

Keywords: Egyptian Arabic corpus, computational analysis, deceptive features, forensic linguistics, human perception, truthful features

Procedia PDF Downloads 203
143 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation

Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim

Abstract:

Six-year graduation rates at colleges are becoming an essential indicator for incoming freshmen and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention: it enables educational institutions to develop strategic plans that assist or improve students’ performance in achieving their degrees on time (GOT). Machine learning techniques offer a first step and a helping hand in extracting useful information from these data and gaining insights into the prediction of students’ progress and performance, and data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis contain students in science majors who graduated within six years as of the academic year 2017-2018; this analysis can be used to predict the graduation of students in the next academic year. Different predictive models, namely logistic regression, decision trees, support vector machines, random forest, naïve Bayes, and KNeighborsClassifier, are applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared based on accuracy. The results indicated that the ensemble classifier achieves the best accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students’ academic performance and ensuring they graduate on time.
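
A minimal sketch of the evaluation protocol described above, comparing several scikit-learn classifiers and a simple ensemble with 5-fold cross-validation; the synthetic data stand in for the institutional records used in the study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for admission records (label = graduated on time).
X, y = make_classification(n_samples=600, n_features=12, random_state=42)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=42),
    "ensemble": VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                                  ("rf", RandomForestClassifier(random_state=42))]),
}

# Each classifier is evaluated with 5-fold cross-validation, as in the study.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")
```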

Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, GOT (graduate on time)

Procedia PDF Downloads 69
142 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections and highway congestion. The Delaunay triangulation is used in the clustering phase: while most existing clustering algorithms require assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it; the framework then identifies stoppage points by calculating the traveled distance, and the last step uses clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier, which uses the length of a cluster as its feature, is then applied to distinguish highway congestion from signalized stop points. The proposed framework identifies the stop positions and congestion points correctly in around 99.2% of trials, showing that it is possible to distinguish the two with high accuracy using limited GPS data.
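
A minimal sketch of the Delaunay-based clustering step: triangulate the stopped waypoints, drop long edges, and read clusters off the remaining graph. The coordinates and the edge-length threshold are illustrative, not from the paper:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import Delaunay

# Stand-in stopped-waypoint coordinates (metres); replace with projected GPS stops.
rng = np.random.default_rng(3)
pts = np.vstack([
    rng.normal([0, 0], 5, (30, 2)),            # compact cluster: signalized stop
    rng.normal([500, 40], [30, 5], (45, 2)),   # elongated cluster: congestion
])

# Triangulate, then keep only short edges: long Delaunay edges separate clusters.
tri = Delaunay(pts)
adj = lil_matrix((len(pts), len(pts)))
for simplex in tri.simplices:
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        if np.linalg.norm(pts[a] - pts[b]) < 20.0:   # edge-length threshold
            adj[a, b] = adj[b, a] = 1

n_clusters, labels = connected_components(adj.tocsr(), directed=False)
print("clusters found:", n_clusters)
# A long, thin cluster suggests highway congestion; a compact one, a signal.
```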

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 271
141 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are important and demand high accuracy, more robustness is expected of a face recognition system, with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead of the Gabor filters, and is then convolved with a bank of Gabor filters of varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space; the variants used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2LDA). LDA reduces the feature dimension by extracting the features with the greatest variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant, and it also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database, and the proposed HGWLDA approach is found to provide better results than the existing Gabor approach.
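
A minimal sketch of the Gabor-bank, LDA, and k-NN pipeline described above, using scikit-image and scikit-learn on stand-in images; note that the paper's 2D-LDA variants are replaced here by ordinary LDA for brevity:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
faces = rng.random((20, 32, 32))       # stand-in grayscale face images
labels = np.repeat(np.arange(4), 5)    # 4 subjects, 5 images each

def gabor_features(img):
    """Summary statistics of a small Gabor bank (varying scale, orientation)."""
    feats = []
    for frequency in (0.1, 0.3):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(img, frequency=frequency, theta=theta)
            feats.extend([real.mean(), real.std()])
    return np.array(feats)

X = np.stack([gabor_features(f) for f in faces])

# LDA projects Gabor features onto directions that maximize class separation;
# a k-NN classifier then matches a probe face against the training set.
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, labels)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_lda, labels)
print("predicted subject:", knn.predict(X_lda[:1]))
```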

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 464
140 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning

Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath

Abstract:

The human brain, among the most complex and mysterious parts of the body, harbors vast potential for exploration, and unraveling its enigmas, especially in neural perception and cognition, is the realm of neural decoding. Harnessing advances in generative AI, particularly in visual computing, can help elucidate how the brain comprehends the visual stimuli humans observe. This paper endeavors to reconstruct human-perceived visual stimuli from Functional Magnetic Resonance Imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior-quality outputs; this addresses the limitations of prior methods, such as GANs, known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the fMRI signal of the brain's ventral visual cortex region; this extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides evidence for identifying the regions of the brain most influential in cognition and perception.

Keywords: BLIP, fMRI, latent diffusion model, neural perception

Procedia PDF Downloads 64
139 Biosignal Recognition for Personal Identification

Authors: Hadri Hussain, M. Nasir Ibrahim, Chee-Ming Ting, Mariani Idroas, Fuad Numan, Alias Mohd Noor

Abstract:

Biometric security systems have become an important application in client identification and verification. A conventional biometric system is normally based on a unimodal biometric that depends on either behavioural or physiological information for authentication. Behavioural biometrics depend on human body signals such as speech, while biosignal biometrics include the electrocardiogram (ECG) and the phonocardiogram or heart sound (HS). The speech signal is commonly used in biometric recognition systems, while the ECG and the HS have been used to identify a person’s diseases uniquely related to their cluster. However, conventional biometric systems are liable to spoofing attacks that affect their performance. Therefore, a multimodal biometric security system was developed, based on the biometric signals of ECG, HS, and speech. The biosignal data in the system are initially segmented, and the Mel Frequency Cepstral Coefficients (MFCC) method is exploited to extract features from each segment. A Hidden Markov Model (HMM) is used to model each client and to classify the unknown input with respect to the model. The recognition system involves training and testing sessions, known as client identification (CID). In this project, twenty clients were tested with the developed system. The best overall performance at 44 kHz was 93.92%, for ECG, and the worst was 88.47%, also for ECG. When the number of clients was increased beyond the twenty of the baseline, the best overall performance at 44 kHz was 90.00%, for HS, while the worst fell to 79.91%, for ECG. It can be concluded that the choice of biometric modality has a substantial effect on the performance of the system and that, with the increment of data, even at a higher sampling frequency, performance still decreased slightly, as predicted.
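
A minimal sketch of the MFCC-plus-HMM modelling described above, using librosa and hmmlearn on a synthetic one-client segment; the signal is a stand-in for the ECG/HS/speech data, and this is not the authors' code:

```python
import numpy as np
import librosa
from hmmlearn import hmm

# Stand-in signal: one second of synthetic audio for one client, sampled
# at 44 kHz (replace with a real ECG / heart-sound / speech segment).
sr = 44100
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 220 * np.arange(sr) / sr) + 0.05 * rng.normal(size=sr)

# MFCC features for each frame of the segment.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

# One Gaussian HMM is trained per client; at test time the claimed identity
# is the model giving the highest log-likelihood for the probe segment.
client_model = hmm.GaussianHMM(n_components=4, n_iter=50, random_state=0)
client_model.fit(mfcc)
print("log-likelihood of probe:", client_model.score(mfcc))
```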

Keywords: electrocardiogram, phonocardiogram, hidden Markov model, Mel frequency cepstral coefficients, client identification

Procedia PDF Downloads 275
138 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria

Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov

Abstract:

This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.

Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model

Procedia PDF Downloads 57
137 The Weavability of Waste Plants and Their Application in Fashion and Textile Design

Authors: Jichi Wu

Abstract:

The dwindling of resources calls for more sustainable design, and new technology can bring new materials and processing techniques to the fashion industry and push it toward a more sustainable future. This paper therefore explores cutting-edge research on the life cycle of closed-loop products and aims to find innovative ways to recycle and upcycle. To this end, the author investigated how low-utilization plants and leftover fiber can be turned into ecological textiles for fashion. By examining the physical and chemical properties (cellulose content, fiber form) of these ecological textiles to explore their wearability, this paper analyzes the prospects of bio-fabrics (weavable plants) in body-oriented fashion design and their potential in sustainable fashion and textile design. By extracting cellulose from 9 different types or sections of plants, the author intends to find an appropriate method (such as ion solution extraction) to maximize the weavability of the plants, so that the raw materials can be turned into fabrics more effectively. Various research methods are adopted throughout the project, including field trips and experiments to make comparisons and recycle materials, as well as cross-disciplinary cooperation for related knowledge and theories. First-hand experimental data were carefully collected, analyzed under the guidance of related theories, and recorded in detail, with the results presented as descriptions and visualizations. Based on the conclusions drawn, it is possible to apply weavable plant fibres to develop new textiles and fashion.

Keywords: wearable bio-textile, sustainability, economy, ecology, technology, weavability, fashion design

Procedia PDF Downloads 142
136 Evaluation of Commercials by Psychological Changes in Consumers’ Physiological Characteristics

Authors: Motoki Seguchi, Fumiko Harada, Hiromitsu Shimakawa

Abstract:

Many local companies in the countryside carefully produce and sell products, including crafts and foods made with traditional methods, and they are likely to use commercials to advertise these products. However, it is difficult for companies to judge whether the commercials they create are having an impact on consumers. To support the creation of effective commercials, this study investigates which gimmicks in commercials affect which kinds of consumers. It proposes a method for extracting psychological change points from the physiological characteristics of consumers while they watch commercials and for estimating the gimmicks in the commercial that affect consumer engagement. In this method, change point detection is applied to pupil size to estimate the gimmicks that affect emotional engagement, and to electrodermal activity (EDA) to estimate those that affect cognitive engagement; a questionnaire is also used to estimate the commercials that influence behavioral engagement. Estimating the gimmicks that influence consumer engagement with this method revealed common features among them: to influence cognitive engagement, it is useful to include flashback scenes, the message being appealed, the company’s name, and the company’s logo, while flashback scenes and story climaxes are useful for influencing emotional engagement. Furthermore, the usefulness of storytelling commercials depends on which consumers are expected to take which actions. The gimmicks that influence consumers were also estimated for each target group, and the useful gimmicks differ slightly between students and working adults. With this method, companies can understand which gimmicks in a commercial affect which form of consumer engagement, so the results of this study can serve as a reference for the gimmicks to include when companies create commercials in the future.
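
A minimal sketch of mean-shift change point detection on a synthetic pupil-size trace, here using the ruptures library's PELT search; the paper does not name its detection algorithm, so this is only one plausible choice:

```python
import numpy as np
import ruptures as rpt

# Synthetic pupil-size trace with two abrupt level shifts, a stand-in for
# the signal recorded while a participant watches a commercial.
rng = np.random.default_rng(5)
pupil = np.concatenate([rng.normal(3.0, 0.05, 200),
                        rng.normal(3.4, 0.05, 150),   # e.g. a flashback scene
                        rng.normal(3.1, 0.05, 250)])

# PELT searches for points where the signal's distribution changes; each
# detected point is then aligned with the commercial's timeline to locate
# the gimmick that triggered the response.
algo = rpt.Pelt(model="rbf").fit(pupil)
change_points = algo.predict(pen=10)
print("detected change points (sample indices):", change_points)
```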

Keywords: change point detection, estimating engagement, physiological characteristics, psychological changes, watching commercials

Procedia PDF Downloads 178
135 Fight against Money Laundering with Optical Character Recognition

Authors: Saikiran Subbagari, Avinash Malladhi

Abstract:

Anti-Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess, and mitigate the risks associated with money laundering and to report any suspicious transactions to governing authorities; with increasing volumes of data to analyze, they seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into the various OCR techniques employed in AML processes, encompassing template-based, feature-based, and neural-network-based approaches, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching, and the stroke width transform (SWT). We evaluate each technique, discussing its strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
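
A minimal sketch of the OCR-plus-watchlist screening idea, using pytesseract for text extraction and difflib for fuzzy matching; the watchlist entries and file path below are hypothetical, and a production system would load the real OFAC SDN list:

```python
import difflib
import pytesseract
from PIL import Image

# Hypothetical mini watchlist standing in for the OFAC SDN list.
watchlist = ["IVAN PETROV", "ACME SHELL HOLDINGS", "JOHN DOE TRADING LLC"]

def screen_document(path, cutoff=0.85):
    """OCR a scanned document and fuzzy-match each line against the watchlist."""
    text = pytesseract.image_to_string(Image.open(path))
    hits = []
    for line in text.upper().splitlines():
        for match in difflib.get_close_matches(line.strip(), watchlist,
                                               cutoff=cutoff):
            hits.append((line.strip(), match))
    return hits

# Illustrative call; "wire_transfer_scan.png" is a placeholder filename.
print(screen_document("wire_transfer_scan.png"))
```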

Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition

Procedia PDF Downloads 141
134 The Utilization of Tea Extract within the Realm of the Food Industry

Authors: Raana Babadi Fathipour

Abstract:

Tea, a beverage widely cherished across the globe, has captured the interest of scholars for its noteworthy health advantages, particularly its proven ability to help ward off ailments such as cancer and cardiovascular disease. At the same time, lipid oxidation poses a significant challenge in food product development. In light of these concerns, this review examines the diverse methodologies employed to extract polyphenols from various types of tea leaves and their utility in the food industry. The fundamental constituents of tea are polyphenols with intrinsic health-enhancing properties, including an assortment of catechins, namely epicatechin, epigallocatechin, epicatechin gallate, and epigallocatechin gallate; gallic acid, flavonoids, flavonols, and theaflavins have also been detected in the beverage. Of these components, catechin emerges as particularly beneficial. Multiple techniques have emerged to extract the key compounds from tea, including solvent-based extraction, microwave-assisted water extraction, and ultrasound-assisted extraction. Particular consideration is given to microwave-assisted water extraction as a viable scheme that effectively recovers valuable polyphenols from tea extracts; this methodology appears adaptable to sectors such as dairy production as well as the meat and oil industries.

Keywords: Camellia sinensis, extraction, food application, shelf life, tea

Procedia PDF Downloads 66
133 A Facile Nanocomposite of Graphene Oxide Reinforced Chitosan/Poly-Nitroaniline Polymer as a Highly Efficient Adsorbent for Extracting Polycyclic Aromatic Hydrocarbons from Tea Samples

Authors: Adel M. Al-Shutairi, Ahmed H. Al-Zahrani

Abstract:

Tea is a popular beverage drunk by millions of people throughout the globe. Tea has considerable health advantages, including antioxidant, antibacterial, antiviral, chemopreventive, and anticarcinogenic properties. As a result of environmental pollution (atmospheric deposition) and the production process, tea leaves may also contain a variety of dangerous substances, such as polycyclic aromatic hydrocarbons (PAHs). In this study, a graphene oxide reinforced chitosan/poly-nitroaniline polymer was prepared to develop a sensitive and reliable solid-phase extraction (SPE) method for the extraction of the PAH7 in tea samples, followed by high-performance liquid chromatography with fluorescence detection. The prepared adsorbent was validated in terms of linearity, limit of detection, limit of quantification, recovery (%), accuracy (%), and precision (%) for the determination of the PAH7 (benzo[a]pyrene, benzo[a]anthracene, benzo[b]fluoranthene, chrysene, benzo[k]fluoranthene, dibenzo[a,h]anthracene, and benzo[g,h,i]perylene) in tea samples. The concentrations were determined in two types of tea commercially available in Saudi Arabia, black tea and green tea. The maximum mean Σ7PAHs was 68.23 ± 0.02 µg kg-1 in black tea samples and 26.68 ± 0.01 µg kg-1 in green tea samples; the minimum mean Σ7PAHs was 37.93 ± 0.01 µg kg-1 in black tea samples and 15.26 ± 0.01 µg kg-1 in green tea samples. The mean benzo[a]pyrene level in black tea samples ranged from 6.85 to 12.17 µg kg-1, with two samples exceeding the standard level (10 µg kg-1) established by the European Union (EU), while in green tea it ranged from 1.78 to 2.81 µg kg-1. Overall, low levels of Σ7PAHs were detected in green tea samples in comparison with black tea samples.

Keywords: polycyclic aromatic hydrocarbons, CS, PNA and GO, black/green tea, solid phase extraction, Saudi Arabia

Procedia PDF Downloads 93
132 Chemical Study of Volatile Organic Compounds (VOCs) from Xylopia aromatica (Lam.) Mart. (Annonaceae)

Authors: Vanessa G. P. Severino, João Gabriel M. Junqueira, Michelle N. G. do Nascimento, Francisco W. B. Aquino, João B. Fernandes, Ana P. Terezan

Abstract:

Scientific interest in analyzing VOCs represents a significant modern research field as a result of their importance in most branches of present-day life and industry. It is therefore extremely important to investigate, identify, and isolate volatile substances, since they can be used in different areas, such as food, medicine, cosmetics, perfumery, aromatherapy, pesticides, repellents, and other household products. Methods for extracting volatile constituents include solid phase microextraction (SPME), hydrodistillation (HD), solvent extraction (SE), Soxhlet extraction, supercritical fluid extraction (SFE), steam distillation (SD), and vacuum distillation (VD). Chemometrics is an area of chemistry that uses statistical and mathematical tools for planning and optimizing experimental conditions and for extracting relevant chemical information from multivariate chemical data. In this context, the focus of this work was the chemical study of the VOCs of the species X. aromatica by SPME, in search of constituents that can be used in industry, particularly in the food, cosmetics, and perfumery sectors, where such compounds play a considerable role. In addition, chemometric analysis was used to make the most of the experimental results and to detect the largest possible number of compounds. The investigation of X. aromatica flowers in vitro and in vivo proved consistent, although certain factors presumably influence the composition of metabolites, and the chemometric analysis strengthened the overall analysis. Thus, the study of the chemical composition of X. aromatica contributes to knowledge of the species’ VOCs and a possible application.

Keywords: chemometrics, flowers, HS-SPME, Xylopia aromatica

Procedia PDF Downloads 357
131 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for their daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community, and automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes such as survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT; the model based on clinicalBERT achieves higher predictive accuracy (accuracy 93.33%, AUC 0.98, precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples of the two classes in the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (accuracy 96.67%, AUC 0.98, precision 0.92).
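
A minimal sketch of fine-tuning a clinical BERT checkpoint for binary clinical sentiment with Hugging Face Transformers, in the spirit of COVID_ICU_BERT; the checkpoint name, notes, and labels below are illustrative stand-ins, not the authors' data or code:

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# A publicly available clinical BERT checkpoint (assumed; swap in the
# clinicalBERT variant actually used).
checkpoint = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=2)

# Toy ICU notes; 1 = positive clinical sentiment (e.g. patient survived).
notes = ["Patient weaned off ventilator, saturations improving.",
         "Worsening hypoxia overnight, escalating support."]
labels = [1, 0]
enc = tokenizer(notes, truncation=True, padding=True, return_tensors="pt")

class NotesDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="covid_icu_bert",
                                         num_train_epochs=1),
                  train_dataset=NotesDataset())
trainer.train()
```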

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 200
130 Fuzzy Control of Thermally Isolated Greenhouse Building by Utilizing Underground Heat Exchanger and Outside Weather Conditions

Authors: Raghad Alhusari, Farag Omar, Moustafa Fadel

Abstract:

A traditional greenhouse is a metal-frame agricultural building used for cultivating plants in a controlled environment isolated from external climatic changes. Using greenhouses in agriculture is an efficient way to reduce water consumption, agriculture being considered the biggest water consumer worldwide. Controlling the greenhouse environment yields better plant productivity but demands more electric power. Although various control approaches have been applied to greenhouse automation, most of them target traditional greenhouses with ventilation fans and/or evaporative cooling systems, and such approaches still demand high energy and water consumption. The aim of this research is to develop a fuzzy control system that minimizes water and energy consumption by utilizing outside weather conditions and an underground heat exchanger to maintain the optimum climate in the greenhouse. The proposed control system is implemented on an experimental model of a thermally isolated greenhouse structure with dimensions of 6 x 5 x 2.8 meters. It uses fans to extract heat from the ground heat exchanger system, motors to automatically open and close the greenhouse windows, and LEDs as the lighting system, and the controller is integrated with environmental condition sensors. It was found that using an air-to-air horizontal ground heat exchanger with a 90 mm diameter and 2 mm wall thickness, placed 2.5 m below the ground surface, decreases the greenhouse temperature by 3.28 °C, which saves around 3 kW of consumed energy. It also eliminates the water consumption needed by the evaporative cooling systems traditionally used for cooling the greenhouse environment.

Keywords: automation, earth-to-air heat exchangers, fuzzy control, greenhouse, sustainable buildings

Procedia PDF Downloads 125
129 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge

Authors: Yulan Wu

Abstract:

The spread of fake news on social media has caused significant harm to the public and the nation, with threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain; however, when these methods are applied to social platforms with news spanning multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data, but this overlooks the fact that a news article typically belongs to multiple domains, losing the domain knowledge contained within the news text. To address this issue, and building on the observation that news in different domains often uses different vocabularies to describe its content, this paper proposes a fake news detection framework that combines domain knowledge and expert knowledge. First, an unsupervised domain discovery module generates a low-dimensional domain embedding for each news article, which retains multi-domain knowledge of the news content. Then, a feature extraction module uses the discovered domain embeddings to guide multiple experts in extracting news knowledge for the overall feature representation. Finally, a classifier determines whether the news is fake or not. Experiments show that this approach can improve multi-domain fake news detection performance while reducing the cost of manual domain labeling.

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 67
128 Remote Sensing and GIS-Based Environmental Monitoring by Extracting Land Surface Temperature of Abbottabad, Pakistan

Authors: Malik Abid Hussain Khokhar, Muhammad Adnan Tahir, Hisham Bin Hafeez Awan

Abstract:

Continuous environmental change across the globe due to increasing land surface temperature (LST) has become a vital phenomenon nowadays. LST is accelerating because of increasing greenhouse gases in the environment, which results in the melting of ice caps, ice sheets, and glaciers. It not only has adverse effects on the vegetation and water bodies of a region but also severe impacts on monsoon areas, in the form of capricious rainfall, monsoon failure, and extensive precipitation. The environment can be monitored with the help of various geographic information system (GIS) based algorithms, i.e., SC (single channel), DA (dual angle), Mao, Sobrino, and SW (split window), and LST can be estimated from digital image processing of satellite imagery. This paper encompasses the extraction of the LST of Abbottabad over the last ten years using the SW technique of GIS and remote sensing, by means of Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat 8, via their Thermal Infrared (TIR) and Operational Land Imager (OLI; the latter absent on Landsat 7 ETM+) sensors, which provide 100 m TIR and 30 m spatial resolutions. These sensors have two TIR bands each; their emissivity and spectral radiance will be used as input statistics in the SW algorithm for LST extraction. Emissivity will be derived from Normalized Difference Vegetation Index (NDVI) threshold methods using bands 2-5 of OLI with the help of the eCognition software, and spectral radiance will be extracted from the TIR bands (bands 10-11 of Landsat 8 and band 6 of Landsat 7 ETM+). The accuracy of the results will also be evaluated against weather data. The subsequent research will have a significant role for all tiers of governing bodies related to climate change departments.
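
A minimal sketch of a split-window LST computation of the kind described, with coefficients as published for Landsat 8 TIRS in the split-window literature (e.g., Jiménez-Muñoz and Sobrino); treat the coefficients as illustrative and verify against the source before any operational use:

```python
def split_window_lst(tb10, tb11, mean_emis, delta_emis, w):
    """Split-window LST sketch for Landsat 8 TIRS (returns kelvin).

    tb10, tb11 : band 10 / band 11 brightness temperatures (K)
    mean_emis  : mean emissivity of the two bands (from NDVI thresholds)
    delta_emis : emissivity difference between the two bands
    w          : atmospheric water vapour content (g/cm^2)

    Coefficients follow a published Landsat 8 split-window formulation;
    they are quoted from memory here, so confirm before real use.
    """
    c = [-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40]
    d = tb10 - tb11
    return (tb10 + c[1] * d + c[2] * d**2 + c[0]
            + (c[3] + c[4] * w) * (1 - mean_emis)
            + (c[5] + c[6] * w) * delta_emis)

# Example pixel: brightness temperatures 300.2 K / 298.9 K, emissivity 0.975.
print(round(split_window_lst(300.2, 298.9, 0.975, 0.005, 1.6), 2))
```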

Keywords: environment, Landsat 8, SW Algorithm, TIR

Procedia PDF Downloads 352