Search results for: bird song processing
2911 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and can analyze and apply complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated into the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning algorithm, until the emotional state was identified. In EEG signal processing, the Fast Fourier Transform (FFT) is used to translate each EEG signal, received in real time, from the time domain to the frequency domain and to observe the frequency bands in each signal. To appropriately show the variance of each EEG frequency band, power density, standard deviation, and mean are calculated and employed. The next stage is to use the selected features to predict emotion in the EEG data with the KNN technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. EEG-based emotion recognition on edge AI can be employed in applications that can rapidly expand its use in research and industry.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
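A minimal sketch of the pipeline described above (not the authors' implementation): FFT-based band features feeding a KNN classifier. The sampling rate, band edges, and k are assumed values for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # illustrative bands

def band_features(epoch):
    """Mean, standard deviation, and per-band power density of one EEG epoch via FFT."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    powers = [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]
    return np.array([epoch.mean(), epoch.std(), *powers])

def classify(train_epochs, train_labels, new_epoch, k=5):
    """Fit KNN on labeled (e.g., arousal/valence) epochs and predict a live epoch."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit([band_features(e) for e in train_epochs], train_labels)
    return knn.predict([band_features(new_epoch)])[0]
```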
Procedia PDF Downloads 104
2910 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have efficient access to this knowledge via concise and transparent summaries. However, due to the complex and repetitive linguistic constructs and extremely long sentences mentioned above, common extraction-oriented automatic text summarization methods cannot be expected to perform remarkably well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents that uses artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting or difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
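As a hedged illustration of the abstractive (rather than extractive) approach, the sketch below runs a generic pretrained sequence-to-sequence summarizer; the model checkpoint and sample text are assumptions, not the authors' system.

```python
from transformers import pipeline

# Any pretrained abstractive model works here; this checkpoint is illustrative.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

patent_text = ("The present invention relates to a foldable solar panel assembly "
               "comprising a plurality of hinged photovoltaic modules arranged such "
               "that the assembly may be collapsed for transport and deployed in use.")
summary = summarizer(patent_text, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```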
Procedia PDF Downloads 120
2909 Natural Gas Flow Optimization Using Pressure Profiling and Isolation Techniques
Authors: Syed Tahir Shah, Fazal Muhammad, Syed Kashif Shah, Maleeha Gul
Abstract:
In recent years, natural gas has become a relatively clean, high-quality source of energy, recovered from deep wells by expensive drilling activities. The recovered substance is purified by processing in multiple stages to remove unwanted contaminants such as dust, dirt, crude oil and other particles. Gas utilities are mostly concerned with the essential objectives of quantity/quality of natural gas delivery, financial outcome, and a safe volumetric inventory of natural gas in the transmission pipeline. Gas quantity and quality are primarily governed by standards and advanced metering procedures in processing units and transmission systems, while the financial outcome is defined by the purchase and sale of gas as well as the operational cost of the transmission pipeline. SNGPL (Sui Northern Gas Pipelines Limited) Pakistan operates a natural gas transmission pipeline network of over 9125 km, spanning a wide range of diameters. This research addresses some of the accuracy and metering issues through multiple advanced instruments for gas flow attributes deployed in the transmission system, and examines the effects of good pressure management in the transmission pipeline network, with the aim of boosting the gas volume packed in the existing network and ultimately curbing unaccounted-for gas (UFG) losses for financial benefit. Furthermore, based on the results and their observation, it is recommended to raise the maximum allowable operating pressure (MAOP) of the system from the current roughly 900 psig to 1235 psig, so that the capacity of the network can be fully utilized. Overall, the results show that the current model is very efficient and provides excellent results in the minimum possible time.
Keywords: natural gas, pipeline network, UFG, transmission pack, AGA
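A back-of-the-envelope sketch (not the authors' calculation) of what the proposed MAOP increase means for line pack, using the ideal-gas approximation with compressibility assumed constant:

```python
P_STD = 14.7          # psia, standard pressure
P_OLD = 900 + 14.7    # psia, current MAOP (~900 psig)
P_NEW = 1235 + 14.7   # psia, proposed MAOP

def line_pack(geometric_volume, pressure_psia):
    """Standard volume of gas packed in a pipeline segment (ideal gas)."""
    return geometric_volume * pressure_psia / P_STD

gain = line_pack(1.0, P_NEW) / line_pack(1.0, P_OLD) - 1.0
print(f"~{gain:.0%} more gas stored per unit pipeline volume")  # roughly 37%
```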
Procedia PDF Downloads 93
2908 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when producing different products, which impacts overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
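A minimal sketch of the Pareto-dominance test at the core of NSGA-II, applied to the paper's two objectives: minimizing completion time and maximizing average utilization (negated so both are minimized). The candidate values are illustrative.

```python
def dominates(a, b):
    """True if layout a is at least as good as b on every objective and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(layouts):
    """Objective tuples (completion_time, -utilization) not dominated by any other layout."""
    return [a for a in layouts if not any(dominates(b, a) for b in layouts if b != a)]

candidates = [(42.0, -0.81), (40.0, -0.75), (45.0, -0.90), (43.0, -0.80)]
print(pareto_front(candidates))  # (43.0, -0.80) is dominated by (42.0, -0.81)
```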
Procedia PDF Downloads 18
2907 Copywriting and the Creative Edge
Authors: Dandeswar Bisoyi, Preeti Yadav, Utpal Barua
Abstract:
This study addresses the particular way in which verbal information can affect the processing of positive and interesting qualities that help make a brand attractive to the consumer. It also addresses the development of a communication strategy, a very important part of the marketing plan in which many factors have to be taken into account. Out of all the product strengths, the strategy has to outline one marked differential that will drive the brand. This is the fundamental base on which the entire big-idea-driven creative strategy rests.
Keywords: copywriting, advertisement, marketing, branding, recall
Procedia PDF Downloads 579
2906 Prenatal Exposure to Organophosphate Pesticide and Fetal Growth
Authors: Yi-Shuan Shao, Yen-An Tsai, Chia-Huang Chang, Kai-Wei Liao, Ming-Song Tsai, Mei-Lien Chen
Abstract:
Organophosphate pesticides (OPs) are environmental hormones with proven endocrine-disrupting effects that may affect growth and development in humans. A large amount of OPs is used throughout Taiwan, and humans may be exposed through dietary intake or residential use. During pregnancy, OPs can be transferred into the bloodstream, reaching the fetus through the placenta. The aim of this study was to explore the association between maternal OP exposure levels, fetal development and birth outcomes. A birth cohort was followed up. Maternal urine samples were collected at the first, second, and third gestational trimesters. Fetal growth characteristics were measured by ultrasonic scan, and birth outcomes were assessed by a pediatrician. Urinary metabolites of organophosphate pesticides were assessed using gas chromatography-mass spectrometry. The analytes included dimethylphosphate (DMP), dimethylthiophosphate (DMTP), dimethyldithiophosphate (DMDTP), diethylphosphate (DEP), diethylthiophosphate (DETP), and diethyldithiophosphate (DEDTP). We found that all urine samples in each trimester contained at least one detectable dialkyl phosphate (DAP) metabolite. The detection rates of OP urinary metabolites ranged from 22% for DEDTP at the lowest to 100% for DMP and DMTP at the highest. Comparing the geometric means (GM) of urinary metabolites across the three trimesters, the third trimester had the highest concentrations of DMPs, DEPs, and DAPs in pregnant women: 368.01, 169.85 and 543.75 nmol/g creatinine, respectively. We observed that DAP concentrations in the first and second trimesters were significantly negatively associated with head circumference, and DMPs in the first trimester were significantly negatively associated with thoracic circumference (p=0.05) by Spearman correlation. Our results support associations of prenatal OP exposure with fetal head circumference and thoracic circumference, indicating that maternal OP exposure might affect birth outcomes. Thus, prenatal exposure to OPs and the associated health risk are worthy of attention and concern.
Keywords: DAPs, birth outcomes, organophosphate pesticides, prenatal
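A hedged sketch of the reported statistical step: a Spearman correlation between creatinine-adjusted urinary DAP levels and a fetal growth measure. All values below are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

daps_nmol_g = np.array([543.8, 310.2, 120.5, 890.1, 410.7])   # illustrative DAP levels
head_circ_cm = np.array([33.1, 33.8, 34.5, 32.6, 33.5])       # illustrative measures

rho, p = spearmanr(daps_nmol_g, head_circ_cm)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a negative rho mirrors the finding
```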
Procedia PDF Downloads 339
2905 Hot Deformability of Si-Steel Strips Containing Al
Authors: Mohamed Yousef, Magdy Samuel, Maha El-Meligy, Taher El-Bitar
Abstract:
The present work deals with a 2% Si-steel alloy. The alloy contains 0.05% C as well as 0.85% Al, and would be used for electrical transformer purposes. A heating (expansion) - cooling (contraction) dilation investigation was executed to detect the α, α+γ, and γ transformation temperatures at the inflection points of the dilation curve. On heating, primary α was detected in the range between room temperature and 687 °C. The α+γ domain was detected in the range between 687 °C and 746 °C. The γ phase exists in the closed γ region between 746 °C and 1043 °C. The α-phase domain appears again in the temperature range between 1043 and 1105 °C, followed by secondary α at temperatures higher than 1105 °C. A physical simulation of thermo-mechanical processing on the as-cast alloy was carried out. The simulation took the parameters of a hot flat rolling pilot plant into consideration and was executed on a thermo-mechanical simulator (Gleeble 3500). The process was designed to include seven consecutive passes: the 1st pass represents the roughing stage, while the remaining six passes represent the finish rolling stage. The whole process was executed in the temperature range from 1100 °C to 900 °C. The amount of strain starts at 23.5% in the roughing pass and decreases continuously to 7.5% at the last finishing pass. The flow curve of the alloy can be abstracted from the stress-strain curves representing the simulated passes. It shows hardening of the alloy from one pass to the next up to pass no. 6, as a result of the decreasing deformation temperature and the increasing cumulative strain. After pass no. 6, the deformation process enhances the dynamic recrystallization phenomena to appear, where the Z-parameter (Zener-Hollomon parameter) would be high.
Keywords: Si-steel, hot deformability, critical transformation temperature, physical simulation, thermo-mechanical processing, flow curve, dynamic softening
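A small sketch of the Zener-Hollomon parameter the abstract alludes to; the activation energy is an assumed value for illustration, not a figure from the paper.

```python
import numpy as np

R = 8.314       # J/(mol·K), gas constant
Q = 280_000     # J/mol, assumed hot-deformation activation energy for Si-steel

def zener_hollomon(strain_rate_per_s, temp_c):
    """Z = strain_rate * exp(Q / (R*T)); the abstract links high Z to the onset of DRX."""
    return strain_rate_per_s * np.exp(Q / (R * (temp_c + 273.15)))

# Z rises as the pass temperature drops from 1100 to 900 °C at a fixed strain rate.
for t in (1100, 1000, 900):
    print(t, f"{zener_hollomon(1.0, t):.2e}")
```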
Procedia PDF Downloads 244
2904 Heavy Metal Contents in Vegetable Oils of Kazakhstan Origin and Life Risk Assessment
Authors: A. E. Mukhametov, M. T. Yerbulekova, D. R. Dautkanova, G. A. Tuyakova, G. Aitkhozhayeva
Abstract:
The accumulation of heavy metals in food is a constant problem in many parts of the world. Vegetable oils are widely used, both for cooking and for processing in the food industry, meeting the main dietary requirements. Heavy metals, among the main chemical pollutants, are usually found in vegetable oils. These pollutants are carcinogenic, teratogenic and immunotoxic, harmful when consumed, and have a negative effect on human health even in trace amounts. Residues of these substances can easily accumulate in vegetable oil during cultivation, processing and storage. In this article, the concentrations of heavy metal ions in vegetable oils of Kazakhstan production are studied: sunflower, rapeseed, safflower and linseed oil. The heavy metals arsenic, cadmium, lead and nickel were determined in three repetitions by flame atomic absorption. Analysis of the vegetable oil samples revealed that the largest lead (Pb) contamination, 0.065 mg/kg, was determined in linseed oil. The highest cadmium (Cd) content, 0.009 mg/kg, was found in safflower oil. Arsenic (As) was determined in rapeseed and safflower oils at 0.003 mg/kg, and was not detected in linseed and sunflower oil. The highest nickel (Ni) content, 0.433 mg/kg, was in linseed oil. The heavy metal contents in the test samples complied with the requirements of regulatory documents for vegetable oils. An assessment of the health risk of vegetable oils at a daily consumption of 36 g per day shows that all samples of vegetable oils produced in Kazakhstan are safe for consumption. But further monitoring is needed, since all these metals are toxic and their harmful effects become apparent only after several years of exposure.
Keywords: vegetable oil, sunflower oil, linseed oil, safflower oil, toxic metals, food safety, rapeseed oil
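A hedged sketch of the dietary-exposure check described above: estimated daily intake (EDI) at 36 g oil/day against a reference dose. The body weight and RfD values are assumptions; only the concentrations come from the abstract.

```python
BODY_WEIGHT_KG = 70          # assumed adult body weight
INTAKE_KG_PER_DAY = 0.036    # 36 g vegetable oil per day

# (measured concentration in mg/kg from the abstract, assumed RfD in mg/kg-bw/day)
samples = {"Pb (linseed)": (0.065, 0.0035), "Cd (safflower)": (0.009, 0.001)}

for name, (conc, rfd) in samples.items():
    edi = conc * INTAKE_KG_PER_DAY / BODY_WEIGHT_KG
    print(f"{name}: EDI = {edi:.2e} mg/kg-bw/day, hazard quotient = {edi / rfd:.3f}")
# Hazard quotients far below 1 are consistent with the 'safe for consumption' conclusion.
```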
Procedia PDF Downloads 131
2903 Information Extraction for Short-Answer Question for the University of the Cordilleras
Authors: Thelma Palaoag, Melanie Basa, Jezreel Mark Panilo
Abstract:
Checking short-answer questions and essays, whether on paper or in electronic form, is a tiring and tedious task for teachers, and evaluating a student's output requires knowledge across a wide array of domains. Scoring the work is often a critical task. Several attempts have been made in the past few years to create automated writing assessment software, but these have received negative responses from teachers and students alike due to unreliable scoring, lack of feedback, and other issues. This study aims to create an application that can check short-answer questions by incorporating information extraction. Information extraction is a subfield of Natural Language Processing (NLP) in which a chunk of text (technically known as unstructured text) is broken down to gather the necessary bits of data and/or keywords (structured text) to be further analyzed or utilized by query tools. The proposed system shall be able to extract keywords or phrases from an individual's answers and match them against a corpus of words (as defined by the instructor), which shall be the basis for evaluating the answer. The proposed system shall also enable the teacher to provide feedback and re-evaluate the student's output for writing elements that the computer cannot fully evaluate, such as creativity and logic. Teachers can formulate, design, and check short-answer questions efficiently by defining keywords or phrases as parameters and assigning weights for checking answers. With the proposed system, the teacher's time spent checking and evaluating student output shall be lessened, making the teacher more productive and the work easier.
Keywords: information extraction, short-answer question, natural language processing, application
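A minimal sketch (not the authors' system) of the weighted keyword-matching idea: instructor-defined keywords with weights are matched against the content words extracted from a student's answer. The rubric and stopword list are illustrative.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "is", "are", "and", "to", "in", "for", "use"}

def extract_keywords(answer):
    """Crude information extraction: lowercase content tokens."""
    return {t for t in re.findall(r"[a-z]+", answer.lower()) if t not in STOPWORDS}

def score(answer, rubric):
    """rubric maps keyword -> weight; returns the fraction of total weight matched."""
    found = extract_keywords(answer)
    earned = sum(w for kw, w in rubric.items() if kw in found)
    return earned / sum(rubric.values())

rubric = {"photosynthesis": 2.0, "chlorophyll": 1.0, "sunlight": 1.0}
print(score("Plants use sunlight and chlorophyll for photosynthesis.", rubric))  # 1.0
```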
Procedia PDF Downloads 426
2902 The Use of Political Savviness in Dealing with Workplace Ostracism: A Social Information Processing Perspective
Authors: Amy Y. Wang, Eko L. Yi
Abstract:
Can vicarious experiences of workplace ostracism affect employees' willingness to voice? Given the increasingly interdependent nature of the modern workplace, in which employees rely on social interactions to fulfill organizational goals, workplace ostracism (the extent to which an individual perceives that he or she is ignored or excluded by others in the workplace) has garnered significant interest from scholars and practitioners alike. Extending beyond conventional studies that largely focus on the perspectives and outcomes of ostracized targets, we address the indirect effects of workplace ostracism on third-party employees embedded in the same social context. Using a social information processing approach, we propose that the ostracism of coworkers acts as political information that influences third-party employees in their decisions to engage in risky and discretionary behaviors such as employee voice. To make sense of and navigate through experiences of workplace ostracism, we posit that both political understanding and political skill allow third-party employees to minimize the risks and uncertainty of voicing. This conceptual model was tested in a study involving 154 supervisor-subordinate dyads of a publicly listed biotechnology firm located in Mainland China. Each supervisor and their direct subordinates composed a work team; each team had a minimum of two and a maximum of four subordinates. Human resources used the master list to distribute the ID-coded questionnaires to the matching names. All studied constructs were measured using existing scales proven effective in previous literature. Hypotheses were tested using Confirmatory Factor Analysis and Hierarchical Multiple Regression. All three hypotheses were supported, showing that employees were less likely to engage in voice behaviors when their coworkers reported having experienced ostracism in the workplace. Results also showed a significant three-way interaction among coworkers' ostracism, political understanding, and political skill on employee voice, indicating that political savviness is a valuable resource for mitigating ostracism's negative and indirect effects. Our results illustrated that having one's coworkers ostracized indeed adversely impacted one's own voice behavior. However, not all individuals reacted passively to the social context; rather, we found that the voice behaviors of politically savvy individuals (those possessing both political understanding and political skill) were less impacted by ostracism in their work environment. At the same time, we found that having only political understanding or only political skill was significantly less effective in mitigating ostracism's negative effects, suggesting a necessary duality of political knowledge and political skill in combatting ostracism. Organizational implications, recommendations, and future research ideas are also discussed.
Keywords: employee voice, organizational politics, social information processing, workplace ostracism
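A hedged sketch of the moderation test described above: a three-way interaction of coworker ostracism, political understanding, and political skill predicting voice. The file name and column names are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# assumed columns: voice, ostracism, understanding, skill (one row per dyad)
df = pd.read_csv("dyads.csv")

model = smf.ols("voice ~ ostracism * understanding * skill", data=df).fit()
print(model.summary())  # the ostracism:understanding:skill coefficient tests the moderation
```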
Procedia PDF Downloads 139
2901 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research relates to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the space distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at no fewer than two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li₂O·SiO₂. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion-exchange process in a NaNO₃ salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion-exchange processing resulted in glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentrations and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
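A schematic 1-D sketch of the kind of coupled model the abstract implies, with the dimensionless parameters β (solubility-related) and γ (ratio of diffusion to dissolution times) as the tuning knobs. The equations below are an illustrative reading of the abstract, not the authors' published model.

```python
import numpy as np

beta, gamma = 0.1, 5.0               # dimensionless parameters to be fitted
nx, nt, dx, dt = 100, 20000, 0.01, 1e-5

c = np.zeros(nx)                     # normalized exchanged-ion concentration
r = np.ones(nx)                      # normalized average crystalline grain size
c[0] = 1.0                           # salt-melt boundary condition

for _ in range(nt):
    # ion-exchange diffusion (explicit scheme, unit diffusivity)
    c[1:-1] += dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, c[-2]
    # grain dissolution driven by the local diffusant at rate gamma,
    # down to a floor set by beta (illustrative coupling)
    r -= dt * gamma * c * np.clip(r - beta, 0.0, None)

# r now holds a grain-size profile: vitrified near the surface, intact in the bulk.
```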
Procedia PDF Downloads 269
2900 Portuguese Guitar Strings Characterization and Comparison
Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante
Abstract:
The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians. One is stainless steel spring wire; the other is high-carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as a little more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch; mechanical strength and tuning stability using a universal testing machine; and dimensional control and chemical composition using scanning electron microscopy. The experiments characterizing the dynamical behaviour of the strings, including frequency response, inharmonicity, transient response and damping phenomena, were made in a monochord test set-up designed and built in-house. The damping factor was determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary. For that purpose, another, improved monochord was set up and a new system identification methodology applied. Due to the complexity of this task, several adjustments were necessary until good experimental data were obtained. In a few cases, dynamical tests were repeated to detect any evolution in damping parameters after the break-in period when, according to players' experience, a new string sounds gradually less dull until reaching its typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded. The recordings, which include individual notes, scales, chords and a study piece, will be analysed to characterize potential timbre variations.
Keywords: damping factor, music wire, Portuguese guitar, string dynamics
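A small sketch of one common way to extract a damping factor from a plucked-string decay, of the kind the monochord tests require. The signal here is synthetic with a known damping ratio, so the estimate can be checked against it; this is an illustration, not the authors' identification method.

```python
import numpy as np

fs, f1, zeta_true = 44100, 440.0, 0.002
t = np.arange(0, 2.0, 1 / fs)
signal = np.exp(-2 * np.pi * f1 * zeta_true * t) * np.sin(2 * np.pi * f1 * t)

# One amplitude sample per period, then a log-linear fit gives the decay rate.
period = int(round(fs / f1))
peaks = np.array([np.abs(signal[i:i + period]).max()
                  for i in range(0, len(signal) - period, period)])
times = np.arange(len(peaks)) * period / fs
slope = np.polyfit(times, np.log(peaks), 1)[0]      # slope = -2*pi*f1*zeta
print("estimated damping ratio:", -slope / (2 * np.pi * f1))  # ~0.002
```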
Procedia PDF Downloads 550
2899 Molecular Dynamics Simulation Studies of Thermal Effects Created by High-Intensity, Ultra-Short Pulses Induced Cell Membrane Electroporation
Authors: Jiahui Song
Abstract:
The use of electric fields with high intensity (~100 kV/cm or higher) and ultra-short pulse durations (nanosecond range) has been a recent development. Most studies of electroporation have ignored possible thermal effects because of the short duration of the applied voltage pulses. However, membrane temperature gradients ranging from 0.2×10⁹ to 10⁹ K/m have been predicted. This research focuses on the thermal effects that drive electroporative enhancement, even though the actual temperature values might not change appreciably from their equilibrium levels. The dynamics of pore formation under an externally applied electric field is studied on the basis of molecular dynamics (MD) simulations using the GROMACS package. MD simulations of a lipid layer with a constant electric field strength of 0.5 V/nm at 25 °C and 47 °C are implemented to simulate the appropriate thermal effects. GROMACS provides the force fields for the lipid membranes, which are taken to comprise dipalmitoyl-phosphatidyl-choline (DPPC) molecules. The water model mimics the aqueous environment surrounding the membrane. Velocities of water and membrane molecules are generated randomly at each simulation run according to a Maxwellian distribution. A high background electric field is typically used in MD simulations to probe electroporation; it serves as an accelerated test of the pore formation process, since low electric fields would take inordinately long simulation times. The MD simulation shows no pore formed in a 1-ns snapshot of a DPPC membrane held at 25 °C after a 0.5 V/nm electric field is applied. A nano-sized pore is clearly seen in a 0.75-ns snapshot of the same geometry, but with the membrane surfaces kept at 47 °C, and the pore grows further at 1 ns. The MD simulation results suggest that the increase in temperature can result in different degrees of electrically stimulated bio-effects. The results point to the role of thermal effects in facilitating and accelerating the electroporation process.
Keywords: high-intensity, ultra-short, electroporation, thermal effects, molecular dynamics
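Quick arithmetic behind the quoted gradients: over a lipid bilayer of assumed ~5 nm thickness, gradients of order 10⁹ K/m correspond to only a few kelvin of temperature difference, which is why the equilibrium temperature barely changes while the gradient is enormous.

```python
membrane_thickness_m = 5e-9                  # assumed bilayer thickness
for grad in (0.2e9, 1.0e9):                  # K/m, range quoted above
    dT = grad * membrane_thickness_m
    print(f"{grad:.1e} K/m -> dT = {dT:.1f} K across the bilayer")  # 1.0 K and 5.0 K
```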
Procedia PDF Downloads 50
2898 Tribological Properties of Non-Stick Coatings Used in Bread Baking Process
Authors: Maurice Brogly, Edwige Privas, Rajesh K. Gajendran, Sophie Bistac
Abstract:
Anti-stick coatings based on perfluoroalkoxy (PFA) are widely used in the food processing industry, especially for bread making. Their tribological performance, such as a low friction coefficient, low surface energy and high heat resistance, makes them an appropriate choice for anti-stick coating applications in moulds for the food processing industry. This study is dedicated to evidencing the transfer of contaminants from the coating due to wear and thermal ageing of the mould. The risk of contamination arises from damage to the coating by the bread crust during the demoulding stage. The study focuses on the wear resistance and the potential transfer of perfluorinated polymer from the anti-stick coating. Friction between the perfluorinated coating and the bread crust is modeled by a tribological pin-on-disc test, and the cellular nature of the bread crust is modeled by a polymer foam. FTIR analysis of the polymer foam after friction allows the evaluation of transfer from the perfluorinated coating to the foam. The influence of thermal ageing on the physical, chemical and wear properties of the coating is also investigated. FTIR spectroscopic results show that the increase of PFA transfer onto the foam counterface is associated with a decrease of the friction coefficient: increasing lubrication by film transfer results in a lower friction coefficient. Moreover, increasing the friction test parameters (load, speed and sliding distance) also increases the film transfer onto the counterface. Thermal ageing increases the hydrophobic character of the PFA coating and thus also decreases the friction coefficient.
Keywords: fluorobased polymer coatings, FTIR spectroscopy, non-stick food moulds, wear and friction
Procedia PDF Downloads 329
2897 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System
Authors: Min Hae Song, Jooyong Park
Abstract:
Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests. Multiple-choice tests (MC) are efficient for automatic grading; however, their structure allows students to find the correct answer among the options even when they do not know the answer. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers; it presents the question first, and the options later, for a short time, when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with the instruction that they would be penalized for wrong answers, but not for no response. There were 4 experimental conditions: two conditions of high or low penalty percentage, each in the traditional multiple-choice or CMMT format. In the low-penalty condition, the penalty rate was the probability of getting the correct answer by random guessing. In the high-penalty condition, students were penalized at twice the percentage of the low-penalty condition. The results showed that the number of no-responses was significantly higher for the CMMT format and the number of random guesses was significantly lower for the CMMT format. There were no significant differences between the two penalty conditions. This result may be due to the fact that the actual score difference between the two conditions was too small. In the discussion, the possibility of applying CMMT-format tests while penalizing wrong answers in actual testing settings is addressed.
Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format
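A sketch of the penalty scheme as described: the per-wrong-answer deduction equals the probability of guessing correctly (assumed here to be 1/4 for four options), and the high-penalty condition doubles it. The item counts are illustrative.

```python
def score(n_correct, n_wrong, p_guess=0.25, penalty_factor=1):
    """Penalty per wrong answer = penalty_factor * p_guess; no response costs nothing."""
    return n_correct - penalty_factor * p_guess * n_wrong

# Expected outcome of blind-guessing 100 four-option items (25 right on average):
for factor, label in ((1, "low"), (2, "high")):
    print(f"{label}-penalty condition: expected score {score(25, 75, penalty_factor=factor):+.2f}")
# Under the high penalty, blind guessing is expected to lose points,
# which is what should discourage random guessing.
```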
Procedia PDF Downloads 166
2896 The Current Level of Shared Decision-Making in Head-And-Neck Oncology: An Exploratory Study – Preliminary Results
Authors: Anne N. Heirman, Song Duimel, Rob van Son, Lisette van der Molen, Richard Dirven, Gyorgi B. Halmos, Julia van Weert, Michiel W.M. van den Brekel
Abstract:
Objectives: Treatments for head-and-neck cancer are drastic and often significantly impact the quality of life and appearance of patients. Shared decision-making (SDM) involves a collaboration between patient and doctor in which the most suitable treatment can be chosen by integrating patient preferences, values, and medical information. SDM has many advantages that would be useful in making difficult treatment choices. The objective of this study was to determine the current level of SDM among patients and head-and-neck surgeons. Methods: Consultations of patients with a non-cutaneous head-and-neck malignancy facing a treatment decision were selected and included. After informed consent was given, the consultation was recorded with an audio recorder, and the patient and surgeon filled in a questionnaire immediately afterwards. The SDM level of the consultation was scored objectively by independent observers, who judged the audio recordings using the OPTION5 scale, ranging from 0% (no SDM) to 100% (optimum SDM), as well as subjectively by patients (using the SDM-Q-9 and the Control Preference Scale) and clinicians (SDM-Q-Doc, modified Control Preference Scale). Preliminary results: Each of five head-and-neck surgeons had at least seven recorded conversations with different patients. One of the surgeons was trained in SDM; the other four had no experience with SDM. Most patients were male (74%), and oropharyngeal carcinoma was the most common diagnosis (41%), followed by oral cancer (33%). Five patients received palliative treatment, of whom two were not treated according to guidelines. At this moment, all recordings are being scored by the two independent observers; analysis of the results will follow soon. Conclusion: The current study will determine to what extent there is a discrepancy between the objective and subjective levels of shared decision-making (SDM) during doctor-patient consultations in head-and-neck surgery. The results of the analysis will follow shortly.
Keywords: head-and-neck oncology, patient involvement, physician-patient relations, shared decision making
Procedia PDF Downloads 93
2895 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web
Authors: Aayushi Somani, Siba P. Samal
Abstract:
Three-dimensional (3D) meshes are data structures which store geometric information about an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies acquire high-resolution samplings, which lead to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies such as WebGL and WebVR has enabled high-fidelity rendering of huge meshes. However, there remains a gap in the ability to stream huge meshes to native client and browser applications due to high network latency, and there is an inherent delay in loading WebGL pages with large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step approach to random-accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes, on which we invoke data parallelism for compression. The subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept could completely change the way e-commerce and virtual reality technology work on consumer electronic devices: objects can be compressed on the server and transmitted over the network, while progressive decompression is performed on the client device before rendering. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a better and smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (the compressed size is ~10-15% of the original mesh), processing time (a 20-22% gain over the serial implementation) and quality of the user experience in the web browser.
Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR
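A minimal sketch (not the authors' codec) of the two-step idea: partition a mesh into sub-meshes, then compress them in parallel so each blob can later be decompressed and streamed independently for random access. zlib stands in here for the actual mesh codec.

```python
import zlib
from multiprocessing import Pool

def compress_submesh(vertex_bytes):
    return zlib.compress(vertex_bytes, level=6)

def partition(vertex_data, n_parts):
    """Split serialized vertex data into roughly equal chunks."""
    step = max(1, len(vertex_data) // n_parts)
    return [vertex_data[i:i + step] for i in range(0, len(vertex_data), step)]

if __name__ == "__main__":
    mesh = bytes(3_000_000)                 # stand-in for serialized vertex data
    submeshes = partition(mesh, n_parts=8)
    with Pool() as pool:                    # data parallelism over sub-meshes
        blobs = pool.map(compress_submesh, submeshes)
    print(len(mesh), sum(len(b) for b in blobs))
```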
Procedia PDF Downloads 168
2894 Investigating the English Speech Processing System of EFL Japanese Older Children
Authors: Hiromi Kawai
Abstract:
This study investigates the nature of EFL older children's L2 perceptive and productive abilities using classroom data, in order to find a pedagogical solution for the teaching of L2 sounds at an early stage of learning in a formal school setting. It is still inconclusive whether older children who receive only formal EFL school instruction at the initial stage of L2 learning are able to attain native-like perception and production in English within the very limited amount of exposure to the target language available. Prompted by the lack of studies on EFL Japanese children's acquisition of English segments, the researcher uses a model of L1 speech processing which was developed for investigating L1 English children's speech and literacy difficulties within a psycholinguistic framework. The model is composed of an input channel, an output channel, and lexical representations, and examines how a child receives information from spoken or written language, remembers and stores it within the lexical representations, and how the child selects and produces spoken or written words. Concerning language universality and language specificity in the language acquisition process, the aim of finding sound errors in L1 English children conforms with the author's intention to examine the command of English sounds in older Japanese children at the novice level of English in an EFL setting. 104 students in Grade 5 (between the ages of 10 and 11) of an elementary school in Tokyo participated in this study. Four tests measuring their perceptive ability and three oral repetition tests measuring their productive ability were conducted, with and without reference to lexical representations. All test items were analyzed to calculate item facility (IF) indices, and correlational analyses and Structural Equation Modeling (SEM) were conducted to examine the relationship between receptive and productive ability. The IF analysis showed that (1) the participants were better at perceiving a segment than producing it, (2) they had difficulty in the auditory discrimination of paired consonants when one of them does not exist in the Japanese inventory, (3) they had difficulty in both perceiving and producing English vowels, and (4) their L1 loanword knowledge influenced their ability to perceive and produce L2 sounds. The result of the multiple regression modeling showed that the two production tests could predict the participants' auditory ability for real words in English. The result of SEM supported the hypothesis that perceptive ability affects productive ability. Based on these findings, the author discusses a possible explicit method of teaching English segments to EFL older children in a formal school setting.
Keywords: EFL older children, English segments, perception, production, speech processing system
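A sketch of the item-facility (IF) computation mentioned above: the proportion of participants answering each item correctly. The response matrix here is randomly generated for illustration, not the study's data.

```python
import numpy as np

# rows = 104 participants, columns = test items; 1 = correct, 0 = incorrect
responses = np.random.default_rng(0).integers(0, 2, size=(104, 20))
item_facility = responses.mean(axis=0)      # one IF index per item, in [0, 1]
print(np.round(item_facility, 2))
```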
Procedia PDF Downloads 243
2893 Electrospun Membrane Doped with Gold Nanorods for Surface-Enhanced Raman Spectroscopy
Authors: Ziwei Wang, Andrea Lucotti, Luigi Brambilla, Matteo Tommasini, Chiara Bertarelli
Abstract:
Surface-enhanced Raman spectroscopy (SERS) is a highly sensitive detection technique that provides abundant information on low-concentration analytes across various research areas. Based on localized surface plasmon resonance, metal nanostructures including gold, silver and copper have been investigated as SERS substrates over recent decades, and there has been increasing attention on exploring well-performing, homogeneous, repeatable SERS substrates. Here, we show that electrospinning, an inexpensive technique for fabricating large-scale, self-standing and reproducible membranes, can be effectively used to produce SERS substrates. Nanoparticles and nanorods are added to the electrospinning feed solution to collect functionalized polymer fibrous mats. We report stable electrospun membranes as SERS substrates using gold nanorods (AuNRs) and poly(vinyl alcohol) (PVA). In particular, a post-processing crosslinking step using glutaraldehyde in an acetone environment was carried out on the electrospun membrane. It allows the membrane to be used in any liquid environment, including water, which is of interest both for sensing contaminants in wastewater and for biosensing. This crosslinked AuNRs/PVA membrane has demonstrated excellent performance as a SERS substrate for a low-concentration (10⁻⁶ M) aqueous solution of Rhodamine 6G (Rh6G). This post-processing route for fabricating SERS substrates is reported for the first time, and Raman imaging proved its excellent stability and outstanding performance. Finally, SERS tests were applied to several analytes, and the applicability of the AuNRs/PVA membrane is broadened by the fact that the detected analyte can be removed by rinsing; the crosslinked AuNRs/PVA membrane is therefore re-usable.
Keywords: SERS spectroscopy, electrospinning, crosslinking, composite materials
Procedia PDF Downloads 139
2892 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic
Authors: Michael Lousis
Abstract:
The most conspicuous and significant errors made by learners during three years of testing of their progress in learning Arithmetic, throughout the development of the Kassel Project in England and Greece, were systematically identified. It was also shown how retentive these errors were over the three years of officially provided school instruction in Arithmetic in these countries. The learners' errors in Arithmetic stemmed from a sample comprising two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students' participation in each testing session of the three-year project, simultaneously in both domains, Arithmetic and Algebra. Specific teaching practices for subverting these learners' errors, which were found to be retentive at the level of the nationally provided mathematical education of each country, have been devised and are presented in this study. The invention and development of these proposed teaching practices were founded on the rationale of the theoretical accounts concerning the explanation, prediction and control of the errors, on conceptual metaphor, and on an analysis that tried to identify the cognitive components and skills required by the specific tasks, in terms of Psychology and Cognitive Science as applied to information processing. The aim of implementing these instructional practices is not only the subversion of these errors but the achievement of mathematical competence, defined as constituted of three elements: appropriate representations - appropriate meaning - appropriately developed schemata. However, praxis is of paramount importance, because there is no 'real truth' independent of science, and because praxis serves as quality control when it takes the form of a cognitive method.
Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors
Procedia PDF Downloads 315
2891 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models
Authors: Danielle Shackley, Yetunde Folajimi
Abstract:
As more people turn to the internet seeking health-related information, there is a greater risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods such as count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, then reproduced those same models with the feature added, and evaluated them using precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and stayed constant at 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we had a 1.9% improvement margin in the precision score with the Complement model. Future expansion of this work could include replicating the experimental process and replacing the Naive Bayes classifiers with a deep learning neural network model.
Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model
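A minimal sketch of the benchmark-versus-augmented comparison: a count-vectorized Complement Naive Bayes model, with an optional sentiment score appended as an extra feature column. The load_data helper is an assumption standing in for the dataset described above.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import ComplementNB

# assumed helper: headline texts, 0/1 labels, sentiment polarity scaled to [0, 1]
headlines, labels, sentiments = load_data()

X_text = CountVectorizer().fit_transform(headlines)
X_aug = hstack([X_text, csr_matrix(np.array(sentiments)[:, None])]).tocsr()

for X in (X_text, X_aug):                    # benchmark vs. sentiment-augmented
    Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
    print(ComplementNB().fit(Xtr, ytr).score(Xte, yte))
```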
Procedia PDF Downloads 96
2890 Studies on the Histomorphometry of the Digestive Tract and Associated Digestive Glands in Ostrich (Struthio camelus) with Gender and Progressing Age in Pakistan
Authors: Zaima Umar, Anas S. Qureshi, Adeel Sarfraz, Saqib Umar, Talha Umar, Muhammad Usman
Abstract:
The ostrich has been a good source of food and income for people across the world, and knowledge of its digestive system is of utmost importance for a better understanding of its health and health-related problems. The present study was conducted to determine the morphological and histometrical variations in the digestive system and associated glands of the ostrich (Struthio camelus) with regard to gender and progressing age. A total of 40 apparently healthy ostriches of both genders and two progressive age groups, young (less than two years, group A) and adult (2-15 years, group B), in equal numbers, were used in this study. Digestive organs including the tongue, esophagus, proventriculus, gizzard, small and large intestines, and associated glands such as the liver and pancreas were collected immediately after slaughtering the birds. The organs of the digestive system and the associated glands of each group were studied grossly and histologically. Grossly, the colour, shape, consistency, weight and various dimensions (length, width, and circumference) of the organs of the digestive tract and associated glands were recorded. The means (± SEM) of all gross anatomical parameters in group A were significantly (p ≤ 0.01) different from those of group B. For microscopic studies, 1-2 cm tissue samples of the organs of the digestive system and associated glands were taken. The tissue was marked and fixed in neutral buffered formaldehyde solution for histological studies. After fixation, sections of 5-7 µm were cut and stained with haematoxylin and eosin. All the layers (epithelium, lamina propria, lamina muscularis, submucosa and tunica muscularis) were measured (µm) with the help of the automated computer software Image J®. The results of this study provide valuable information on the gender- and age-related histological and histometrical variations in the digestive organs of the ostrich (Struthio camelus). The microscopic studies of different parts of the digestive system revealed highly significant differences (p ≤ 0.01) between the two groups. The esophagus was lined by non-keratinized stratified squamous epithelium. The duodenum, jejunum, and ileum showed similar histological structures. Statistical analysis revealed a significant (p ≤ 0.05) increase in the thickness of the different tunics of the gastrointestinal tract in adult birds (up to 15 years) compared with young ones (less than two years). It can therefore be concluded that there is a gradual but consistent growth in the observed digestive organs, mimicking that of other poultry species, which may be helpful in determining the growth pattern of this bird. However, there is a need to record the changes at closer time intervals.
Keywords: ostrich, digestive system, histomorphometry, grossly
Procedia PDF Downloads 144
2889 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic activity of the organizations directly involved in the production, storage and sale of food products, as well as without the management of end-to-end traceability and exchange of information. The aim of this research is the development of an integrated system of quality management and safety assurance based on the principles of HACCP, traceability and a system approach, with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of cooked sausage manufacture. A methodology for implementing the integrated system, based on the principles of HACCP, traceability and a system approach, during the manufacture of cooked sausage products was developed to effectively provide for the defined properties of the finished product. As a result of the research, an evaluation technique and criteria for the performance of the implementation and operation of the quality management and safety assurance system based on the HACCP principles have been developed and substantiated. The paper reveals regularities of the influence of applying the HACCP principles, traceability and the system approach on the quality and safety parameters of the finished product, as well as regularities in the identification of critical control points. The algorithm of functioning of the integrated quality management and safety assurance system is also described, and key requirements are defined for software that would allow the prediction of finished product properties, timely correction of the technological process, and traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system based on HACCP principles, with elements of end-to-end traceability and a system approach for the manufacture of cooked sausage products, has been developed, together with quantitative criteria for evaluating the system's performance. A set of guidance documents for the implementation and evaluation of the integrated HACCP-based system in meat processing plants has also been developed. The research revealed the effectiveness of continuous monitoring of the manufacturing process at the identified critical control points, and the optimal number of critical control points for the manufacture of cooked sausage products was substantiated. The main results of the research were appraised during 2013-2014 under the conditions of seven enterprises of the meat processing industry and have been implemented at JSC «Kyiv meat processing plant».
Keywords: cooked sausage products, HACCP, quality management, safety assurance
Procedia PDF Downloads 246
2888 Design and Development of Fleet Management System for Multi-Agent Autonomous Surface Vessel
Authors: Zulkifli Zainal Abidin, Ahmad Shahril Mohd Ghani
Abstract:
Agent-based systems technology has been addressed as a new paradigm for conceptualizing, designing, and implementing software systems. Agents are sophisticated systems that act autonomously across open and distributed environments in solving problems. Nevertheless, it is impractical to rely on a single agent to do all the computing processes in solving complex problems, and an increasing number of applications lately require multiple agents to work together. A multi-agent system (MAS) is a loosely coupled network of agents that interact to solve problems that are beyond the individual capacities or knowledge of each problem solver. However, a network of MAS still requires a main system to govern or oversee the operation of the agents in order to achieve a unified goal. We developed a fleet management system (FMS) to manage the fleet of agents, plan routes for the agents, perform real-time data processing and analysis, and issue sets of general and specific instructions to the agents. This FMS should be able to perform real-time data processing, communicate with the autonomous surface vehicle (ASV) agents, and generate a bathymetric map according to the data received from each ASV unit. The first algorithm developed handles communication with the ASVs via radio using standard National Marine Electronics Association (NMEA) protocol sentences. The second algorithm takes care of path planning, formation and pattern generation, and is tested using various sample data. Lastly, the bathymetry map generation algorithm makes use of the data collected by the agents to create a bathymetric map in real time. The outcome of this research is expected to be applicable to various other multi-agent systems.
Keywords: autonomous surface vehicle, fleet management system, multi agent system, bathymetry
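A minimal sketch (not the authors' implementation) of the first algorithm's job: validating and splitting a standard NMEA 0183 sentence of the kind exchanged with each ASV. The sample GGA sentence is a widely used textbook example, not data from this project.

```python
def parse_nmea(sentence):
    """Return (talker+type, fields) if the checksum matches, else None."""
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    computed = 0
    for ch in body:
        computed ^= ord(ch)                 # NMEA checksum: XOR of payload bytes
    if f"{computed:02X}" != checksum.upper():
        return None
    fields = body.split(",")
    return fields[0], fields[1:]

print(parse_nmea("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
```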
2887 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence
Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello
Abstract:
Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed how computers handle language, allowing them to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies such as chatbots. The technology shows promise across a variety of medical uses, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, and boosting radiology report quality. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for sentiment analysis and emotion recognition, i.e., evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project adopts a personalized intervention strategy, in which chatbots personalize responses by merging NLP algorithms with specific user profiles, treatment histories, and emotional states. The synergy between NLP and the principles of personalized medicine is critical for tailoring chatbot interactions to each user's needs and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The survey, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care
Procedia PDF Downloads 74
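The methodology pairs sentiment analysis with profile-based personalization of chatbot replies, but the abstract does not specify the models used. The sketch below is therefore only an assumed minimal pipeline, in Python with scikit-learn, in which a TF-IDF plus logistic-regression classifier feeds a toy personalization rule; the training texts, labels, and profile fields are all hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would use a labeled clinical corpus.
texts = ["I feel hopeful after today's session",
         "I can't sleep and everything feels heavy",
         "Thanks, the new plan is working well",
         "I am anxious about the next appointment"]
labels = ["positive", "negative", "positive", "negative"]

sentiment = make_pipeline(TfidfVectorizer(), LogisticRegression())
sentiment.fit(texts, labels)

def respond(user_id: str, message: str, profiles: dict) -> str:
    """Shape the chatbot reply using the predicted sentiment plus the
    user's stored profile (name, treatment history, preferred tone)."""
    mood = sentiment.predict([message])[0]
    name = profiles.get(user_id, {}).get("name", "there")
    if mood == "negative":
        return f"I'm sorry to hear that, {name}. Would some coping exercises help?"
    return f"That's great to hear, {name}! Let's keep the momentum going."

print(respond("u1", "I am anxious about the next appointment",
              {"u1": {"name": "Alex"}}))
```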
2886 An Analysis of Learners’ Reports for Measuring Co-Creational Education
Authors: Takatoshi Ishii, Koji Kimita, Keiichi Muramatsu, Yoshiki Shimomura
Abstract:
To increase the quality of learning, teachers and learners must make a mutual effort toward the realization of educational value. For this purpose, the co-creational education between teacher and learners needs to be managed. In this research, we try to find features of co-creational education. More precisely, we analyzed learners' reports with natural language processing and extracted features that describe the state of co-creational education.
Keywords: co-creational education, e-portfolios, ICT integration, latent dirichlet allocation
Procedia PDF Downloads 620
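The keyword list points to latent Dirichlet allocation (LDA) as the NLP technique applied to the learners' reports, though the abstract gives no implementation details. The following is a hedged minimal sketch using scikit-learn's LatentDirichletAllocation on invented report snippets; the corpus, topic count, and interpretation are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy learner-report snippets; real input would be full course reports.
reports = ["the group discussion helped me understand the assignment",
           "I revised my portfolio after the teacher's feedback",
           "we shared ideas and improved the project together",
           "feedback on my draft clarified the grading criteria"]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(reports)  # document-term matrix of word counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Top words per topic serve as a rough proxy for the "state" of
# co-creation reflected in the reports.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```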
2885 An EBSD Investigation of Ti-6Al-4Nb Alloy Processed by Plane Strain Compression Test
Authors: Anna Jastrzebska, K. S. Suresh, T. Kitashima, Y. Yamabe-Mitarai, Z. Pakiela
Abstract:
Near α titanium alloys are important materials for aerospace applications, especially at high temperatures such as in jet engines. The mechanical properties of Ti alloys strongly depend on their processing route, so it is very important to understand how the microstructure changes under different processing. In our previous study, Nb was found to improve the oxidation resistance of Ti alloys. In this study, the microstructure evolution of a Ti-6Al-4Nb (wt%) alloy was investigated after plane strain compression tests at hot working temperatures in the α and β phase regions. High-resolution EBSD was successfully used for precise phase and texture characterization of this alloy. A 1.1 kg Ti-6Al-4Nb ingot was prepared using cold crucible levitation melting. The ingot was subsequently homogenized at 1050 °C for 1 h, followed by cooling in air. Plate-like specimens measuring 10 × 20 × 50 mm³ were cut from the ingot by electrical discharge machining (EDM). Plane strain compression tests using a 10 × 35 mm anvil were performed at three different strain rates, 0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹, at 700 °C and 1050 °C, to obtain 75% deformation. The microstructure was investigated by scanning electron microscopy (SEM) equipped with an electron backscatter diffraction (EBSD) detector. The α/β phase ratio and phase morphology, as well as the crystallographic texture, subgrain size, misorientation angles, and misorientation gradients corresponding to each phase, were determined over the middle and edge areas of the samples. The deformation mechanism at each working temperature is discussed, and the evolution of texture with strain rate is investigated. The microstructure obtained by the plane strain compression tests was heterogeneous, with a wide range of grain sizes, because deformation and dynamic recrystallization occurred during deformation at temperatures in the α and β phase regions; it was strongly influenced by strain rate.
Keywords: EBSD, plane strain compression test, Ti alloys
Procedia PDF Downloads 379
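Among the EBSD quantities reported, the misorientation angle between neighboring orientations follows a standard formula that the abstract does not spell out. As a minimal hedged sketch, the Python code below computes the misorientation angle between two orientations given as rotation matrices via θ = arccos((tr(R₁ᵀR₂) − 1)/2); note that a real EBSD analysis would additionally minimize over the crystal symmetry operators (hexagonal for the α phase, cubic for β), which are omitted here.

```python
import numpy as np

def misorientation_deg(R1: np.ndarray, R2: np.ndarray) -> float:
    """Misorientation angle between two grain orientations given as
    rotation matrices: theta = arccos((trace(R1^T R2) - 1) / 2).
    Crystal symmetry operators are deliberately omitted from this sketch."""
    dR = R1.T @ R2
    cos_theta = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def rot_z(deg: float) -> np.ndarray:
    """Rotation matrix for a rotation of `deg` degrees about the z axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Two orientations differing by a 15-degree rotation about z.
print(misorientation_deg(rot_z(0.0), rot_z(15.0)))  # ~15.0
```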
2884 Quality Analysis of Vegetables Through Image Processing
Authors: Abdul Khalique Baloch, Ali Okatan
Abstract:
The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research we reviewed the literature, identified gaps in it, proposed an improved approach, designed the algorithm, and developed software to measure quality from images, comparing the results with previous work. The application uses an open-source image dataset and is written in Python with the TensorFlow Lite framework. The focus is on sorting food and vegetables from images: after processing the images, the application sorts and grades the produce, which can yield fewer errors than manual human grading. Digital picture datasets were created and the collected images were arranged by class; the classification accuracy of the system was about 94%. As fruits and vegetables play a central role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. Many customers suffer from unhealthy fruits and vegetables supplied to them, and no proper quality-measurement procedure is followed by hotel managements. The developed software measures the quality of fruits and vegetables from images and reports whether they are fresh or rotten. The approaches reviewed in this work include digital image processing, ResNet, VGG16, CNNs, and transfer learning for grading feature extraction.
Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria
Procedia PDF Downloads 68
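The abstract lists transfer learning with backbones such as ResNet and VGG16 but includes no code. The sketch below is a hedged minimal Keras example of fine-tuning a frozen VGG16 base for a two-class fresh/rotten classifier and converting it to TensorFlow Lite, the framework named in the abstract; the directory layout, image size, and training settings are assumptions, not details from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen ImageNet backbone; only the new classification head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # fresh (0) vs. rotten (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: data/train/fresh and data/train/rotten.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)

# Convert for on-device inference with TensorFlow Lite.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("fruit_quality.tflite", "wb") as f:
    f.write(tflite_model)
```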
2883 Disaster Response Training Simulator Based on Augmented Reality, Virtual Reality, and MPEG-DASH
Authors: Sunho Seo, Younghwan Shin, Jong-Hong Park, Sooeun Song, Junsung Kim, Jusik Yun, Yongkyun Kim, Jong-Moon Chung
Abstract:
In order to cope effectively with large and complex disasters, disaster response training is needed. Recently, disaster response training led by the ROK (Republic of Korea) government has been implemented through a four-year R&D project, which has several functions similar to the HSEEP (Homeland Security Exercise and Evaluation Program) of the United States, as well as several distinct features. Due to the unpredictability and diversity of disasters, existing training methods have many limitations in providing experience in the efficient use of disaster incident response and recovery resources. The challenge is always to be as efficient and effective as possible with the limited human and material resources available, given the time and environmental circumstances. To enable repeated training under diverse scenarios, a combined AR (Augmented Reality) and VR (Virtual Reality) simulator is under development. Unlike existing disaster response training, simulator-based training (which allows simultaneous multi-user training via remote login) frees trainees from time and space constraints and permits repeated training with different combinations of functions and disaster situations. Related systems exist, such as the ADMS (Advanced Disaster Management Simulator) developed by ETC Simulation and the HLS2 (Homeland Security Simulation System) developed by Elbit Systems. However, the ROK government needs a simulator custom-made for the country's environment and disaster types that also combines the latest information and communication technologies, including AR, VR, and MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP). In this paper, a new disaster response training simulator is proposed to overcome the limitations of existing training systems and to adapt to actual disaster situations in the ROK, and several of its technical features are described.
Keywords: augmented reality, emergency response training simulator, MPEG-DASH, virtual reality
Procedia PDF Downloads 299
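MPEG-DASH is named as the streaming technology in the simulator, but the abstract does not describe the client side. As an assumed illustration only, the Python sketch below fetches a DASH manifest (MPD) and lists the available representations by bandwidth, which is the information an adaptive player switches on; the manifest URL is a placeholder, and only the XML namespace comes from the DASH specification.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Namespace defined by the MPEG-DASH specification (ISO/IEC 23009-1).
DASH_NS = "urn:mpeg:dash:schema:mpd:2011"

def list_representations(mpd_url: str):
    """Fetch a DASH manifest (MPD) and return (id, bandwidth) pairs for
    each Representation; an adaptive client switches among these."""
    with urllib.request.urlopen(mpd_url) as resp:
        root = ET.fromstring(resp.read())
    reps = [(rep.get("id"), int(rep.get("bandwidth", "0")))
            for rep in root.iter(f"{{{DASH_NS}}}Representation")]
    return sorted(reps, key=lambda r: r[1])

# Hypothetical manifest URL for the training video content.
for rep_id, bw in list_representations("http://example.com/training.mpd"):
    print(f"representation {rep_id}: {bw / 1000:.0f} kbps")
```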
2882 A New Direction of Urban Regeneration: Form-Based Urban Reconstruction through the Idea of Bricolage
Authors: Hyejin Song, Jin Baek
Abstract:
Based on the idea of bricolage, that a new meaning beyond that of each individual object can be created through the combination and juxtaposition of various objets, this study seeks a way of morphologically recomposing urban space by combining and juxtaposing the existing urban fabric with new fabric, and suggests this idea as a new direction for urban regeneration. The study takes the concept of bricolage as a philosophical ground for interpreting the contemporary urban situation. In this concept, urban objects such as buildings from various zeitgeists are regarded positively as potential textures that can construct a meaningful context. Seoul, a city with a long history that has experienced both colonization and rapid development, exhibits a dynamic urban structure full of objects from various periods. However, in contrast to the successful plazas and streets of Europe, the objects of Seoul do not form a meaningful context as public space, owing to thoughtless development. This study defines this situation as a 'disorganized fabric'. Following the concept of bricolage, to find a way for these scattered objects to be organized into the context of a meaningful public space, the study first examines cases of successful public space through morphological analysis. Second, it carefully explores urban space in Seoul, drawing figure-ground diagrams to grasp the form of the current urban fabric composed of various urban objects. As a result of this exploration, many urban spaces, from Myeong-dong, one of the most vibrant commercial districts in Seoul, to declining residential areas, are judged to have a potential fabric that can become a meaningful context through only small adjustments of the relationships between existing objects. The study also confirms that, by inserting a new object with consideration of the form of the existing fabric, it is possible to give a new context as a plaza to an existing void that has been broken into several parts. The study defines this as form-based urban reconstruction through the idea of bricolage and suggests that it could be one philosophical ground for successful urban regeneration.
Keywords: adjustment of relationship between existing objets, bricolage, morphological analysis of urban fabric, urban regeneration, urban reconstruction
Procedia PDF Downloads 317