Search results for: threshold detecting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1543

223 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signals liveness, ensuring that only a legitimate, living individual can be identified. However, the detection accuracy of current ECG-based methods is insufficient because of the high variability of an individual's heartbeats at different instants of time. These variations may occur due to muscle flexure, changes in mental or emotional state, changes in sensor position, or long-term baseline shift during ECG recording. In this study, a new method is proposed for human identification based on extracting the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant, a pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of QRS complex detection. Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects in the National Metrology Institute of Germany (NMIG) PTB database showed that the proposed method is promising compared with a conventional interval- and amplitude-feature-based method.
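
The pattern extraction step lends itself to a compact sketch. Below is a minimal, hypothetical Python implementation of a 1D local-binary-pattern histogram of the kind the abstract describes; the window size, the placeholder signal, and the reading of the stated cut-offs as normalized frequencies are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def local_roughness_histogram(ecg, half_window=4, n_bins=None):
    """1D local binary pattern histogram of an ECG trace.

    At each sample, neighbours in a window of width 2*half_window are
    compared against the centre value; the resulting bits are weighted
    by powers of two to form a pattern code, and a histogram of codes
    describes the whole heartbeat signal.
    """
    p = 2 * half_window                      # bits per pattern
    n_bins = n_bins or 2 ** p
    codes = []
    for i in range(half_window, len(ecg) - half_window):
        centre = ecg[i]
        neighbours = np.concatenate(
            (ecg[i - half_window:i], ecg[i + 1:i + 1 + half_window]))
        bits = (neighbours >= centre).astype(int)
        codes.append(int((bits * 2 ** np.arange(p)).sum()))
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, 2 ** p))
    return hist / hist.sum()                 # normalised descriptor

# Band-pass preprocessing as described (normalized cut-offs assumed)
b, a = butter(2, [0.00025, 0.04], btype="band")
ecg_raw = np.random.randn(5000)              # placeholder signal
descriptor = local_roughness_histogram(filtfilt(b, a, ecg_raw))
```

Because the descriptor is a histogram over the whole trace, no QRS detection is required, which matches the advantage claimed above.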

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 404
222 Benefits of Monitoring Acid Sulfate Potential of Coffee Rock (Indurated Sand) across Entire Dredge Cycle in South East Queensland

Authors: S. Albert, R. Cossu, A. Grinham, C. Heatherington, C. Wilson

Abstract:

Shipping trends suggest that vessels of increasing size and draught are visiting Australian ports, highlighting potential challenges to port infrastructure and requiring optimization of shipping channels to ensure safe passage. The Port of Brisbane in Queensland, Australia has an 80 km long access shipping channel, in which vessels must transit 15 km of relatively shallow coffee rock (a generic class of indurated sand in which sand grains are bound within an organic clay matrix) outcrops toward the northern passage in Moreton Bay. This represents a risk to shipping channel deepening and maintenance programs, as the material is more challenging to dredge due to its high cohesive strength compared with the surrounding marine sands and its potentially higher acid sulfate risk. In situ assessment of acid sulfate sediment for dredge spoil control is an important tool in mitigating ecological harm. Coffee rock in an anoxic, undisturbed state does not pose any acid sulfate risk; however, when it is disturbed by dredging, it is vital to ensure that any iron sulfides present are either insignificant or neutralized. To better understand the potential risk, we examined the reduction potential of coffee rock across the entire dredge cycle in order to accurately portray the true outcome of disturbed acid sulfate sediment in dredging operations in Moreton Bay. In December 2014, a dredge trial was undertaken with a trailing suction hopper dredger. In situ samples collected prior to dredging revealed acid sulfate potential above threshold guidelines, which could lead to expensive dredge spoil management. However, when acid sulfate risk was then monitored in the hopper and the subsequent discharge, both showed that a significant reduction in acid sulfate potential had occurred. Additionally, the acid neutralizing capacity significantly increased due to the inclusion of shell fragments (calcium carbonate) from the dredge target areas. This clearly demonstrates the importance of assessing potential acid sulfate risk across the entire dredging cycle and highlights the need to carefully evaluate sources of acidity.

Keywords: acid sulfate, coffee rock, indurated sand, dredging, maintenance dredging

Procedia PDF Downloads 368
221 Examining Reading Comprehension Skills Based on Different Reading Comprehension Frameworks and Taxonomies

Authors: Seval Kula-Kartal

Abstract:

Developing students' reading comprehension skills is an aim that is difficult to accomplish and requires long-term, systematic teaching and assessment processes. In these processes, teachers need tools that guide them on what reading comprehension is and which comprehension skills they should develop. Due to the lack of clear, evidence-based frameworks defining reading comprehension skills, especially in Turkiye, teachers and students mostly follow classroom processes without a clear idea of what their comprehension goals are and what those goals mean. Since teachers and students do not have a clear view of comprehension targets or of the strengths and weaknesses in students' comprehension skills, formative feedback processes cannot be managed effectively. Detecting and defining influential comprehension skills may provide guidance to both teachers and students during the feedback process. Therefore, in the current study, some of the reading comprehension frameworks that define comprehension skills operationally were examined. The aim of the study is to develop a simple and clear framework that can be used by teachers and students during their teaching, learning, assessment, and feedback processes. The current study is qualitative research in which documents related to reading comprehension skills were analyzed. The study group therefore consisted of resources and frameworks that have made major contributions to theoretical and operational definitions of reading comprehension. A content analysis was conducted on the resources included in the study group. To determine the validity of the themes and sub-categories revealed by the content analysis, three educational assessment experts were asked to examine the results. The Fleiss' kappa coefficient revealed consistency among the themes and categories defined by the three experts. The content analysis of the reading comprehension frameworks revealed that comprehension skills can be examined under four themes. The first and second themes focus on understanding information given explicitly or implicitly within a text. The third theme includes skills readers use to make connections between their personal knowledge and the information given in the text. Lastly, the fourth theme focuses on skills readers use to examine the text with a critical view. The results suggest that fundamental reading comprehension skills can be examined under four themes. Teachers are recommended to use these themes in their reading comprehension teaching and assessment processes. Acknowledgment: This research is supported by the Pamukkale University Scientific Research Unit within the project titled 'Developing a Reading Comprehension Rubric'.
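
For readers unfamiliar with the agreement statistic mentioned above, here is a minimal sketch of Fleiss' kappa; the rating matrix is hypothetical and merely stands in for the three experts' category assignments, which are not published in the abstract.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_items x n_categories) matrix of rating
    counts, where each row sums to the (fixed) number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # per-item agreement P_i
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from overall category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 3 experts assign 6 sub-categories to 4 themes
ratings = [[3, 0, 0, 0],   # all three experts chose theme 1
           [0, 2, 1, 0],
           [0, 0, 3, 0],
           [3, 0, 0, 0],
           [0, 3, 0, 0],
           [0, 0, 0, 3]]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")
```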

Keywords: reading comprehension, assessing reading comprehension, comprehension taxonomies, educational assessment

Procedia PDF Downloads 82
220 Detecting Potential Geothermal Sites by Using Well Logging, Geophysical and Remote Sensing Data at Siwa Oasis, Western Desert, Egypt

Authors: Amr S. Fahil, Eman Ghoneim

Abstract:

Egypt has made significant efforts in recent years to identify substantial renewable energy sources. Regions in Egypt that have been identified for geothermal potential investigation include the Gulf of Suez and the Western Desert. One of the most promising sites in Egypt's northern Western Desert is Siwa Oasis. The geological setting of the oasis, a tectonically generated depression situated in the northernmost region of the Western Desert, supports the potential for substantial geothermal resources. Field data obtained from 27 deep oil wells across the Western Desert, including bottom-hole temperature (BHT) measurements, depth to basement, and geological maps, were utilized in this study. The major lithological units, elevation, surface gradient, lineament density, and remote sensing multispectral and topographic data were mapped together to generate the related physiographic variables. Eleven thematic layers were integrated in a geographic information system (GIS) to create geothermal maps and aid in the detection of significant potential geothermal spots along the Siwa Oasis and its vicinity. Total magnetic intensity data with reduction to the pole (RTP) are applied here for the first time to investigate the geothermal potential of Siwa Oasis. The integration of geospatial data with magnetic field measurements showed a clear correlation between areas of high heat flow and magnetic anomalies. Such anomalies can be interpreted as related to the existence of high geothermal energy and dense rock, which also has high magnetic susceptibility. The outcomes indicated that the study area has a geothermal gradient ranging from 18 to 42 °C/km, a heat flow ranging from 24.7 to 111.3 mW·m⁻², a thermal conductivity of 1.3-2.65 W·m⁻¹·K⁻¹, and a maximum measured temperature of 100.7 °C. The southeastern part of the Siwa Oasis and some sporadic locations in the eastern section of the oasis were found to have significant geothermal potential; consequently, these locations are suitable for future geothermal investigation. The adopted method might be applied to identify significant prospective geothermal energy locations in other regions of Egypt and East Africa.
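
The reported quantities are related by simple formulas: the geothermal gradient follows from a bottom-hole temperature and depth, and conductive heat flow is the product of thermal conductivity and gradient. A minimal sketch is shown below; the surface temperature and the example well values are assumptions for illustration only.

```python
def geothermal_gradient(bht_c, depth_m, surface_temp_c=25.0):
    """Geothermal gradient in deg C per km from a bottom-hole temperature."""
    return (bht_c - surface_temp_c) / depth_m * 1000.0

def heat_flow(gradient_c_per_km, conductivity_w_mk):
    """Conductive heat flow q = k * dT/dz, returned in mW per square metre."""
    return conductivity_w_mk * gradient_c_per_km  # (W/m/K)*(K/km) = mW/m^2

# Hypothetical well: 95 deg C at 2500 m, conductivity 2.0 W/m/K
g = geothermal_gradient(95.0, 2500.0)   # 28 deg C/km
q = heat_flow(g, 2.0)                   # 56 mW/m^2
print(f"gradient = {g:.1f} C/km, heat flow = {q:.1f} mW/m^2")
```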

Keywords: magnetic data, SRTM, depth to basement, remote sensing, GIS, geothermal gradient, heat flow, thermal conductivity

Procedia PDF Downloads 117
219 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not physics-based, it can never be a prognostic model that predicts battery state-of-health and avoids a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here. The use of multiple particles, combined with either linear or nonlinear charge-transfer reaction kinetics, makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
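
As context for the RC circuit model mentioned above (not the multi-particle model the paper proposes), here is a minimal sketch of a first-order RC equivalent circuit cell; the OCV curve and all parameter values are placeholders.

```python
import numpy as np

def simulate_rc_cell(current, dt, capacity_ah=5.0,
                     r0=0.01, r1=0.015, c1=2000.0, soc0=1.0):
    """First-order RC equivalent circuit model of a Li-ion cell.

    V_cell = OCV(SOC) - I*R0 - V1,  dV1/dt = -V1/(R1*C1) + I/C1
    (discharge current positive). OCV is a placeholder linear map.
    """
    ocv = lambda soc: 3.0 + 1.2 * soc        # hypothetical OCV curve
    soc, v1 = soc0, 0.0
    voltages = []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)        # coulomb counting
        v1 += dt * (-v1 / (r1 * c1) + i / c1)         # RC branch (Euler)
        voltages.append(ocv(soc) - i * r0 - v1)
    return np.array(voltages)

# 1C discharge pulse for 60 s followed by a 60 s rest, sampled at 1 s
profile = np.concatenate([np.full(60, 5.0), np.zeros(60)])
v = simulate_rc_cell(profile, dt=1.0)
```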

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 111
218 Hydrogen Sulfide Releasing Ibuprofen Derivative Can Protect Heart After Ischemia-Reperfusion

Authors: Virag Vass, Ilona Bereczki, Erzsebet Szabo, Nora Debreczeni, Aniko Borbas, Pal Herczegh, Arpad Tosaki

Abstract:

Hydrogen sulfide (H₂S) is a toxic gas, but it is produced by certain tissues in small quantities. According to earlier studies, ibuprofen and H₂S have a protective effect against heart tissue damage caused by ischemia-reperfusion. Recently, we have been investigating the effect of a new water-soluble H₂S-releasing ibuprofen molecule administered to isolated rat hearts after artificially generated ischemia-reperfusion. The H₂S-releasing property of the new ibuprofen derivative was investigated in vitro, at two concentrations, in medium derived from heart endothelial cell isolation. The ex vivo examinations were carried out on rat hearts. Rats were anesthetized with an intraperitoneal injection of ketamine, xylazine, and heparin. After thoracotomy, hearts were excised and placed into ice-cold perfusion buffer. Perfusion of hearts was conducted in Langendorff mode via the cannulated aorta. In our experiments, we studied the dose-effect of the H₂S-releasing molecule in Langendorff-perfused hearts by applying gradually increasing concentrations of the compound (0-20 µM). The H₂S-releasing ibuprofen derivative was applied before the ischemia for 10 minutes. H₂S concentration was measured in the coronary effluent solution with an H₂S-detecting electrochemical sensor. The 10 µM concentration was chosen for further experiments, in which treatment with this solution occurred after the ischemia. H₂S release occurs through the hydrolyzing enzymes present in heart endothelial cells. The protective effect of the new H₂S-releasing ibuprofen molecule can be confirmed from the infarct sizes of hearts using the triphenyltetrazolium chloride (TTC) staining method. Furthermore, we aim to define the effect of the H₂S-releasing ibuprofen derivative on autophagic and apoptotic processes in damaged hearts by investigating the molecular markers of these events with western blotting and immunohistochemistry techniques. Our further studies will include the examination of LC3I/II, p62, Beclin1, caspase-3, and other apoptotic molecules. We hope that confirming the protective effect of the new H₂S-releasing ibuprofen molecule will open new possibilities for the development of more effective cardioprotective agents with fewer side effects. Acknowledgment: This study was supported by the grants of NKFIH-K-124719 and the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of GINOP-2.3.2-15-2016-00043.

Keywords: autophagy, hydrogen sulfide, ibuprofen, ischemia, reperfusion

Procedia PDF Downloads 140
217 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target Motion Analysis (TMA) is a process for estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now, there has been no single method effective enough to always track an unknown target and extract its trajectory. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve performance. A special validation gate module is used to ensure the stability of the algorithm. Several indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the correct solution in all cases. To test the performance of the proposed TMA algorithm, a MATLAB simulation was developed. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth speed transitions, circular turns of the ship, noisy measurements, and quantized bearing measurements as produced by multi-beam sonar. Tests were run for a large set of scenarios. For all tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the real performance, and the range estimation confidence level reached 90% when the range error was less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, the convergence time of the algorithm still needs improvement.
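
To make the measurement model concrete, below is a minimal sketch of a single bearing-only EKF update with a chi-square validation gate, in the spirit of (but much simpler than) the modified gain EKF the abstract describes; the state layout, noise level, and gate threshold are illustrative assumptions.

```python
import numpy as np

def bearing_update(x, P, z, sigma_b, obs_xy, gate=6.63):
    """One EKF measurement update for bearing-only tracking.

    State x = [px, py, vx, vy]; z is the measured bearing (rad) from the
    observer at obs_xy. A chi-square validation gate (99%, 1 dof) rejects
    outlier bearings before they corrupt the filter.
    """
    dx, dy = x[0] - obs_xy[0], x[1] - obs_xy[1]
    r2 = dx * dx + dy * dy
    h = np.arctan2(dy, dx)                         # predicted bearing
    H = np.array([[-dy / r2, dx / r2, 0.0, 0.0]])  # Jacobian of h(x)
    innov = (z - h + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    S = H @ P @ H.T + sigma_b ** 2                 # innovation variance
    if innov ** 2 / S[0, 0] > gate:                # validation gate
        return x, P                                # reject measurement
    K = P @ H.T / S[0, 0]                          # Kalman gain
    x = x + (K * innov).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```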

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 402
216 Probabilistic Health Risk Assessment of Polycyclic Aromatic Hydrocarbons in Repeatedly Used Edible Oils and Finger Foods

Authors: Suraj Sam Issaka, Anita Asamoah, Abass Gibrilla, Joseph Richmond Fianko

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are a group of organic compounds that can form in edible oils during repeated frying and accumulate in fried foods. This study assesses the probability of carcinogenic and non-carcinogenic health risks due to PAH levels in popular finger foods (bean cakes, plantain chips, doughnuts) fried in edible oils (mixed vegetable, sunflower, soybean) from the Ghanaian market. Employing a probabilistic health risk assessment that considers variability and uncertainty in exposure and risk estimates provides a more realistic representation of potential health risks. Monte Carlo simulations with 10,000 iterations were used to estimate carcinogenic, mutagenic, and non-carcinogenic risks for different age groups (A: 6-10 years, B: 11-20 years, C: 20-70 years), food types (bean cake, plantain chips, doughnut), oil types (soybean, mixed vegetable, sunflower), and frying-oil re-use frequencies (once, twice, thrice). Our results suggest that, for age Group A, doughnuts posed the highest probability of carcinogenic risk exceeding the acceptable threshold (91.55%), followed by bean cakes (43.87%) and plantain chips (7.72%), as well as the highest probability of unacceptable mutagenic risk (89.2%), followed by bean cakes (40.32%). Among age Group B, doughnuts again had the highest probability of exceeding carcinogenic risk limits (51.16%) and mutagenic risk limits (44.27%), while plantain chips exhibited the highest maximum carcinogenic risk. For adults in age Group C, bean cakes had the highest probability of unacceptable carcinogenic (50.88%) and mutagenic (46.44%) risks, though plantain chips showed the highest maximum values for both carcinogenic and mutagenic risks in this age group. Regarding non-carcinogenic risks across the age groups, members of age Group A who consumed doughnuts had a 68.16% probability of a hazard quotient (HQ) greater than 1, suggesting potential cognitive impairment and lower IQ scores due to early PAH exposure; this group also faced risks from consuming plantain chips and bean cake. For age Group B, the consumption of plantain chips was associated with a 36.98% probability of HQ greater than 1, indicating a potential risk of reduced lung function. In age Group C, the consumption of plantain chips was linked to a 35.70% probability of HQ greater than 1, suggesting a potential risk of cardiovascular diseases.
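
The exceedance probabilities quoted above come from counting Monte Carlo iterations whose simulated risk crosses a limit. A minimal sketch of that idea for incremental lifetime cancer risk (ILCR) is shown below; every input distribution and parameter value here is hypothetical and for illustration only, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                  # iterations, as in the study

# Hypothetical input distributions (all values illustrative only)
conc = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=N)   # ug BaP-eq/kg food
intake = rng.normal(50.0, 10.0, size=N).clip(min=1)         # g food/day
bw = rng.normal(25.0, 4.0, size=N).clip(min=5)              # body weight, kg
ef, ed, at = 350.0, 5.0, 70.0 * 365.0       # days/yr, years, averaging days
csf = 7.3                                   # oral slope factor, (mg/kg/day)^-1

# Incremental lifetime cancer risk per iteration
dose = conc * 1e-6 * intake * ef * ed / (bw * at)   # mg/kg/day
ilcr = dose * csf

p_exceed = (ilcr > 1e-4).mean()             # fraction above acceptable limit
print(f"P(ILCR > 1E-4) = {p_exceed:.1%}")
```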

Keywords: PAHs, fried foods, carcinogenic risk, non-carcinogenic risk, Monte Carlo simulations

Procedia PDF Downloads 13
215 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes

Authors: Chih Ming Tseng

Abstract:

Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradients and catchment areas, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within them. The results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequent-collapse headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source and make it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the transport regimes. Sediment control rates at notches are generally positively correlated with the notch constriction ratio. During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel; consequently, sediment control rates were more pronounced during medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates among the different sediment transport regimes at notches. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid the development of improved sediment management strategies in watersheds.
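
The slope-area relationship underlying the regime classification is commonly treated as a power law, S = k·A^(-theta), fitted in log-log space; breaks in theta mark regime transitions. A minimal sketch with hypothetical DTM-derived samples is shown below.

```python
import numpy as np

def fit_slope_area(area_m2, slope):
    """Fit the power law S = k * A**(-theta) in log-log space.

    Regime breaks (e.g., collapse headwater vs. debris flow vs.
    high-concentration flow zones) appear as changes in theta.
    """
    log_a, log_s = np.log10(area_m2), np.log10(slope)
    coef, log_k = np.polyfit(log_a, log_s, 1)   # straight-line fit
    return -coef, 10 ** log_k

# Hypothetical DTM-derived samples for one channel reach
area = np.array([1e4, 5e4, 1e5, 5e5, 1e6, 5e6])
slope = np.array([0.45, 0.30, 0.24, 0.14, 0.11, 0.06])
theta, k = fit_slope_area(area, slope)
print(f"theta = {theta:.2f}, k = {k:.2f}")
```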

Keywords: landslide, debris flow, notch, sediment control, DTM, slope–area relation

Procedia PDF Downloads 28
214 De Novo Assembly and Characterization of the Transcriptome from the Fluoroacetate Producing Plant, Dichapetalum Cymosum

Authors: Selisha A. Sooklal, Phelelani Mpangase, Shaun Aron, Karl Rumbold

Abstract:

Organically bound fluorine (the C-F bond) is extremely rare in nature. Despite this, the first fluorinated secondary metabolite, fluoroacetate, was isolated from the plant Dichapetalum cymosum (commonly known as Gifblaar). However, the enzyme responsible for fluorination (fluorinase) in Gifblaar was never isolated, and very little progress has been made in understanding this process in higher plants. Fluorinated compounds have vast applications in the pharmaceutical, agrochemical, and fine chemicals industries. Consequently, an enzyme capable of catalysing C-F bond formation has great potential as an industrial biocatalyst, considering that fluorination chemistry is almost entirely synthetic. As with any biocatalyst, a range of these enzymes is required; therefore, it is imperative to expand the search for novel fluorinases. This study aimed to gain molecular insights into secondary metabolite biosynthesis in Gifblaar using a high-throughput sequencing-based approach. Mechanical wounding studies were performed on Gifblaar leaf tissue in order to induce expression of the fluorinase. The transcriptomes of the wounded and unwounded plant were then sequenced on the Illumina HiSeq platform. A total of 26.4 million short sequence reads were assembled into 77,845 transcripts using Trinity. Overall, 68.6% of transcripts were annotated with gene identities using public databases (SwissProt, TrEMBL, GO, COG, Pfam, EC) with an E-value threshold of 1E-05. Sequences exhibited the greatest homology to the model plant Arabidopsis thaliana (27%). A total of 244 annotated transcripts were found to be differentially expressed between the wounded and unwounded plant. In addition, secondary metabolic pathways present in Gifblaar were successfully reconstructed using Pathway Tools. Due to the lack of genetic information for plant fluorinases, no transcript could be annotated as a fluorinating enzyme. Thus, a local database containing the five known bacterial fluorinases was created, and fifteen transcripts with homology to partial regions of existing fluorinases were found. In an effort to obtain the full coding sequence of the Gifblaar fluorinase, primers were designed targeting the regions of homology, and genome walking will be performed to amplify the unknown regions. This is the first genetic data available for Gifblaar. It provides novel insights into the mechanisms of metabolite biosynthesis and will allow for the discovery of the first eukaryotic fluorinase.
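
The homology-search filtering step described above (an E-value threshold of 1E-05 against a small local database) is easy to sketch. Below is a minimal, hypothetical parser for standard BLAST tabular output (-outfmt 6); the file name is an assumption for illustration.

```python
from pathlib import Path

EVALUE_CUTOFF = 1e-5   # same threshold used for the public-database annotation

def filter_blast_hits(tabular_path, cutoff=EVALUE_CUTOFF):
    """Parse BLAST tabular output (-outfmt 6) and keep hits below an
    E-value cutoff. Column 11 (0-based index 10) holds the E-value."""
    hits = []
    for line in Path(tabular_path).read_text().splitlines():
        if not line.strip():
            continue
        cols = line.split("\t")
        query, subject, evalue = cols[0], cols[1], float(cols[10])
        if evalue <= cutoff:
            hits.append((query, subject, evalue))
    return hits

# e.g. Trinity transcripts searched against a local database of the five
# known bacterial fluorinases (file name hypothetical):
# hits = filter_blast_hits("trinity_vs_fluorinases.tsv")
```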

Keywords: biocatalyst, fluorinase, gifblaar, transcriptome

Procedia PDF Downloads 273
213 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

Daily human life depends on memories and on remembering the time and place of events, and recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called "neural synchronization." Sadly, the hippocampus is known to deteriorate with age: proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer's disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology for examining the brain and neural pathways. For instance, Magnetic Resonance Imaging, or MRI, is used to collect detailed images of an individual's brain anatomy. To monitor and analyze brain function, a different version of this machine called Functional Magnetic Resonance Imaging, or fMRI, is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active; it measures brain activity by detecting changes in blood flow associated with neural activity. Neurons need more oxygen when they are active, and the fMRI detects the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography, or EEG, is also a significant way to monitor the human brain. The EEG is more versatile and cost-efficient than an fMRI. An EEG measures electrical activity generated by the numerous cortical layers of the brain and allows scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which have made the inspection of neurons and neural pathways far more intense and detailed.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 86
212 Evaluation of Simple, Effective and Affordable Processing Methods to Reduce Phytates in the Legume Seeds Used for Feed Formulations

Authors: N. A. Masevhe, M. Nemukula, S. S. Gololo, K. G. Kgosana

Abstract:

Background and Study Significance: Legume seeds are important in agriculture, as they are used for feed formulations due to their nutrient density, low cost, and easy accessibility. Although they are important sources of energy, proteins, carbohydrates, vitamins, and minerals, they contain abundant quantities of anti-nutritive factors that reduce the bioavailability of nutrients, the digestibility of proteins, and mineral absorption in livestock. However, the removal of these factors is often too costly, as it requires expensive state-of-the-art techniques such as high-pressure and thermal processing. Basic Methodologies: The aim of the study was to investigate cost-effective methods for reducing the inherent phytates, as putative antinutrients, in legume seeds. The seeds of Arachis hypogaea, Pisum sativum, and Vigna radiata L. were subjected to six single processing methods, viz. raw seeds plus dehulling (R+D), soaking plus dehulling (S+D), ordinary cooking plus dehulling (C+D), infusion plus dehulling (I+D), autoclave plus dehulling (A+D), and microwave plus dehulling (M+D), and five combined methods (S+I+D; S+A+D; I+M+D; S+C+D; S+M+D). All the processed seeds were dried, ground into powder, extracted, and analyzed on a microplate reader to determine the percentage of phytates per dry mass of the legume seeds. Phytic acid was used as a positive control, and one-way ANOVA was used to determine significant differences between the means of the processing methods at a threshold of 0.05, as sketched after this abstract. Major Findings: The processing methods showed percentage yield ranges of 39.1-96%, 67.4-88.8%, and 70.2-93.8% for V. radiata, A. hypogaea, and P. sativum, respectively. Though the raw seeds contained the highest phytate contents, ranging between 0.508 and 0.527%, as expected, R+D resulted in a slightly lower phytate percentage range of 0.469-0.485%, while the other processing methods resulted in phytate contents below 0.35%. The M+D and S+M+D methods showed low phytate percentage ranges of 0.276-0.296% and 0.272-0.294%, respectively, with the lowest percentage determined for S+M+D of P. sativum. Furthermore, these results were found to be significantly different (p<0.05). Though phytates cause micronutrient deficits, as they chelate important minerals such as calcium, zinc, iron, and magnesium, their reduction may enhance nutrient bioavailability, since they cannot be digested by ruminants. Concluding Statement: Although analysis of the nutritive aspects of the processed legume seeds is still in progress, the M+D and S+M+D methods, which significantly reduced the phytates in the investigated legume seeds, may be recommended to local farmers and feed-producing industries so as to enhance animal health and production at an affordable cost.
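
As referenced in the methodology above, a one-way ANOVA at the 0.05 threshold can be run in a few lines with SciPy; the replicate values below are hypothetical, loosely inspired by the ranges reported in the abstract.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical phytate percentages (dry mass) from replicate extractions
raw   = np.array([0.508, 0.517, 0.527])
m_d   = np.array([0.276, 0.285, 0.296])   # microwave + dehulling
s_m_d = np.array([0.272, 0.283, 0.294])   # soaking + microwave + dehulling

f_stat, p_value = f_oneway(raw, m_d, s_m_d)
if p_value < 0.05:
    print(f"Significant difference among methods "
          f"(F={f_stat:.1f}, p={p_value:.4f})")
```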

Keywords: anti-nutritive factors, extraction, legume seeds, phytate

Procedia PDF Downloads 29
211 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season

Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada

Abstract:

A large amount of energy is wasted by the release of natural gas associated with the oil industry. This release disturbs the environment, particularly atmospheric conditions globally, and contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to develop a new method for predicting, measuring, and reducing gas flaring and its emissions. The method emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, among other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations; this burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, and SO2. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because of the interaction of two contrasting air masses; this research focused on the transition season of 2013. A simulation to establish the new pattern of pollutant distribution is therefore needed. This paper outlines trends in gas flaring modeling and current developments in predicting the dominant variables in pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases coming out of the stack, whereas WRF model output is used to overcome the limitations of the meteorological data and atmospheric conditions available for the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new distribution pattern at the times of the fastest and slowest winds. According to the simulation results, the fastest wind (late March) moves pollutants in a horizontal direction, while the slowest wind (mid-May) moves pollutants vertically. Moreover, with the flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters, pollutant concentrations remain below the NAAQS (National Ambient Air Quality Standards) thresholds.

Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model

Procedia PDF Downloads 556
210 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors

Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn

Abstract:

For the development of electronics, organic semiconductors are of great interest due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared with inorganic semiconductors. It has been shown that some organic materials change their resistance when an external magnetic field is applied. Pentacene is one of the materials exhibiting the so-called photoinduced magnetoresistance, in which the photocurrent is modulated by varying the external magnetic field. The soluble pentacene derivative 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene) exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work we compare TIPS-pentacene organic field effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. Therefore, we investigate the influence of interface treatment with octadecyltrichlorosilane (OTS) versus a simple cleaning procedure with acetone, ethanol, and deionized water. The transistors consist of prestructured OFET substrates including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, or spin-coating. Thereafter, the sample is kept for one hour at a temperature of 60 °C. For transistor fabrication by thermal evaporation, the prestructured OFET substrates are also kept at 60 °C during deposition, at a rate of 0.3 nm/min and a pressure below 10⁻⁶ mbar. The OFETs are characterized by optical microscopy in order to determine the overall quality of the sample, i.e., crystal size and coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination provided by a white-light LED in the spectral range from 450 nm to 650 nm with a power density of (8±2) mW/cm².

Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene

Procedia PDF Downloads 447
209 Incidence of Fungal Infections and Mycotoxicosis in Pork Meat and Pork By-Products in Egyptian Markets

Authors: Ashraf Samir Hakim, Randa Mohamed Alarousy

Abstract:

The consumption of food contaminated with molds (microscopic filamentous fungi) and their toxic metabolites results in the development of food-borne mycotoxicosis. Mold spores are ubiquitous in the environment and can be detected everywhere. Ochratoxin A, a potentially carcinogenic fungal toxin found in a variety of food commodities, is not only the most abundant and hence the most commonly detected of the ochratoxins but also the most toxic. Very little research on foods of porcine origin in Egypt is available, despite the presence of a considerable swine population and consumer base. In this study, the quality of various ready-to-eat local and imported pork meat and meat byproducts sold in Egyptian markets, as well as edible organs such as liver and kidney, was assessed as raw material for the presence of various molds and their toxins. Mycological analysis was conducted on 110 samples, comprising pig livers (n=10) and kidneys (n=10) from the Basateen slaughterhouse, and local (n=70) and imported (n=20) processed pork meat byproducts. The isolates were identified using traditional mycological and biochemical tests, while ochratoxin A levels were quantitatively analyzed using high-performance liquid chromatography. Results of conventional mycological tests for the presence of fungal growth (yeasts or molds) were negative, while ochratoxin A concentrations in local pork and pork byproducts were greatly above the permissible limits, the "tolerable weekly intake" (TWI) established by EFSA in 2006, and the imported samples showed only a slight elevation. Since ochratoxin A is stable and generally resistant to heat and processing, control of ochratoxin A contamination lies in controlling the growth of the toxin-producing fungi. Effective prevention of ochratoxin A contamination therefore depends on good farming and agricultural practices. Good Agricultural Practices (GAP), including methods to reduce fungal infection and growth during harvest, storage, transport, and processing, provide the primary line of defense against contamination with ochratoxin A. To the best of our knowledge, this is the first report of a mycological assessment, and particularly of mycotoxins, in pork byproducts in Egypt.

Keywords: Egyptian markets, mycotoxicosis, ochratoxin A, pork meat, pork by-products

Procedia PDF Downloads 466
208 Research on Tight Sandstone Oil Accumulation Process of the Third Member of Shahejie Formation in Dongpu Depression, China

Authors: Hui Li, Xiongqi Pang

Abstract:

In recent years, tight oil has become a hot spot for unconventional oil and gas exploration and development worldwide. The Dongpu Depression is a typical hydrocarbon-rich basin in the southwest of the Bohai Bay Basin, in which tight sandstone oil and gas have been discovered in deep reservoirs, most of them buried deeper than 3500 m. The distribution and development characteristics of these deep tight sandstone reservoirs need to be studied. The main source rocks in the study area are the dark mudstone and shale of the middle and lower third sub-member of the Shahejie Formation. The Total Organic Carbon (TOC) content of the source rock is between 0.08-11.54%, generally higher than 0.6%, and the value of S1+S2 is between 0.04-72.93 mg/g, generally higher than 2 mg/g; the source rocks can be evaluated as fair to good overall. The kerogen type of the organic matter is predominantly type II₁ and II₂. Vitrinite reflectance (Ro) is mostly greater than 0.6%, indicating that the source rock has entered the hydrocarbon generation threshold. The physical properties of the reservoir are poor: most reservoirs have a porosity lower than 12% and a permeability of less than 1×10⁻³ μm². The rocks in this area show great heterogeneity, and some areas develop sweet spots with high porosity and permeability. According to SEM, thin-section imaging, inclusion tests, and other analyses, the reservoir was affected by compaction and cementation during the early diagenesis stage (44-31 Ma). This diagenesis produced tight reservoirs in the Huzhuangji, Pucheng, and Weicheng Areas, while the porosity in the Machang, Qiaokou, and Wenliu Areas remained over 12%. During stage A of the middle diagenesis phase (31-17 Ma), reservoir porosity in the Machang, Pucheng, and Huzhuangji Areas increased due to dissolution; after that, the oil generation window of the source rock was reached for the first phase of hydrocarbon charging (31-23 Ma), which formed conventional oil accumulations in the Machang, Qiaokou, Wenliu, and Huzhuangji Areas and unconventional tight reservoirs in the Pucheng and Weicheng Areas. Then, during stage B of the middle diagenesis phase (17-7 Ma), reservoir porosity continued to decrease after the dissolution, leaving the reservoirs generally tight. The second phase of hydrocarbon charging has proceeded since 7 Ma, and most of the pools charged and formed in this phase are tight sandstone oil reservoirs. In conclusion, tight sandstone oil formed in two patterns in the Dongpu Depression, which can be summarized as the 'densification first, then accumulation' pattern and the 'accumulation first, then densification' pattern.

Keywords: accumulation process, diagenesis, dongpu depression, tight sandstone oil

Procedia PDF Downloads 118
207 Detection of Aflatoxin B1 Producing Aspergillus flavus Genes from Maize Feed Using Loop-Mediated Isothermal Amplification (LAMP) Technique

Authors: Sontana Mimapan, Phattarawadee Wattanasuntorn, Phanom Saijit

Abstract:

Aflatoxin contamination in maize, one of several agricultural crops grown for livestock feeding, is still a problem throughout the world, mainly under hot and humid weather conditions such as those in Thailand. In this study, Aspergillus flavus (A. flavus), the key fungus for aflatoxin production, especially aflatoxin B1 (AFB1), was isolated from naturally infected maize and identified and characterized according to colony morphology and PCR using the ITS, beta-tubulin, and calmodulin genes. The strains were analysed for the presence of four aflatoxigenic biosynthesis genes (Ver1, Omt1, Nor1, and aflR) in relation to their capability to produce AFB1. Aflatoxin production was then confirmed using an immunoaffinity column technique. Loop-mediated isothermal amplification (LAMP) was applied as an innovative technique for rapid detection of the target nucleic acid. The reaction condition was optimized at 65 °C for 60 min, and calcein fluorescent reagent was added before amplification. The LAMP results showed clear differences between positive and negative reactions in end-point analysis, visible to the naked eye under daylight and UV light. In daylight, the samples with AFB1-producing A. flavus genes developed a yellow to green color, while those without the genes retained the orange color. When excited with UV light, the positive samples became visible through bright green fluorescence. LAMP reactions remained positive with purified target DNA down to dilutions of 10⁻⁶. The reaction products were then confirmed and visualized with 1% agarose gel electrophoresis. In total, 50 maize samples were collected from dairy farms and tested for the presence of the four aflatoxigenic biosynthesis genes using the LAMP technique. The results were positive in 18 samples (36%) and negative in 32 samples (64%). All samples were rechecked by PCR, and the results were the same as for LAMP, indicating 100% specificity. Additionally, when compared with the immunoaffinity column-based aflatoxin analysis, there was a significant correlation between LAMP results and aflatoxin analysis (r=0.83, P < 0.05), which suggested that positive maize samples were likely to be high-risk feed. In conclusion, the LAMP assay developed in this study can provide a simple and rapid approach for detecting AFB1-producing A. flavus genes in maize and appears to be a promising tool for predicting potential aflatoxigenic risk in livestock feeds.
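
The correlation between a binary LAMP result and a continuous aflatoxin measurement, as reported above, can be computed as a point-biserial correlation, which is just Pearson's r on 0/1-coded data. A minimal sketch with hypothetical paired data (not the study's measurements) follows.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired data: LAMP result (1 = positive, 0 = negative)
# and aflatoxin B1 concentration (ug/kg) from the column assay
lamp = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
afb1 = np.array([22.4, 18.9, 1.2, 30.5, 0.8, 2.1, 25.3, 1.5, 19.8, 0.9])

r, p = pearsonr(lamp, afb1)   # point-biserial correlation
print(f"r = {r:.2f}, p = {p:.4f}")
```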

Keywords: Aflatoxin B1, Aspergillus flavus genes, maize, loop-mediated isothermal amplification

Procedia PDF Downloads 240
206 A Review on Silicon Based Induced Resistance in Plants against Insect Pests

Authors: Asim Abbasi, Muhammad Sufyan, Muhammad Kamran, Iqra

Abstract:

Development of resistance in insect pests against various groups of insecticides has prompted the use of alternative integrated pest management approaches. Among these, induced host plant resistance represents an important strategy, as it offers a practical, cheap, and long-lasting solution for keeping pest populations below the economic threshold level (ETL). Silicon (Si) plays a major role in regulating the plant's relationship with its environment by providing strength in the form of anti-stress mechanisms, helping the plant cope with environmental extremes and produce better yield and quality. Among biotic stresses, insect herbivory is one class against which Si provides defense. Silicon in its neutral form (H₄SiO₄) is absorbed by plants via the roots, either through an active process mediated by different transporters located in the plasma membrane of root cells, or through a passive process mostly regulated by the transpiration stream, occurring via the xylem cells along with the water. Plant tissues, mainly the epidermal cell walls, are the sinks of absorbed silicon, where it polymerizes in the form of amorphous silica or monosilicic acid. The noteworthy function of this absorbed silicon is to provide structural rigidity to the tissues and strength to the cell walls. Silicon has both direct and indirect effects on insect herbivores. Increased abrasiveness and hardness of epidermal plant tissues, and reduced digestibility resulting from the deposition of Si, primarily as phytoliths within the cuticle layer, are now the best-authenticated mechanisms by which Si enhances plant resistance to insect herbivores. Moreover, increased Si content in the diet impedes the efficiency with which insects transform consumed food into body mass. Si application also changes the palatability of plant material and deters herbivore feeding. The production of plant defensive compounds such as silica and phenols is also amplified by the exogenous application of silicon sources, which reduces the probing time of certain insects. Some studies have also highlighted the role of silicon at the third trophic level, as it attracts natural enemies of the insects attacking the crop. Hence, the inclusion of Si in pest management approaches can be a healthy and eco-friendly tool in the future.

Keywords: defensive, phytoliths, resistance, stresses

Procedia PDF Downloads 189
205 MB-SLAM: A SLAM Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM to BIM can provide essential insights for construction managers to identify construction deficiencies in real-time and ultimately reduce rework. Registering SLAM to BIM in real-time can also boost SLAM accuracy, since SLAM can then use features from both images and 3D models. However, registering SLAM with BIM in real-time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real-time. The framework improves SLAM accuracy by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera's images and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real-time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are then refined through a real-time gradient descent-based iterative method. Two case studies are presented to validate MB-SLAM. The validation demonstrated promising results: MB-SLAM accurately registered SLAM to BIM, significantly improved the SLAM's localization accuracy, and achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial usage, which aims to monitor construction progress and performance in a unified manner. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment. MB-SLAM further advances SLAM toward practical use.
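
A core primitive in the perspective-detection step described above is estimating a vanishing point from straight edges. Below is a minimal sketch of a least-squares vanishing point in homogeneous coordinates; the synthetic segments are illustrative, and this is not the paper's actual alignment pipeline.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point from line segments.

    Each segment ((x1, y1), (x2, y2)) gives a homogeneous line via the
    cross product of its endpoints; the common intersection is the right
    singular vector with the smallest singular value of the stacked lines.
    """
    lines = []
    for (x1, y1), (x2, y2) in segments:
        line = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
        lines.append(line / np.linalg.norm(line[:2]))  # normalise
    _, _, vt = np.linalg.svd(np.asarray(lines))
    vp = vt[-1]
    return vp[:2] / vp[2]       # back to image coordinates

# Segments from nearly parallel building edges (pixel coords, synthetic)
segs = [((0, 0), (100, 50)), ((0, 20), (100, 68)), ((0, 40), (100, 86))]
print(vanishing_point(segs))    # converges near (1000, 500)
```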

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 225
204 Measurements for Risk Analysis and Detecting Hazards by Active Wearables

Authors: Werner Grommes

Abstract:

Intelligent wearables (illuminated vests or hand and foot bands, smart watches with a laser diode, Bluetooth smart glasses) flood the market today. They integrate complex electronics and are worn very close to the body, so optical measurements and limitation of the maximum light density are needed. Smart watches are equipped with a laser diode or sense different body currents. Special glasses generate readable text information received via radio transmission. Small high-performance batteries (lithium-ion/polymer) supply the electronics. All these products have been tested and evaluated for risk. Such products must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans or implant wearers. Extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks. The results of this study should serve as a suggestion to do better in the future, or simply to point out these risks. Commercial LED warning vests, LED hand and foot bands, illuminated surfaces with an inverter (high voltage), flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in both the low-frequency and high-frequency ranges, audible noises, and flicker frequencies that can irritate the nervous system were measured and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions such as overheating, overcharging, deep discharge, or use outside the temperature specification, so a risk analysis becomes necessary. The conclusion of this study is that many smart wearables are worn very close to the body and require an extensive risk analysis. Wearers of active implants such as a pacemaker or an implantable cardiac defibrillator must be considered: if the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed. A risk analysis is necessary.

Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electric luminescent, hand and head lamps, LED, e-light, safety batteries, light density, optical glare effects

Procedia PDF Downloads 110
203 Life Time Improvement of Clamp Structural by Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, removing unnecessary parts and qualifying part quality before assembly is important. Thus, a clamp was designed and fabricated as a holding fixture for the testing process. Testing by trial and error takes a long time to yield improvements, so simulation was introduced to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the stress and compressive force behavior and improve the clamp's life expectancy across all candidate designs, 27 in total after excluding repeated designs; the design space is enumerated in the sketch following this abstract. The design space was constructed following the full factorial rules of the six sigma methodology. Six sigma is a well-structured methodology for improving quality by detecting and reducing process variability, so that defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment; the force at each displacement, from 0.01 to 0.1 mm, is then recorded. The ANSYS setup was verified by a mesh convergence methodology, and the percentage error relative to the experimental result was checked; the error must not exceed the acceptable range. The improvement therefore focuses on the angle, radius, and length values that reduce stress while keeping the compressive force within the acceptable range. Fatigue analysis is then performed as the next step, in order to guarantee that the lifetime is extended, by simulating in the ANSYS simulation program. The setting is also confirmed by comparison with the actual clamp in order to observe the fatigue difference between the two designs. This yields a lifetime improvement of up to 57% compared with the clamp currently used in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Through the combination and adaptation of the six sigma method, finite element, fatigue, and linear regression analysis, which lead to accurate calculation, this project is expected to save up to 60 million dollars annually.
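
As referenced above, enumerating a three-parameter, three-level full factorial design space yields exactly 27 candidates. A minimal sketch follows; the parameter names and level values are hypothetical, since the abstract does not publish them.

```python
from itertools import product

# Hypothetical three-level settings for each geometric parameter
degrees = [30, 45, 60]        # chamfer angle, deg
radii   = [0.5, 1.0, 1.5]     # fillet radius, mm
lengths = [10, 12, 14]        # clamp arm length, mm

# Full factorial design space: 3 * 3 * 3 = 27 unique candidate designs
designs = list(product(degrees, radii, lengths))
assert len(designs) == 27

for i, (deg, rad, length) in enumerate(designs, start=1):
    print(f"design {i:02d}: angle={deg} deg, "
          f"radius={rad} mm, length={length} mm")
```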

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 235
202 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neural science must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have attracted great interest in computational neuroscience in recent years. Spiking neuron models can be classified by the neuronal behaviours they exhibit, such as spiking and bursting, and these classifications are important for researchers working on theoretical neuroscience. In this paper, three spiking neuron models based on systems of first-order differential equations, the Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF), and Hindmarsh-Rose (HR) models, are discussed and compared. First, the physical meaning, derivation, and differential equations of each model are presented and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were examined visually in Matlab with the aim of demonstrating which model can reproduce well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonating, and integrating. As a result, the Izhikevich model was shown to reproduce regular spiking, chattering (continuous bursting), intrinsic bursting, thalamo-cortical firing, low-threshold spiking, and resonator behaviour. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular firing, adaptation, initial bursting, regular bursting, delayed firing, delayed regular bursting, transient firing, and irregular firing. The Hindmarsh-Rose model showed three distinct dynamic behaviours: spiking, bursting, and chaos. From these results, the Izhikevich model may be preferred for its ability to reflect the true behaviour of the nerve cell, to produce different spike types, and to scale to larger brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with few parameters. The chaotic behaviour of the Hindmarsh-Rose model, like that of other chaotic systems, is thought to be applicable in many scientific and engineering fields such as physics, secure communication, and signal processing.
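
As a minimal illustration of the kind of simulation the comparison relies on, the sketch below integrates the Izhikevich model with the forward Euler method. The equations and reset rule are the standard ones from Izhikevich (2003); the parameter set shown produces regular spiking, and the other behaviours follow from different (a, b, c, d) choices.

```python
# Minimal Euler-method simulation of the Izhikevich neuron model.
# (a, b, c, d) below give the regular-spiking cortical cell type.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T, I = 0.5, 1000.0, 10.0    # step (ms), duration (ms), input current

v, u = -65.0, b * -65.0          # membrane potential and recovery variable
spikes = []
for step in range(int(T / dt)):
    t = step * dt
    # membrane potential and recovery dynamics
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                # spike detected: apply the reset rule
        spikes.append(t)
        v, u = c, u + d

print(f"{len(spikes)} spikes in {T:.0f} ms")
```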

Keywords: Izhikevich, adaptive exponential integrate-and-fire, Hindmarsh-Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 181
201 The Interaction of Climate Change and Human Health in Italy

Authors: Vito Telesca, Giuseppina A. Giorgio, M. Ragosta

Abstract:

The effects of extreme heat events have been increasing in recent years, forcing humans to adjust to adverse climatic conditions. The impact of weather on human health has become a matter of public health significance, especially in light of climate change and the rising frequency of devastating weather events (e.g., heat waves and floods), and the interest of the scientific community is well established. In particular, the associations between temperature and mortality are well studied. Weather conditions are natural factors that affect the human organism. Recent works show that the temperature threshold at which an impact is seen varies by geographic area and season. These results suggest heat warning criteria should consider local thresholds to account for acclimatization to the local climatology as well as the seasonal timing of a forecasted heat wave; this is the problem known as ‘local warming’, and it is preventable with adequate warning tools and effective emergency planning. Since climate change has the potential to increase the frequency of these events, improved heat warning systems are urgently needed, which in turn requires better knowledge of the full impact of extreme heat on morbidity and mortality. Most researchers who analyze the associations between human health and weather variables investigate the effect of air temperature and of bioclimatic indices, which combine air temperature, relative humidity, and wind speed and are very important for determining human thermal comfort. Health impact studies of weather events have shown that prevention is essential to dramatically reduce the impact of heat waves. The Italian summer of 2012 was characterized by high average temperatures (+2.3°C relative to the 1971-2000 reference period), enough to rank as the second hottest summer since 1800. Italy was the first country in Europe to adopt tools to predict these phenomena 72 hours in advance (the Heat Health Watch Warning System, HHWWS). Furthermore, heat alert criteria in Italy rely on several indices, for example, apparent temperature, the Scharlau index, and the thermohygrometric index. This study examines the importance of developing public health policies that protect the most vulnerable people (such as the elderly) from extreme temperatures, highlighting the factors that confer susceptibility.
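
As an illustration of how a bioclimatic index combines temperature, humidity, and wind, the sketch below implements one widely used formulation, Steadman's apparent temperature in the non-radiative form adopted by the Australian Bureau of Meteorology. It is shown only as an example; the Italian HHWWS relies on its own index definitions.

```python
# Illustrative implementation of one common bioclimatic index: Steadman's
# apparent temperature (non-radiative form, Australian Bureau of
# Meteorology). Shown only as an example of combining temperature,
# humidity, and wind into a single thermal-comfort figure.
import math

def apparent_temperature(t_air_c: float, rel_humidity_pct: float,
                         wind_speed_ms: float) -> float:
    # water vapour pressure (hPa) from temperature and relative humidity
    e = (rel_humidity_pct / 100.0) * 6.105 * math.exp(
        17.27 * t_air_c / (237.7 + t_air_c))
    return t_air_c + 0.33 * e - 0.70 * wind_speed_ms - 4.00

# Example: a hot, humid, nearly still afternoon feels well above 34 degC
print(round(apparent_temperature(34.0, 60.0, 1.0), 1))
```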

Keywords: heat waves, Italy, local warming, temperature

Procedia PDF Downloads 243
200 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors’ knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics to analyze 356,000 inspection reports extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of harm (hitting, pinching or burning, tethering, tying) or conscious disregard of an emergency. To detect occurrences of elder neglect, the algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compare the prevalence of abuse before and after COVID-19-related restrictions on nursing home visits. We also identified the facilities with the highest number of abuse cases that have abuse-free facilities within a 25-mile radius as the most likely candidates for additional inspections, and we built an interactive display to visualize the locations of these facilities.
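
A minimal sketch of the dictionary-based phrase matching described above follows. The category lexicons shown are short hypothetical examples drawn from the abstract's wording; the study's actual lexicons and models are necessarily far more extensive.

```python
# Illustrative sketch of lexicon-based phrase matching over inspection
# report text. The lexicons here are tiny hypothetical examples.
import re
from collections import Counter

LEXICON = {
    "physical_abuse": [r"\bhit(?:ting)?\b", r"\bpinch(?:ing)?\b",
                       r"\bburn(?:ing)?\b", r"\btether(?:ing|ed)?\b"],
    "passive_neglect": [r"\bmalnutrition\b", r"\bdehydration\b",
                        r"\bdecubit(?:us|i)\b"],
}

def flag_report(text: str) -> Counter:
    """Count lexicon hits per abuse category in a single report."""
    counts = Counter()
    lowered = text.lower()
    for category, patterns in LEXICON.items():
        for pat in patterns:
            counts[category] += len(re.findall(pat, lowered))
    return counts

report = ("Resident observed with signs of dehydration; staff member "
          "reported hitting incident during evening shift.")
print(flag_report(report))   # one hit in each of the two categories
```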

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 146
199 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information; it has become difficult to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and procedures used in digital investigation are not keeping pace with criminal developments, and criminals are exploiting these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable for identifying crime: algorithms based on AI have proved highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that supports a plausible theory which can be presented as evidence in court; researchers and other authorities have used such data as evidence to convict individuals. This research paper develops a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type, and a criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented with the Java Agent Development Framework, developed in Eclipse, and uses a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets, with experiments conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute; loading the agents cost 5 percent of the time, as the File Path Agent flagged 1,510 items for deletion while the Timeline Agent found multiple executable files. By comparison, an integrity check carried out on the Lone Wolf image file using a conventional digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished it in 16 minutes (960 s). The framework is integrated with Python, allowing further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
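
The sketch below illustrates, hypothetically, the task assigned to a hash set agent: hashing every file in an evidence directory and comparing the digests against a known-hash set. It is not the MADIK implementation, which is written in Java on the Java Agent Development Framework; the directory path and digest set are placeholders.

```python
# Hypothetical sketch of a "hash set agent" task: hash each file under an
# evidence mount and flag matches against a known-hash set. Path and
# digests are placeholders, not artifacts from the Lone Wolf dataset.
import hashlib
from pathlib import Path

KNOWN_BAD = {
    # placeholder SHA-256 digests of known contraband or malware files
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def scan(evidence_dir: str):
    for p in Path(evidence_dir).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD:
            yield p

for hit in scan("/cases/lone_wolf/export"):  # hypothetical mount point
    print("known-hash match:", hit)
```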

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
198 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0, and robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by growing labor scarcity and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been fully explored in intelligent construction, and relevant transdisciplinary theory and practice still have specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves the robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious camera calibration issue, and its use in industrial production imposes strict accuracy requirements: precision problems in the visual recognition system directly impact the effectiveness and standard of production, so the study of positioning precision in recognition technology must be strengthened. To best facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. This study identifies the position of target components by detecting boundary and corner information in a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, the operational processing system assigns them to a common coordinate system based on their locations and postures. Inclination detection on the RGB image and verification against the depth image are used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed from the point cloud information.
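
As a simplified illustration of the aspect-ratio test described above, the sketch below estimates a component's extents from a dense point cloud using an axis-aligned bounding box. The actual pipeline's boundary and corner detection would be more sophisticated; this only shows the aspect-ratio computation on synthetic data.

```python
# Illustrative numpy sketch: estimate a component's extents and aspect
# ratio from a dense point cloud via an axis-aligned bounding box.
import numpy as np

def aspect_ratio(points: np.ndarray) -> float:
    """points: (N, 3) array of x, y, z coordinates."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extents = np.sort(maxs - mins)           # sorted box edge lengths
    return float(extents[-1] / extents[-2])  # longest / second longest

# Synthetic cloud standing in for a scanned modular component
rng = np.random.default_rng(1)
cloud = rng.uniform([0, 0, 0], [1.2, 0.3, 0.3], size=(5000, 3))
print(f"aspect ratio = {aspect_ratio(cloud):.2f}")  # ~4.0: beam-like part
```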

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 86
197 The Role of the Child's Previous Inventory in Verb Overgeneralization in Spanish Child Language: A Case Study

Authors: Mary Rosa Espinosa-Ochoa

Abstract:

The study of overgeneralization in inflectional morphology provides evidence for understanding how a child's mind works when applying linguistic patterns in a novel way. High-frequency inflectional forms in the input cause inappropriate use in contexts that call for lower-frequency forms. Children learn verbs as lexical items, and new forms develop only gradually, around their second year: most of the utterances that children produce are closely related to what they have previously produced. Spanish has a complex verbal system that inflects for person, mood, and tense. Approximately 200 verbs are irregular, and bare roots always require an inflected form, which represents a challenge for the memory. The aim of this research is to investigate i) what kinds of overgeneralization errors children make in verb production, ii) to what extent these errors are related to verb forms previously produced, and iii) whether the overgeneralized verb components are also frequent in the child's linguistic inventory. It consists of a high-density longitudinal study of a middle-class girl (1;11,24-2;02,24) from Mexico City, whose utterances were recorded almost daily for three months to compile a unique corpus of the Spanish language. Of the 358 types of inflected verbs produced by the child, 9.11% are overgeneralizations. Not only inflected forms (verbal and pronominal clitics) are overgeneralized, but also verbal roots. Each of the forms can be traced to previous utterances, and they show that the child is detecting morphological patterns. Neither the verbal roots nor the inflected forms are associated with high-frequency patterns in her own speech. For example, the child alternates the bare roots of an irregular verb, cáye-te* and cáiga-te* (“fall down”), to express the imperative of the verb cá-e-te (fall down.IMPERATIVE-PRONOMINAL.CLITIC), although cay-ó (PAST.PERF.3SG) is the most frequent form in her previous complete inventory, and the combined frequency of caer (INF), cae (PRES.INDICATIVE.3SG), and caes (PRES.INDICATIVE.2SG) is the same as that of caiga (PRES.SUBJ.1SG and 3SG). These results provide evidence that a) two forms of the same verb compete in the child's memory, and b) although the child uses her own inventory to create new forms, these forms are not necessarily frequent in her memory storage, which means that her mind is more sensitive to external stimuli. Language acquisition is a developing process, given the sensitivity of the human mind to linguistic interaction with the outside world.

Keywords: inflection, morphology, child language acquisition, Spanish

Procedia PDF Downloads 101
196 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of a limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable from both theoretical and practical perspectives when constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant specified by the user. By modifying this constant, users can exactly control the proportion of normal data described by the spherical boundary; thus, the proportion of abnormal observations can be made theoretically equal to the expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed: it uses a monitoring statistic that characterizes the degree of abnormality of a point, as obtained through the proposed one-class classification, and its control limit is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from thin film transistor-liquid crystal display manufacturing.
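
A simplified sketch of the mixed integer idea follows. To keep the model linear, the sphere's center is fixed at the data mean, whereas the full formulation also optimizes the center and supports kernel functions; under that simplification, the MIP selects exactly k points to enclose and minimizes the squared radius, which then serves as the control limit.

```python
# Simplified sketch of the mixed-integer one-class idea using PuLP.
# Assumption: the sphere's center is fixed at the data mean so the model
# stays linear; the paper's full formulation is richer than this.
import numpy as np
import pulp

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))    # Phase I "normal" observations
k = 95                           # enclose 95 points -> 5% Type I rate

d2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)  # squared distances

prob = pulp.LpProblem("one_class_mip", pulp.LpMinimize)
R2 = pulp.LpVariable("R2", lowBound=0)          # squared radius
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(len(X))]

prob += R2                                      # objective: smallest sphere
for i in range(len(X)):
    prob += float(d2[i]) * z[i] <= R2           # enclosed points fit inside
prob += pulp.lpSum(z) == k                      # exactly k points enclosed

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("control limit (R^2):", pulp.value(R2))
```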

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 174
195 Transition from Linear to Circular Business Models with Service Design Methodology

Authors: Minna-Maari Harmaala, Hanna Harilainen

Abstract:

Estimates of the economic value of transitioning to circular economy models vary, but the transition has been estimated to represent $1 trillion of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only deliver environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth, and that a circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, so even abundant circular opportunities remain uncaptured despite seeming inherently profitable. Service design, in broad terms, means developing an existing or new service or service concept with emphasis on the customer experience from the onset of the development process; it may even mean starting from scratch and co-creating the service concept entirely with the help of customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing services that better resonate with customer needs. A business model is a depiction of how a company creates, delivers, and captures value, i.e., how it organizes its business; the process of developing, adjusting, or modifying a business model is called business model innovation, and it has become part of business strategy. Our hypothesis is that, in addition to linear models being easier to adopt and often carrying lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers are to adopt new circular business models. In our research, we use robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio but also to demonstrate that willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to develop, test, and validate the new circular business models further. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated.

Keywords: business model innovation, circular economy, circular economy business models, service design

Procedia PDF Downloads 135
194 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand at a reasonable cost in times of low wind speed and low solar radiation. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries over the past twenty-five years, together with load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus for the expected power output of RE technologies, rather than assuming known future power output. The second is operationalized by introducing a Conditional Value at Risk (CVaR) constraint into the portfolio optimization problem. Setting the risk threshold at different levels (1%, 5%, and 10%) reveals important insights into the synergies of the different energy technologies, i.e., the circumstances under which they behave as complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
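
As a minimal illustration of the CVaR measure behind the constraint, the sketch below computes the empirical CVaR of a simulated shortfall distribution: at level alpha, it is the mean unserved energy over the worst alpha-fraction of scenarios. The scenario generator here is a stand-in for the weather-driven model.

```python
# Minimal sketch of empirical CVaR over simulated shortfall scenarios.
# The scenario distribution is a hypothetical stand-in for the model's
# weather-driven unserved-energy outcomes.
import numpy as np

def empirical_cvar(shortfall: np.ndarray, alpha: float) -> float:
    """Mean shortfall over the worst alpha share of scenarios (MWh)."""
    n_tail = max(1, int(np.ceil(alpha * len(shortfall))))
    worst = np.sort(shortfall)[-n_tail:]   # largest shortfalls
    return float(worst.mean())

rng = np.random.default_rng(42)
# Hypothetical unserved-energy outcomes (MWh) over 10,000 weather draws;
# most scenarios have zero shortfall, a tail has positive shortfall.
shortfall = np.maximum(rng.normal(loc=-50, scale=40, size=10_000), 0.0)

for alpha in (0.01, 0.05, 0.10):           # the thresholds examined above
    print(f"CVaR {alpha:.0%}: {empirical_cvar(shortfall, alpha):.1f} MWh")
```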

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 170