Search results for: bubble points
2145 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning based facial expression recognition methods, because supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames during emotion classification, saves computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shift of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, Constrained Local Model, Procrustes analysis, Local Binary Pattern Histogram, statistical model
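As an illustration of the Local Binary Pattern Histogram descriptor named in the keywords, the following is a minimal sketch of an 8-neighbour LBP histogram. The toy image patch, radius-1 neighbourhood, and 256-bin histogram are illustrative assumptions, not the authors' exact configuration.

```python
def lbp_code(img, r, c):
    """LBP code of pixel (r, c): one bit per neighbour, set when the
    neighbour is >= the centre pixel, read clockwise from top-left."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Toy 4x4 grayscale patch with a single bright pixel.
patch = [[10, 10, 10, 10],
         [10, 50, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
hist = lbp_histogram(patch)
```

A texture classifier would then compare such histograms (e.g., by chi-squared distance) between a candidate frame and the reference neutral frames.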
Procedia PDF Downloads 340
2144 Aquatic Sediment and Honey of Apis mellifera as Bioindicators of Pesticide Residues
Authors: Luana Guerra, Silvio C. Sampaio, Vladimir Pavan Margarido, Ralpho R. Reis
Abstract:
Brazil is the world's largest consumer of pesticides. The excessive use of these compounds has negative impacts on animal and human life, the environment, and food security. Bees, crucial for pollination, are exposed to pesticides during the collection of nectar and pollen, posing risks to their health and the food chain, including honey contamination. Aquatic sediments are also affected, impacting water quality and the microbiota. Therefore, the analysis of aquatic sediments and bee honey is essential to identify environmental contamination and monitor ecosystems. The aim of this study was to use samples of honey from honeybees (Apis mellifera) and aquatic sediment as bioindicators of environmental contamination by pesticides and to relate them to agricultural use in the surrounding areas. Sediment and honey were collected in two stages: the first in the Bituruna municipality region in the second half of 2022, and the second in the regions of Laranjeiras do Sul, Quedas do Iguaçu, and Nova Laranjeiras in the first half of 2023. In total, 10 collection points were selected, 5 per stage, with one sediment sample and one honey sample collected at each point, totaling 20 samples. The ten honey and ten sediment samples were analyzed at the Laboratory of the Paraná Institute of Technology. The selected extraction method was QuEChERS, and the components present in the samples were determined using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS).
The pesticides Azoxystrobin, Epoxiconazole, Boscalid, Carbendazim, Haloxifope, Fomesafen, Fipronil, Chlorantraniliprole, Imidacloprid, and Bifenthrin were detected in the sediment samples from the study area in Laranjeiras do Sul, Paraná, with Carbendazim being the compound with the highest concentration (0.47 mg/kg). The honey samples obtained from the apiaries showed satisfactory results, as they did not show any detection or quantification of the analyzed pesticides, except for Point 9, which had the fungicide tebuconazole but with a concentration
2143 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D), or for further examination, exploration, research, and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; the traditional approach is also time consuming, labor intensive, and less precise, with limited data. In comparison, the advanced technique saves manpower and provides more precise output with a wide variety of data sets. In this experimentation, the aerial photogrammetry technique is used, in which a UAV flies over an area, captures geocoded images, and produces a three-dimensional model (3-D model). The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, from which we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto as output. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we get a digital map of the area to be surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post-processing kinematics, real-time kinematics, manual data inquiry
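The ground sampling distance mentioned among the flight parameters can be estimated from basic camera geometry. Below is a back-of-the-envelope sketch of the standard nadir GSD formula; the camera numbers in the example are hypothetical, not from this study.

```python
def gsd_cm_per_px(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance in cm/pixel at nadir: the ground
    footprint of one pixel, from similar triangles through the lens."""
    return (altitude_m * sensor_width_mm * 100.0) / (focal_length_mm * image_width_px)

# Example: 100 m altitude, 13.2 mm sensor width, 8.8 mm focal length,
# 5472 px image width (typical 1-inch-sensor UAV camera, assumed here).
gsd = gsd_cm_per_px(100, 13.2, 8.8, 5472)   # roughly 2.7 cm per pixel
```

Lowering the flight altitude or using a longer focal length reduces the GSD, i.e., increases the ground resolution of the survey.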
Procedia PDF Downloads 33
2142 Exergy Analysis of a Vapor Absorption Refrigeration System Using Carbon Dioxide as Refrigerant
Authors: Samsher Gautam, Apoorva Roy, Bhuvan Aggarwal
Abstract:
Vapor absorption refrigeration systems can replace vapor compression systems in many applications, as they can operate on a low-grade heat source and are environment-friendly. Widely used refrigerants such as CFCs and HFCs cause significant global warming. Natural refrigerants can be an alternative to them, among which carbon dioxide is promising for use in automotive air conditioning systems. Its inherent safety, ability to withstand high pressure, and high heat transfer coefficient, coupled with easy availability, make it a likely choice of refrigerant. Various properties of the ionic liquid [bmim][PF₆], such as non-toxicity, stability over a wide temperature range, and the ability to dissolve gases like carbon dioxide, make it a suitable absorbent for a vapor absorption refrigeration system. In this paper, an absorption chiller consisting of a generator, condenser, evaporator, and absorber was studied at an operating temperature of 70 °C. A thermodynamic model was set up using the Peng-Robinson equations of state to predict the behavior of the refrigerant-absorbent pair at different points in the system. A MATLAB code was used to obtain the values of enthalpy and entropy at selected points in the system. The exergy destruction in each component and the exergetic coefficient of performance (ECOP) of the system were calculated by performing an exergy analysis based on the second law of thermodynamics. Graphs were plotted of the ECOP obtained under varying operating conditions, and the effect of every component on the ECOP was examined. The exergetic coefficient of performance was found to be lower than the coefficient of performance based on the first law of thermodynamics.
Keywords: [bmim][PF₆] as absorbent, carbon dioxide as refrigerant, exergy analysis, Peng-Robinson equations of state, vapor absorption refrigeration
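The ECOP comparison described above can be sketched with the textbook form of the exergetic COP for an absorption chiller: the exergy of the cooling effect divided by the exergy supplied to the generator (pump work neglected). The temperatures and heat loads below are illustrative placeholders, not values from the paper.

```python
def ecop(q_evap, t_evap, q_gen, t_gen, t0):
    """Exergetic COP of an absorption chiller (pump work neglected).
    Temperatures in kelvin, heat loads in kW; t0 is the dead-state
    (ambient) temperature. Carnot factors convert heat to exergy."""
    exergy_cooling = q_evap * abs(1.0 - t0 / t_evap)   # exergy of cooling effect
    exergy_input = q_gen * (1.0 - t0 / t_gen)          # exergy of generator heat
    return exergy_cooling / exergy_input

# Illustrative operating point: 10 kW cooling at 278 K, driven by
# 14 kW of generator heat at 343 K (70 °C), ambient at 298 K.
cop = 10.0 / 14.0   # first-law COP = Q_evap / Q_gen, about 0.71
e = ecop(q_evap=10.0, t_evap=278.0, q_gen=14.0, t_gen=343.0, t0=298.0)
```

Even for these made-up numbers the ECOP comes out well below the first-law COP, consistent with the abstract's conclusion, because the Carnot factors discount both the low-grade heat input and the near-ambient cooling effect.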
Procedia PDF Downloads 289
2141 A Combined CFD Simulation of Plateau Borders including Films and Transitional Areas of Liquid Foams
Authors: Abdolhamid Anazadehsayed, Jamal Naser
Abstract:
An integrated computational fluid dynamics model is developed for a combined simulation of Plateau borders, films, and the transitional areas between the films and the Plateau borders, to reduce the simplifications and shortcomings of available models for foam drainage at the micro-scale. Additionally, the counter-flow related to the Marangoni effect in the transitional area is investigated. The results of this combined model capture the contributions of the films, the exterior Plateau borders, and the Marangoni flow in the drainage process more accurately, since the mutual influence of the foam's elements is included in this study. The flow rate of the exterior Plateau borders can be four times larger than that of the interior ones. The exterior bubbles can be more prominent in the drainage process in cases where the number of exterior Plateau borders increases due to the geometry of the container. The ratio of the Marangoni counter-flow to the Plateau border flow increases drastically with an increase in the mobility of the air-liquid interface. However, the exterior bubbles follow the same trend with much less intensity, since the flow is typically less dependent on the air-liquid interface in the exterior bubbles. Moreover, the Marangoni counter-flow in a near-wall transitional area is less important than in an internal one. The influence of air-liquid interface mobility on the average velocity of interior foams is obtained with greater accuracy under more realistic boundary conditions, and has been compared with other numerical and analytical results. The contribution of the films to drainage is significant for mobile foams, as the velocity of flow in the film has the same order of magnitude as the velocity in the Plateau border. Nevertheless, for foams with rigid interfaces, the films' contribution to foam drainage is insignificant, particularly for the films near the wall of the container.
Keywords: foam, Plateau border, film, Marangoni, CFD, bubble
Procedia PDF Downloads 345
2140 Graphical Theoretical Construction of Discrete Time Share Price Paths from Matroid
Authors: Min Wang, Sergey Utev
Abstract:
The lessons from the 2007-09 global financial crisis have driven scientific research toward the design of new methodologies and financial models for the global market. The quantum mechanics approach has been introduced into unpredictable stock market modeling. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on the path-dependent calculation method, motivated by the Feynman path integral method. Path calculation can be studied in two ways: by labeling or computationally. Labeling is part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graph-theoretical construction of individual share price paths via matroids is presented. Firstly, the theory of matroids is reviewed, the relationship between lattice path matroids and Tutte polynomials is studied, and ways to connect points in lattice path matroids and Tutte polynomials are suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid, rather than from a general lattice path matroid. Lastly, a way to represent share price paths via general binary trees is suggested, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and the Tutte polynomial of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial
Procedia PDF Downloads 139
2139 Charter versus District Schools and Student Achievement: Implications for School Leaders
Authors: Kara Rosenblatt, Kevin Badgett, James Eldridge
Abstract:
There is a preponderance of information regarding the overall effectiveness of charter schools and their ability to increase academic achievement compared to traditional district schools. Most research on the topic is focused on comparing long- and short-term outcomes, academic achievement in mathematics and reading, and locale (i.e., urban vs. rural). While lingering unanswered questions regarding effectiveness continue to loom for school leaders, data on charter schools suggest that enrollment increases by 10% annually and that charter schools educate more than 2 million U.S. students across 40 states each year. Given the increasing share of U.S. students educated in charter schools, it is important to better understand possible differences in student achievement, defined in multiple ways, for students in charter schools and for those in Independent School District (ISD) settings in the state of Texas. Data were retrieved from the Texas Education Agency's (TEA) repository, which includes data organized annually and available on the TEA website. Specific data points and definitions of achievement were based on characterizations of achievement found in the relevant literature; they include, but are not limited to, graduation rate, student performance on standardized testing, and teacher-related factors such as experience and longevity in the district. Initial findings indicate some similarities with the current literature on long-term student achievement in English/Language Arts; however, the findings differ substantially from other recent research related to long-term student achievement in social studies. There are also a number of interesting findings related to differences in achievement between students in charters and ISDs and within different types of charter schools in Texas.
In addition to findings, implications for leadership in different settings will be explored.
Keywords: charter schools, ISDs, student achievement, implications for PK-12 school leadership
Procedia PDF Downloads 129
2138 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe
Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira
Abstract:
Supporting cross-border secure and reliable exchange of data and documents, and promoting data interoperability, is critical for Europe to enhance sectors like eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) among both private and public entities. Moreover, the project demonstrates the feasibility of and interest in the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first sets a common background for the requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publishers (SMPs), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we chose AS4 over other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509, and SAML authentication.
The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles; thus, further efforts will be needed to promote this technology among companies. The consortium includes 9 partners. Of them, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operativity, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental to test the feasibility of cross-border eDelivery of documents in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe, and this will be the base to extend the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number INEA/CEF/ICT/A2016/1278042, Action No. 2016-EU-IA-0063.
Keywords: security, e-delivery, e-invoicing, e-document exchange, trust
Procedia PDF Downloads 267
2137 Uncanny Orania: White Complicity as the Abject of the Discursive Construction of Racism
Authors: Daphne Fietz
Abstract:
This paper builds on a reflection on an autobiographical experience of uncanniness during fieldwork in the white Afrikaner settlement Orania in South Africa. Drawing on Kristeva's theory of abjection to establish a theory of Whiteness based on boundary threats, it is argued that the uncanny experience, as the emergence of the abject, points to a moment of crisis of the author's Whiteness. The emanating abject directs the author to her closeness to, or convergence with, Orania's inhabitants, that is, a reciprocity based on mutual Whiteness. The experienced confluence appeals to the author's White complicity in racism. With recourse to Butler's theory of subjectivation, the abject, White complicity, inhabits both the outside of a discourse on racism and the outside of the 'self', as 'I' establish myself in relation to discourse. In this view, the qualities of the experienced abject are linked to the abject of discourse on racism, or, in other words, its frames of intelligibility. It then becomes clear that discourse on (overt) racism functions as a necessary counter-image through which White morality is established rather than questioned, because here, by White reasoning, the abject of complicity in racism is successfully repressed, curbed as completely impossible in the binary construction. Hence, such discourse risks preserving racism in its pre-discursive and structural forms as long as its critique does not encompass its own location and performance in discourse. Discourse on overt racism is indispensable to White ignorance, as it covers underlying racism and pre-empts further critique. This understanding directs us towards a form of critique that necessitates self-reflection, uncertainty, and vigilance, which will be referred to as a discourse of relationality.
Such a discourse diverges from the presumption of a detached author as a point of reference and instead departs from attachment, dependence, and mutuality, embracing the visceral as a resource of knowledge of relationality. A discourse of relationality points to another possibility of White engagement with Whiteness and racism, and further promotes a conception of responsibility that allows for and highlights dispossession and relationality, in contrast to single agency and guilt.
Keywords: abjection, discourse, relationality, the visceral, whiteness
Procedia PDF Downloads 158
2136 Structural Design for Effective Load Balancing of the Iron Frame in Manhole Lid
Authors: Byung Il You, Ryun Oh, Gyo Woo Lee
Abstract:
A manhole is a facility that gives people access for the cleaning and inspection of sewers, and its covering is called the manhole lid. Manhole lids are typically made of cast iron. Due to their heavy weight, the installation and maintenance of cast iron manhole lids are not easy, and electrical shock and corrosion aging can cause critical problems. Manufacturing the manhole body and lid from a fiber-reinforced composite material can reduce the weight considerably compared to a cast iron manhole. However, fiber reinforcement alone can hardly sustain the heavy load, and a method using an iron frame with double injection molding of the composite material has been widely proposed. Reflecting this market situation, this study carried out the structural design of the iron frame for a composite manhole lid. Structural analysis with computer simulation was conducted to distribute the load effectively over the iron frame. In addition, we assessed manufacturing costs by comparing the weights and the number of welding spots of the frames. Although the cross-sectional area is reduced by up to 38% compared with the basic solid form, the maximum von Mises stress increases locally near the rim by at least about 7 times, and the maximum strain in the central part of the lid by about 5.5 times. The number of welding points, which is related to the manufacturing cost, increased gradually with more complicated shapes. A higher arch at the center of the lid might also give better results; however, considering the economics of composite fabrication, we set the height of the arch at the center of the lid equal to the frame thickness. Additionally, in consideration of the number of welding points, we selected the hexagon as the optimal shape.
Acknowledgment: These are results of a study on the 'Leaders Industry-university Cooperation' Project, supported by the Ministry of Education (MOE).
Keywords: manhole lid, iron frame, structural design, computer simulation
Procedia PDF Downloads 275
2135 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes
Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld
Abstract:
Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a 1H NMR-linked principal component regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow-frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA) contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min (n = 6 replicates per sample). Aldehydes were determined by 1H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes, predominantly arising from PUFA peroxidation) loaded strongly and positively on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes, derived from both MUFA and PUFA hydroperoxides) loaded strongly and positively on PC2*.
PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and the [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all contributed powerfully to the aldehydic PC1* SVs (p < 10⁻³ to < 10⁻⁹), as did all PC1-3 x PC4 interaction terms (p < 10⁻⁵ to < 10⁻⁹). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3) and on the interactions of PC1 and PC2 with PC4 (p < 10⁻⁹ in each case), but not on the PC3 x PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
Keywords: frying oils, lipid oxidation products, frying episodes, chemometrics, principal component regression, NMR analysis, cytotoxic/genotoxic aldehydes
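The core of the modelling strategy described above is principal component regression: project centred predictors onto their leading principal components, then regress the response on the component scores. A minimal numerical sketch follows; the synthetic toy data stand in for the NMR matrices and are not the study's measurements.

```python
import numpy as np

def pcr_fit_predict(X, y, n_components):
    """Fit a PCR model with n_components principal components and
    return in-sample predictions for the response y."""
    Xc = X - X.mean(axis=0)
    # Principal component directions via SVD of the centred predictors.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T            # PC scores matrix
    # Ordinary least squares of the centred response on the scores.
    beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return scores @ beta + y.mean()

# Synthetic example: response depends linearly on the first two predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.01, size=40)
y_hat = pcr_fit_predict(X, y, n_components=5)
```

With fewer components than predictors, PCR discards low-variance directions and so regularises the regression, which is what makes it suitable for collinear spectral/compositional matrices like those in this study.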
Procedia PDF Downloads 172
2134 Hydro Geochemistry and Water Quality in a River Affected by Lead Mining in Southern Spain
Authors: Rosendo Mendoza, María Carmen Hidalgo, María José Campos-Suñol, Julián Martínez, Javier Rey
Abstract:
The impact of mining environmental liabilities and mine drainage on surface water quality has been investigated in the hydrographic basin of the La Carolina mining district (southern Spain). This abandoned mining district is characterized by important mineralizations of Pb-Ag sulfoantimonides and Cu-Fe sulfides. All surface waters reach the main river of this mining area, the Grande River, which ends its course in the Rumblar reservoir. This waterbody is intended to supply 89,000 inhabitants, as well as irrigation and livestock. Therefore, the analysis and control of the metal(loid) concentrations in these surface waters is an important issue because of the potential pollution derived from metallic mining. A hydrogeochemical campaign consisting of 20 water sampling points was carried out in the hydrographic network of the Grande River, as well as two sampling points in the Rumblar reservoir and at the main tailings impoundment draining to the river. Although acid mine drainage (pH below 4) is discharged into the Grande River from some mine adits, the pH values in the river water are always neutral or slightly alkaline. This is mainly the result of the dilution of the small volumes of mine water by the net alkaline waters of the river. However, during the dry season, the surface waters present high mineralization due to the constant discharge from the abandoned flooded mines and a decrease in the contribution of surface runoff. The concentrations of dissolved Cd and Pb in the water reach values of 2 and 81 µg/l, respectively, exceeding the limits established by the Environmental Quality Standard for surface water. In addition, the concentrations of dissolved As, Cu, and Pb in the waters of the Rumblar reservoir reached values of 10, 20, and 11 µg/l, respectively.
These values are higher than the maximum allowable concentration for human consumption, a circumstance that is especially alarming.
Keywords: environmental quality, hydrogeochemistry, metal mining, surface water
Procedia PDF Downloads 144
2133 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed, and managed in terms of hydrological, chemical, and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals like the River Styles Framework, the Catchment Baseline Survey, or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrological, sedimentological, and biological regimes). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size; a discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness: each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins.
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance, whether natural or anthropogenic. The hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the values of their equations. The researcher has to check some of the control points annually; this procedure makes it possible to monitor the geomorphological quality of the rivers and to see whether there are any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
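The per-type discriminant equations described above reduce, at classification time, to scoring each reach in the two variables (specific stream power and mean grain size) and assigning it to the type with the highest score. A minimal sketch follows; the type names and coefficients are invented placeholders, not the fitted Galician equations.

```python
# Hypothetical linear discriminant equations, one per geomorphological
# type: score = intercept + b1 * stream_power + b2 * grain_size.
DISCRIMINANTS = {
    "gravel-bed": (-4.0, 0.030, 0.050),
    "sand-bed":   (-1.0, 0.005, 0.002),
    "bedrock":    (-9.0, 0.060, 0.020),
}

def classify_reach(stream_power, grain_size):
    """Assign a reach (stream power in W/m2, grain size in mm) to the
    geomorphological type whose discriminant score is largest."""
    scores = {
        t: a + b1 * stream_power + b2 * grain_size
        for t, (a, b1, b2) in DISCRIMINANTS.items()
    }
    return max(scores, key=scores.get)

label = classify_reach(stream_power=50.0, grain_size=40.0)
```

Monitoring a control point then amounts to re-measuring the two variables and checking whether the assigned type has drifted, which is what flags a geomorphological disturbance.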
Procedia PDF Downloads 367
2132 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data
Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour
Abstract:
Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). 
Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R²HIS =0.5, R²MVPA= 0.28), cognitive ability (R²HIS =0.31, R²MVPA= 0.15), and age (R²HIS =0.15, R²MVPA= 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.Keywords: physical activity, machine learning, under 5s, disability, accelerometer
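The cut-points baseline that the data-driven approach is compared against can be sketched as follows. The thresholds and epoch values are placeholder assumptions, not the calibrated cut points of any published study.

```python
import numpy as np

# Sketch of the traditional cut-points step: each epoch's acceleration
# magnitude is binned into an intensity class by fixed thresholds.
# These thresholds are invented placeholders for illustration only.
CUT_POINTS = {"SED": (0.0, 0.05), "LPA": (0.05, 0.42), "MVPA": (0.42, np.inf)}

def classify_epochs(magnitudes):
    """Label each epoch (e.g. mean vector magnitude in g) by intensity band."""
    labels = []
    for m in magnitudes:
        for name, (lo, hi) in CUT_POINTS.items():
            if lo <= m < hi:
                labels.append(name)
                break
    return labels

def minutes_per_state(labels, epoch_minutes=1.0):
    """Aggregate epoch labels into time-in-state totals, as in the comparison."""
    out = {}
    for lab in labels:
        out[lab] = out.get(lab, 0.0) + epoch_minutes
    return out

epochs = [0.01, 0.03, 0.30, 0.80, 0.02, 0.55]
print(minutes_per_state(classify_epochs(epochs)))
# e.g. {'SED': 3.0, 'LPA': 1.0, 'MVPA': 2.0}
```

The data-driven HSMM replaces the fixed `CUT_POINTS` table with states learned from the population under study, which is why it needs no external calibration.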
Procedia PDF Downloads 212
2131 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel
Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi
Abstract:
The yield point represents the upper limit of forces which can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture change depending on this condition. As it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring the deformation with various strain gauges and monitoring the surface temperature with the help of a thermography camera. The yield point of specimens was estimated with the help of the temperature dip, which occurs due to the thermoelastic effect during plastic deformation. The scattering of the data has been checked by performing a repeatability analysis. The effects of temperature imperfection and light source have been checked by carrying out the tests in the daytime as well as at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; from this it can be concluded that the camera is independent of testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis has been performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for the prediction of results extracted with the help of the thermographic technique.Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point
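The SNR comparison described above can be illustrated with a minimal sketch. The temperature traces and noise levels below are synthetic assumptions, not measurements from the study.

```python
import numpy as np

# Hedged sketch of the signal-to-noise check: compare the SNR of two
# temperature traces (daytime vs. midnight) to argue the camera is
# insensitive to ambient light.  The traces are synthetic placeholders.
def snr_db(signal, noisy):
    """SNR in dB: signal power over residual (noise) power."""
    noise = np.asarray(noisy) - np.asarray(signal)
    return 10.0 * np.log10(np.mean(np.square(signal)) / np.mean(np.square(noise)))

t = np.linspace(0.0, 1.0, 500)
true_temp = 25.0 + 0.2 * np.sin(2 * np.pi * 3 * t)   # small thermoelastic oscillation
rng = np.random.default_rng(1)
day = true_temp + rng.normal(0.0, 0.05, t.size)      # same sensor noise both times
night = true_temp + rng.normal(0.0, 0.05, t.size)
print(round(snr_db(true_temp, day), 1), round(snr_db(true_temp, night), 1))
```

Comparable SNR values for the two acquisition times are the kind of evidence used to conclude the camera is independent of the visible light source.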
Procedia PDF Downloads 109
2130 Building an Arithmetic Model to Assess Visual Consistency in Townscape
Authors: Dheyaa Hussein, Peter Armstrong
Abstract:
The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators including users’ preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape. It focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by calculating their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users’ preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development.
The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity by achieving a higher level of consistency and satisfaction among the citizens of evolving townscapes.Keywords: townscape, urban design, visual assessment, visual consistency
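Two of the proposed measurements, colour distance and the standard deviation as a consistency score, can be sketched as follows. The facade colours and the adjacent-pair scheme are assumptions for illustration only.

```python
import math

# Illustrative sketch (invented data): Euclidean colour distance between
# adjacent facades, and the standard deviation of those distances as a
# consistency score -- lower spread suggests higher visual consistency.
def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours."""
    return math.dist(c1, c2)

def consistency(values):
    """Population standard deviation of a list of measurements."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

facades = [(200, 180, 150), (205, 175, 155), (90, 90, 200), (198, 182, 148)]
gaps = [colour_distance(a, b) for a, b in zip(facades, facades[1:])]
print([round(g, 1) for g in gaps], round(consistency(gaps), 1))
```

The third facade, an outlier colour, produces two large adjacent distances and inflates the spread, which is exactly the kind of inconsistency the measurement is meant to flag.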
Procedia PDF Downloads 314
2129 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India
Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel
Abstract:
Planning and drawing are important aspects of civil engineering. Computer-based urban models are used for testing theories about spatial location and the interaction between land uses and related activities. The planner’s primary interest is in the creation of 3D models of buildings and in obtaining the terrain surface so that urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. can be carried out. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3590000 m2 with geographical coordinates of North Latitude 23°9’5’’N to 23°10’55’’N and East Longitude 72°42’2’’E to 72°42’16’’E. Therefore, to develop 3D city models of GIFT city, the base map of the city was collected from the GIFT office. A Differential Geographical Positioning System (DGPS) was used to collect the Ground Control Points (GCP) from the field. The GCP points were used for the registration of the base map in QGIS. The registered map was projected in the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate height of the buildings that are going to be built was collected from the GIFT office and placed in the attribute table of each layer created using shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m X 30 m) grid data were used to generate the terrain of GIFT city. The Google Satellite Map was placed in the background to get the exact location of the GIFT city. Various plugins and tools in QGIS were used to convert the raster layer of the base map of GIFT city into a 3D model. The fly-through tool was used for capturing and viewing the entire area of the city in 3D.
This paper discusses all techniques and their usefulness in 3D city model creation from the GCP, base map, SRTM and QGIS.Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM
Procedia PDF Downloads 248
2128 Practical Experiences in the Development of a Lab-Scale Process for the Production and Recovery of Fucoxanthin
Authors: Alma Gómez-Loredo, José González-Valdez, Jorge Benavides, Marco Rito-Palomares
Abstract:
Fucoxanthin is a carotenoid that exerts multiple beneficial effects on human health, including antioxidant, anti-cancer, antidiabetic and anti-obesity activity, making the development of a whole process for its production and recovery an important contribution. In this work, the lab-scale production and purification of fucoxanthin from Isochrysis galbana have been studied. In batch cultures, low light intensities (13.5 μmol/m2s) and bubble agitation were the best conditions for production of the carotenoid, with product yields of up to 0.143 mg/g. After ethanolic extraction of fucoxanthin from the biomass and hexane partition, further recovery and purification of the carotenoid were accomplished by means of an alcohol – salt Aqueous Two-Phase System (ATPS) extraction followed by an ultrafiltration (UF) step. An ATPS comprised of ethanol and potassium phosphate (Volume Ratio (VR) = 3; Tie-Line Length (TLL) 60% w/w) presented the highest fucoxanthin recovery yield among the studied systems (76.24 ± 1.60%) and was able to remove 64.89 ± 2.64% of the carotenoid and chlorophyll pollutants. For UF, the addition of ethanol to the recovered ethanolic ATPS stream to a final concentration of 74.15% (w/w) resulted in a reduction of approximately 16% in the protein contents, increasing product purity with a recovery yield of about 63% of the compound in the permeate stream. Considering the production, extraction and primary recovery (ATPS and UF) steps, a global fucoxanthin recovery of around 45% should be expected. Although other purification technologies, such as Centrifugal Partition Chromatography, are able to obtain fucoxanthin recoveries of up to 83%, the process developed in the present work does not require large volumes of solvents or expensive equipment.
Moreover, it has the potential for scale-up to commercial production and represents a cost-effective strategy compared to traditional separation techniques like chromatography.Keywords: aqueous two-phase systems, fucoxanthin, Isochrysis galbana, microalgae, ultrafiltration
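A back-of-envelope check of the quoted ~45% global recovery, treating the step yields as independent. The extraction-step yield below is an assumed placeholder chosen to illustrate the arithmetic; the ATPS and UF yields are the ones given in the abstract.

```python
# Multiply the per-step recovery yields to estimate the global recovery.
atps_yield = 0.7624          # ATPS recovery (76.24%, from the abstract)
uf_yield = 0.63              # UF permeate recovery (~63%, from the abstract)
extraction_yield = 0.94      # ASSUMED ethanolic extraction + hexane partition yield

global_recovery = extraction_yield * atps_yield * uf_yield
print(f"{global_recovery:.1%}")   # ~45%
```

With any realistic extraction yield in the low-to-mid 90% range, the product of the three steps lands near the reported ~45% figure.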
Procedia PDF Downloads 424
2127 An Efficient Robot Navigation Model in a Multi-Target Domain amidst Static and Dynamic Obstacles
Authors: Michael Ayomoh, Adriaan Roux, Oyindamola Omotuyi
Abstract:
This paper presents an efficient robot navigation model in a multi-target domain amidst static and dynamic workspace obstacles. The problem is that of developing an optimal algorithm to minimize the total travel time of a robot as it visits all target points within its task domain amidst unknown workspace obstacles and finally returns to its initial position. In solving this problem, a classical algorithm was first developed to compute the optimal number of paths to be travelled by the robot amidst the network of paths. The principle of shortest distance between the robot and the targets was used to compute the target point visitation order amidst workspace obstacles. An algorithm premised on the standard polar coordinate system was developed to determine the length of obstacles encountered by the robot, hence giving room for a geometrical estimation of the total surface area occupied by an obstacle, especially when classified as a relevant obstacle, i.e., an obstacle that lies between the robot and its potential visitation point. A stochastic model was developed and used to estimate the likelihood of a dynamic obstacle bumping into the robot’s navigation path and, finally, the navigation/obstacle avoidance algorithm was hinged on the hybrid virtual force field (HVFF) method. Significant modelling constraints herein include the choice of navigation path to selected target points, the possible presence of static obstacles along a desired navigation path, the likelihood of encountering a dynamic obstacle along the robot’s path, and the chance of it remaining in place as a static obstacle, hence resulting in a case of re-routing after routing. The proposed algorithm demonstrated a high potential for optimal solutions in terms of efficiency and effectiveness.Keywords: multi-target, mobile robot, optimal path, static obstacles, dynamic obstacles
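The shortest-distance visitation-order rule can be sketched as a greedy nearest-neighbour ordering over the targets. Obstacle handling is omitted and the coordinates are invented for the example.

```python
import math

# Greedy nearest-neighbour visitation order: always go to the closest
# remaining target.  This is a sketch of the shortest-distance principle
# only; the paper's obstacle classification and HVFF avoidance are omitted.
def visitation_order(start, targets):
    order, pos, remaining = [], start, list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return order

start = (0.0, 0.0)
targets = [(5.0, 5.0), (1.0, 0.5), (6.0, 1.0)]
tour = visitation_order(start, targets)
print(tour)  # [(1.0, 0.5), (6.0, 1.0), (5.0, 5.0)]
```

In the full model the straight-line distance would be replaced by an obstacle-aware path cost before choosing the next target.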
Procedia PDF Downloads 281
2126 Experimental Analyses of Thermoelectric Generator Behavior Using Two Types of Thermoelectric Modules for Marine Application
Authors: A. Nour Eddine, D. Chalet, L. Aixala, P. Chessé, X. Faure, N. Hatat
Abstract:
Thermal power technology such as the TEG (Thermo-Electric Generator) attracts significant attention worldwide for waste heat recovery. Despite the potential benefits of marine application due to the permanent heat sink from sea water, no significant studies on this application were to be found. In this study, a test rig has been designed and built to test the performance of the TEG at engine operating points. The TEG device is built from commercially available materials for the sake of possible economical application. Two types of commercial TEM (thermo-electric module) have been studied separately on the test rig. The engine data were extracted from a commercial Diesel engine, since it shares the same principle as the marine Diesel engine in terms of engine efficiency and exhaust. An open-circuit water cooling system is used to replicate the sea water cold source. The characterization tests showed that the silicon-germanium alloy TEM proved remarkably reliable at all engine operating points, with no significant deterioration of performance even under severe variation in the hot source conditions. The performance of the bismuth-telluride alloys was 100% better than that of the first type of TEM, but it showed a deterioration in power generation when the air temperature exceeds 300 °C. The temperature distribution on the heat exchange surfaces revealed no useful combination of these two types of TEM with this tube length, since the surface temperature difference between both ends is no more than 10 °C. This study exposed the prospects of TEG technology for marine engine exhaust heat recovery. Although the results suggested insufficient power generation from the low-cost commercial TEM used, they provide valuable information about TEG device optimization, including the design of the heat exchanger and the types of thermo-electric materials.Keywords: internal combustion engine application, Seebeck, thermo-electricity, waste heat recovery
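A rough sketch of the electrical side of a TEM using the standard Seebeck relations. The Seebeck coefficient, couple count, temperature difference and internal resistance below are illustrative values, not parameters of the modules tested in the study.

```python
# Open-circuit voltage of a module: V_oc = S * N * dT, where S is the
# per-couple Seebeck coefficient, N the number of couples and dT the
# hot-to-cold temperature difference.  At a matched load the delivered
# power is P = V_oc^2 / (4 * R_int).  All numbers are illustrative.
def tem_matched_power(seebeck_v_per_k, n_couples, dT, r_internal):
    """Matched-load electrical power of one thermoelectric module."""
    v_oc = seebeck_v_per_k * n_couples * dT
    return v_oc ** 2 / (4.0 * r_internal)

p = tem_matched_power(seebeck_v_per_k=200e-6, n_couples=127, dT=200.0, r_internal=2.0)
print(round(p, 2), "W")
```

The quadratic dependence on dT is why the abstract's observation of only a ~10 °C surface temperature difference along the tube translates into very little recoverable power.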
Procedia PDF Downloads 244
2125 Indicators and Sustainability Dimensions of the Mediterranean Diet
Authors: Joana Margarida Bôto, Belmira Neto, Vera Miguéis, Manuela Meireles, Ada Rocha
Abstract:
The Mediterranean diet has been recognized as a sustainable model of living with benefits for the environment and human health. However, to our best knowledge, a complete assessment of its sustainability encompassing all dimensions and aspects has not yet been realized. This systematic literature review aimed to fill this gap by identifying and describing the indicators used to assess the sustainability of the Mediterranean diet, looking at several dimensions, and presenting the results from their application. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines methodology was used, and searches were conducted in PubMed, Scopus, Web of Science, and GreenFile. Thirty-two articles evaluating the sustainability of the Mediterranean diet were identified. The environmental impact was quantified in twenty-five of these studies, the nutritional quality was evaluated in seven studies, and the daily cost of the diet was assessed in twelve studies. A total of thirty-three indicators were identified and separated by four dimensions of sustainability, specifically, the environmental dimension (ten indicators, namely carbon, water, and ecological footprint), the nutritional dimension (eight indicators, namely the Health score and the Nutrient Rich Food Index), the economic dimension (one indicator, the dietary cost), and the sociocultural dimension (six indicators – with no results). Only eight of the studies used combined indicators. The Mediterranean diet was considered in all articles as a sustainable dietary pattern with a lower impact than Western diets. The carbon footprint ranged between 0.9 and 6.88 kg CO₂/d per capita, the water footprint between 600 and 5280 m³/d per capita, and the ecological footprint between 2.8 and 53.42 m²/d per capita. The nutritional quality was high, obtaining 122 points using the Health score and 12.95 to 90.6 points using the Nutrient Rich Food Index.
The cost of the Mediterranean diet did not significantly differ from other diets and varied between 3.33 and 14.42€/d per capita. A diverse approach to evaluating the sustainability of the Mediterranean diet was found.Keywords: Mediterranean diet, sustainability, environmental indicators, nutritional indicators
Procedia PDF Downloads 99
2124 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta).
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
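The outlier-removal idea described above, dropping prediction points that fall outside the parameter space of the training data, can be sketched with a simple per-feature z-score test. This is a generic illustration, not FreqAI's actual implementation.

```python
import numpy as np

# Generic sketch: characterize the training data's parameter space by
# per-feature mean and standard deviation, then reject incoming prediction
# points that lie too far outside it.  Not FreqAI's real internals.
def fit_bounds(train_features):
    """Summarize the training parameter space (per-feature mean and std)."""
    return train_features.mean(axis=0), train_features.std(axis=0)

def is_inlier(point, mean, std, z_max=3.0):
    """Keep a prediction point only if every feature is within z_max sigma."""
    return bool(np.all(np.abs((point - mean) / std) <= z_max))

rng = np.random.default_rng(7)
train = rng.normal(0.0, 1.0, size=(1000, 5))   # dynamic training data-set
mean, std = fit_bounds(train)
print(is_inlier(np.zeros(5), mean, std))        # typical point -> True
print(is_inlier(np.full(5, 10.0), mean, std))   # far outside training data -> False
```

Production systems typically use stronger multivariate tests (Mahalanobis distance, isolation forests, etc.), but the principle of comparing prediction points against the training parameter space is the same.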
Procedia PDF Downloads 89
2123 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil
Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins
Abstract:
Brazil is a top consumer of pesticides worldwide, and the São Paulo State is one of the highest consumers among the Brazilian federative states. However, representative data about the occurrence of pesticides in surface waters of the São Paulo State are scarce. This paper presents the results of pesticide monitoring executed within the Water Quality Monitoring Network of CETESB (the Environmental Agency of the São Paulo State) in the 2018-2022 period. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land use (5 to 85% of cultivated areas). The samples were collected throughout the year, including high-flow and low-flow conditions. The frequency of sampling varied between 4 and 6 times per year. The selection of pesticide molecules for monitoring followed a prioritizing process based on EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticide extractions from aqueous samples were performed according to USEPA 3510C and 3546 methods, following quality assurance and quality control procedures. Determination of pesticides in water extracts (ng L-1) was performed by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen-phosphorus (GC-NPD) and electron capture detectors (GC-ECD). The results showed higher frequencies (20-65%) in surface water samples for Carbendazim (fungicide), Diuron/Tebuthiuron (herbicides) and Fipronil/Imidacloprid (insecticides). The frequencies of observation for these pesticides were generally higher at monitoring points located in sugarcane-cultivated areas. The following pesticides were most frequently quantified above the Aquatic Life Benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or the Brazilian federal regulatory standards (CONAMA Resolution no. 357/2005): Atrazine, Imidacloprid, Carbendazim, 2,4-D, Fipronil, and Chlorpyrifos.
Higher median concentrations of Diuron and Tebuthiuron in the rainy months (October to March) indicated pesticide transport through surface runoff. However, measurable concentrations in the dry season (April to September) for Fipronil and Imidacloprid also indicate pathways related to subsurface or base-flow discharge after pesticide soil infiltration and leaching, or dry deposition following pesticide air spraying. With the exception of Diuron, no temporal trends in the median concentrations of the most frequently quantified pesticides were observed. These results are important to assist policymakers in the development of strategies aimed at reducing pesticide migration from agricultural areas to surface waters. Further studies will be carried out at selected points to investigate potential risks to the aquatic biota as a result of pesticide exposure.Keywords: pesticides monitoring, São Paulo State, water quality, surface waters
Procedia PDF Downloads 59
2122 Remote Sensing and Geographic Information Systems for Identifying Water Catchments Areas in the Northwest Coast of Egypt for Sustainable Agricultural Development
Authors: Mohamed Aboelghar, Ayman Abou Hadid, Usama Albehairy, Asmaa Khater
Abstract:
Sustainable agricultural development of the desert areas of Egypt under the pressure of irrigation water scarcity is a significant national challenge. Existing water harvesting techniques on the northwest coast of Egypt do not ensure the optimal use of rainfall for agricultural purposes. Basin-scale hydrology potentialities were studied to investigate how the available annual rainfall could be used to increase agricultural production. All data related to agricultural production were included in the form of geospatial layers. Thematic classification of Sentinel-2 imagery was carried out to produce the land cover and crop maps following the FAO system of land cover classification. Contour lines and spot height points were used to create a digital elevation model (DEM). Then, the DEM was used to delineate basins, sub-basins, and water outlet points using the Soil and Water Assessment Tool (ArcSWAT). The main soil units of the study area were identified from Land Master Plan maps. Climatic data were collected from existing official sources. The amounts of precipitation, surface water runoff, and potential and actual evapotranspiration for the years 2004 to 2017 are shown as results of ArcSWAT. The land cover map showed that the two tree crops (olive and fig) cover 195.8 km2, while herbaceous crops (barley and wheat) cover 154 km2. The maximum elevation was 250 meters above sea level, while the lowest was 3 meters below sea level. The study area receives a massive but variable amount of precipitation; however, the water harvesting methods are inappropriate for storing water for agricultural purposes.Keywords: water catchments, remote sensing, GIS, sustainable agricultural development
Procedia PDF Downloads 117
2121 A Modernist Project: An Analysis on Dupont’s Translations of Faulkner’s Works
Authors: Edilei Reis, Jose Carlos Felix
Abstract:
This paper explores Wladir Dupont’s translations of William Faulkner’s novels into Brazilian Portuguese in order to comprehend how his translation project regarding Faulkner’s works has addressed the modernist traits of the novelist's fiction, particularly the ambivalence of language, multiple and fragmented points of view, and syntax. Wladir Dupont (1939-2014) was a prolific Brazilian journalist who benefitted from his experiences as an international correspondent living abroad (USA and Mexico) to become an acclaimed translator later in life. He received the Jabuti Award (Brazil's most prestigious literary award) for his translation of ‘La Otra Voz’ (1994), by the Mexican poet, critic and translator Octavio Paz, a writer to whom he devoted the first years of his career as a translator. As Dupont pointed out in some interviews, the struggle to overcome linguistic and cultural obstacles in the process of translating texts from Spanish to Portuguese was paramount in securing his engagement in the long-term project of translating the fiction of William Faulkner into Brazilian Portuguese. His first enterprise was the translation of Faulkner’s Snopes trilogy: The Hamlet (1940) and The Town (1957), the first two novels, were published in 1997 as O povoado and A cidade; in 1999 the last novel, The Mansion (1959), was published as A mansão. In 2001, Dupont tackled what is considered one of the most challenging novels by the author due to its use of multiple points of view, As I Lay Dying (1930). In 2003, The Reivers (1962) was published under the title Os invictos. His enterprise finished in 2012 with the publication of an anthology of Faulkner’s thriller short stories, Knight’s Gambit (1949), as Lance mortal.
Hence, in this paper we consider Dupont’s trajectory as a translator, paying special attention to the way in which his identity as such is constituted through the process of translating Faulkner’s works.Keywords: literary translation, translator’s identity, William Faulkner, Wladir DuPont
Procedia PDF Downloads 250
2120 Fault Prognostic and Prediction Based on the Importance Degree of Test Point
Authors: Junfeng Yan, Wenkui Hou
Abstract:
Prognostics and Health Management (PHM) is a technology to monitor equipment status and predict impending faults. It is used to predict potential faults, provide fault information, and track trends of system degradation by capturing characteristic signals, so how to detect characteristic signals is very important. The selection of test points plays a very important role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. But when facing large, complicated systems, the dependency model is sometimes not easily built, and the greater difficulty is how to calculate the matrix. Relying on this premise, this paper provides a highly effective method to select test points without a dependency model, based on the signal flow model. The signal flow model is a diagnosis model based on failure modes, which focuses on the system’s failure modes and the dependency relationships between test points and faults. In the signal flow model, fault information can flow from the beginning to the end. According to the signal flow model, we can find the location and structure information of every test point and module. We break the signal flow model up into serial and parallel parts to obtain the final relationship function between the system’s testability or prediction metrics and the test points. Further, through partial derivative operations, we can obtain every test point’s importance degree in determining the testability metrics, such as the undetected rate, false alarm rate, and untrusted rate. This contributes to installing test points according to real requirements and also provides a solid foundation for Prognostics and Health Management. Judging by its effect in practical engineering applications, the method is very efficient.Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate
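The serial decomposition and the partial-derivative importance degree can be sketched as follows. The notation (detection probabilities p_i along a serial path) is an assumption for the example, not taken from the paper.

```python
# For a fault monitored by test points in series, the undetected rate is the
# product of the individual miss probabilities, U(p) = prod_i (1 - p_i).
# The importance degree of test point i is then |dU/dp_i|, i.e. the product
# of (1 - p_j) over all other points j.  Notation is assumed for the sketch.
def undetected_rate(p):
    """p[i] = detection probability of test point i along a serial path."""
    rate = 1.0
    for pi in p:
        rate *= (1.0 - pi)
    return rate

def importance(p, i):
    """|d(undetected rate)/d p_i| = product of (1 - p_j) for j != i."""
    rate = 1.0
    for j, pj in enumerate(p):
        if j != i:
            rate *= (1.0 - pj)
    return rate

p = [0.9, 0.6, 0.3]
print(round(undetected_rate(p), 3))                 # 0.028
print([round(importance(p, i), 2) for i in range(len(p))])
```

Parallel (redundant) branches would combine the other way, with the detection probabilities multiplied instead of the miss probabilities; the same partial-derivative step then yields each point's importance degree for the combined metric.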
Procedia PDF Downloads 378
2119 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Networks (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorous and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of ANN (pattern reconstruction structures) to classify soil properties and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project at Selangor of Malaysia were used. Soil maps were produced by the Kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). 
For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the same numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased these similarities to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
Keywords: artificial neural network, kriging, macronutrient, pattern recognition, precision farming, soil mapping
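The sample-reduction workflow can be sketched in miniature. The following is an illustration, not the authors' implementation: inverse-distance weighting stands in for kriging, a placeholder `ann_predict` stands in for the trained back-propagation network, and all coordinates, values, and the similarity tolerance are made up for the example.

```python
def idw(samples, x, y, power=2.0):
    """Inverse-distance weighting: a simple stand-in for kriging."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v  # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def build_map(samples, nx, ny):
    """Interpolate a value at every cell of an nx-by-ny grid."""
    return [[idw(samples, i, j) for i in range(nx)] for j in range(ny)]

def similarity(map_a, map_b, tol=0.1):
    """Fraction of grid cells whose values agree within a relative tolerance."""
    cells = agree = 0
    for row_a, row_b in zip(map_a, map_b):
        for a, b in zip(row_a, row_b):
            cells += 1
            if abs(a - b) <= tol * max(abs(a), abs(b), 1e-9):
                agree += 1
    return agree / cells

# Hypothetical actual samples (x, y, nutrient value); ann_predict is a
# placeholder for the trained network that generates virtual samples.
actual = [(0, 0, 1.2), (9, 0, 0.8), (0, 9, 1.0), (9, 9, 0.6)]
ann_predict = lambda x, y: idw(actual, x, y)  # stand-in for the trained ANN
virtual = [(4, 4, ann_predict(4, 4)), (2, 7, ann_predict(2, 7))]

base = build_map(actual, 10, 10)                 # map from actual samples only
combined = build_map(actual + virtual, 10, 10)   # actual + virtual samples
```

The similarity score plays the role of the paper's map-comparison parameters: the combined map is compared cell by cell against a base map built from the full sample set.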
Procedia PDF Downloads 71
2118 Existence of Positive Solutions for Second-Order Difference Equation with Discrete Boundary Value Problem
Authors: Thanin Sitthiwirattham, Jiraporn Reunsumrit
Abstract:
We study the existence of positive solutions to a three-point difference summation boundary value problem. We show the existence of at least one positive solution when the nonlinearity f is either superlinear or sublinear, by applying the fixed point theorem due to Krasnoselskii in cones.
Keywords: positive solution, boundary value problem, fixed point theorem, cone
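For reference, the cone fixed point theorem invoked in this abstract is standard; one common statement is:

```latex
% Krasnoselskii's fixed point theorem in a cone
\textbf{Theorem (Krasnoselskii).}
Let $E$ be a Banach space and $K \subset E$ a cone. Let $\Omega_1, \Omega_2$
be bounded open subsets of $E$ with $0 \in \Omega_1$ and
$\overline{\Omega}_1 \subset \Omega_2$, and let
$T : K \cap (\overline{\Omega}_2 \setminus \Omega_1) \to K$
be a completely continuous operator. If either
\[
\|Tu\| \le \|u\| \ \text{for } u \in K \cap \partial\Omega_1
\quad\text{and}\quad
\|Tu\| \ge \|u\| \ \text{for } u \in K \cap \partial\Omega_2,
\]
or the reversed pair of inequalities holds, then $T$ has a fixed point in
$K \cap (\overline{\Omega}_2 \setminus \Omega_1)$.
```

The superlinear and sublinear growth conditions on f are what supply the two boundary inequalities on the two cone annulus boundaries.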
Procedia PDF Downloads 439
2117 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting
Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos
Abstract:
Many grids are increasing the share of renewable energy in their generation mix, which is causing energy generation to become less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response, i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted away from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy shifting game. The energy shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model functions at the intersection of the points signals, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study endeavors to analyze how the dynamic model trains an RL agent and, subsequently, the degree of accuracy to which load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides the framework for further research on transfer learning for RL and, more broadly, transactive control.
Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning
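In its generic form, a linear dynamical system with latent states and exogenous inputs is x[t+1] = A x[t] + B u[t] with a read-out y[t] = C x[t]. The sketch below simulates such a system; the two-state interpretation (latent "habit" and "engagement" driven by the points signal and a baseline trend), the matrices, and the inputs are all hypothetical, chosen only so the dynamics are stable.

```python
def step(x, u, A, B):
    """One latent-state update: x' = A x + B u."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n))
            + sum(B[i][k] * u[k] for k in range(len(u)))
            for i in range(n)]

def simulate(x0, inputs, A, B, C):
    """Roll the latent state forward and emit demand y_t = C x_t at each step."""
    x, demand = list(x0), []
    for u in inputs:
        x = step(x, u, A, B)
        demand.append(sum(C[j] * x[j] for j in range(len(x))))
    return demand

# Hypothetical 2-state model: x = [habit, engagement], u = [points, baseline].
A = [[0.9, 0.05], [0.0, 0.8]]   # stable latent dynamics (spectral radius < 1)
B = [[0.0, 0.1], [0.2, 0.0]]    # how the exogenous signals drive each state
C = [1.0, -0.5]                 # demand read-out from the latent states

inputs = [[1.0, 0.5]] * 20      # constant points and baseline signals
demand = simulate([0.0, 0.0], inputs, A, B, C)
```

With constant inputs and stable A, the simulated demand settles toward the steady state C (I - A)^{-1} B u, which is what makes the forecast responsive to a change in the points signal.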
Procedia PDF Downloads 109
2116 Fixed Points of Contractive-Like Operators by a Faster Iterative Process
Authors: Safeer Hussain Khan
Abstract:
In this paper, we prove a strong convergence result using a recently introduced iterative process with contractive-like operators. This improves and generalizes corresponding results in the literature in two ways: the iterative process is faster, and the operators are more general. Finally, we indicate that the results can also be proved for the iterative process with error terms.
Keywords: contractive-like operator, iterative process, fixed point, strong convergence
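The abstract does not reproduce the iterative process itself. As a hedged illustration only, the sketch below assumes a Picard-Mann-style hybrid scheme (one well-known "faster" process from this line of work) and compares it against plain Picard iteration on a toy contraction T(x) = 0.5x + 1 with fixed point x* = 2; the operator, parameter alpha, and step counts are made up for the example.

```python
def picard(T, x0, steps=50):
    """Plain Picard iteration: x_{n+1} = T(x_n)."""
    x = x0
    for _ in range(steps):
        x = T(x)
    return x

def picard_mann(T, x0, alpha=0.5, steps=50):
    """Picard-Mann hybrid iteration (assumed scheme for illustration):
         y_n     = (1 - alpha) x_n + alpha T(x_n)
         x_{n+1} = T(y_n)
    """
    x = x0
    for _ in range(steps):
        y = (1 - alpha) * x + alpha * T(x)
        x = T(y)
    return x

# Toy contraction on the reals with contraction constant 0.5 and x* = 2.
T = lambda x: 0.5 * x + 1.0
```

On this example the hybrid step has an effective contraction factor of 0.375 versus Picard's 0.5, so its error after n steps is strictly smaller, which is the sense of "faster" the abstract refers to.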
Procedia PDF Downloads 434