Search results for: uniform error
86 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating Sign Language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, and are often skewed towards single skin tones and hand sizes, which makes a portion of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, owing to their large number of required parameters. This work presents a technology that resolves these issues by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q encodes both the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to each triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set S of measurements xᵢ that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y through subtraction of their means. The data is then regularized by applying the Kaiser rule to the resulting eigenvalues, and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
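The correction pipeline described in this abstract (centre, Kaiser-rule regularization, whitening, hyperplane test) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the clustering into pairwise positively correlated groups is simplified to a single error cluster, and all function names and the threshold rule are assumptions.

```python
import numpy as np

def fit_corrector(S, Y):
    """Fit a simple one-cluster corrector.

    S: (n, d) array of all measurement vectors x_i
    Y: (m, d) array of measurements that led to incorrect predictions
    (Illustrative sketch; names and threshold choice are assumptions.)
    """
    # 1. Centre both sets using the mean of the full measurement set S
    mu = S.mean(axis=0)
    Sc, Yc = S - mu, Y - mu

    # 2. Kaiser rule: retain principal components whose eigenvalue
    #    exceeds the mean eigenvalue of the covariance of S
    eigvals, eigvecs = np.linalg.eigh(np.cov(Sc, rowvar=False))
    keep = eigvals > eigvals.mean()
    V, lam = eigvecs[:, keep], eigvals[keep]

    # 3. Whiten: project onto the retained components, unit variance each
    Sw = (Sc @ V) / np.sqrt(lam)
    Yw = (Yc @ V) / np.sqrt(lam)

    # 4. One separating hyperplane for a single error cluster: its normal
    #    points at the cluster mean; the threshold sits halfway between
    #    the bulk of S and the errors along that normal
    w = Yw.mean(axis=0)
    w /= np.linalg.norm(w)
    theta = 0.5 * ((Sw @ w).max() + (Yw @ w).min())
    return mu, V, lam, w, theta

def flag_error(x, mu, V, lam, w, theta):
    # Report x as a (likely) error if it lies beyond the hyperplane
    xw = ((x - mu) @ V) / np.sqrt(lam)
    return float(xw @ w) > theta
```

With well-separated synthetic errors, points from the error cluster fall beyond the fitted hyperplane and are flagged, while bulk measurements are not.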
Procedia PDF Downloads 100
85 Structural Characteristics of HPDSP Concrete on Beam Column Joints
Authors: Hari Krishan Sharma, Sanjay Kumar Sharma, Sushil Kumar Swar
Abstract:
Inadequate transverse reinforcement is considered the main reason for the beam column joint shear failures observed during recent earthquakes. The DSP matrix consists of cement and a high content of micro-silica with a low water-to-cement ratio, while the aggregates are graded quartz sand. The use of reinforcing fibres leads not only to an increase in tensile/bending strength and specific fracture energy, but also to a reduction of brittleness and, consequently, to the production of non-explosive ruptures. Besides, fibre-reinforced materials are more homogeneous and less sensitive to small defects and flaws. Recent works on the freeze-thaw durability (also in the presence of de-icing salts) of fibre-reinforced DSP confirm its excellent behaviour over the expected long-term service life. DSP materials, including fibre-reinforced DSP and CRC (Compact Reinforced Composites), are obtained by using high quantities of superplasticizers and high volumes of micro-silica. Steel fibres of smaller diameter and short length, with high tensile yield strength, are utilized in different fibre volume percentages and aspect ratios to improve performance by reducing the brittleness of the matrix material. In the case of High Performance Densified Small Particle Concrete (HPDSPC), the concrete is dense at the micro-structural level, and its tensile strain capacity is much higher than that of conventional SFRC, SIFCON and SIMCON. Moment-resisting beam-column sub-assemblages were constructed using HPDSPC in the joint region, with varying quantities of steel fibres, fibre aspect ratios and fibre orientations in the critical section. These sub-assemblages were tested under cyclic/earthquake loading. 
Besides loading measurements, frame displacements, diagonal joint strain and rebar strain adjacent to the joint were also measured to investigate the stress-strain behaviour, load-deformation characteristics, joint shear strength, failure mechanism, ductility-associated parameters, stiffness and energy dissipation parameters of the beam column sub-assemblages. Finally, a design procedure for the optimum design of HPDSPC corresponding to the moments, shear forces and axial forces in the reinforced concrete beam-column joint sub-assemblage is proposed. The fact that implementing a material brittleness measure in the design of RC structures can improve structural reliability, by providing uniform safety margins over a wide range of structural sizes and material compositions, is well recognized in structural design and research. This has led to the development of high performance concrete with an optimized combination of various structural properties. The structural applications of HPDSPC, because of its extremely high strength, will reduce dead load significantly as compared to normal weight concrete, thereby offering substantial cost savings and providing improved seismic response, longer spans, thinner sections, less reinforcing steel and lower foundation costs. These cost-effective parameters will make this material more versatile for use in various structural applications such as beam-column joints in industries, airports, parking areas, docks and harbours, as well as containers for hazardous material, safety boxes, and moulds and tools for polymer composites and metals.
Keywords: high performance densified small particle concrete (HPDSPC), steel fibre reinforced concrete (SFRC), slurry infiltrated concrete (SIFCON), slurry infiltrated mat concrete (SIMCON)
Procedia PDF Downloads 303
84 The Importance of Dialogue, Self-Respect, and Cultural Etiquette in Multicultural Society: An Islamic and Secular Perspective
Authors: Julia A. Ermakova
Abstract:
In today's multicultural societies, dialogue, self-respect, and cultural etiquette play a vital role in fostering mutual respect and understanding. Whether viewed from an Islamic or secular perspective, the importance of these values cannot be overstated. Firstly, dialogue is essential in multicultural societies as it allows individuals from different cultural backgrounds to exchange ideas, opinions, and experiences. To engage in dialogue, one must be open and willing to listen, understand, and respect the views of others. This requires a level of self-awareness, where individuals must know themselves and their interlocutors to create a productive and respectful conversation. Secondly, self-respect is crucial for individuals living in multicultural societies (McLarney). One must have adequately high self-esteem and self-confidence to interact with others positively. By valuing oneself, individuals can create healthy relationships and foster mutual respect, which is essential in diverse communities. Thirdly, cultural etiquette is a way of demonstrating the beauty of one's culture by exhibiting good temperament (Al-Ghazali). Adab, a concept that encompasses good manners, praiseworthy words and deeds, and the pursuit of what is considered good, is highly valued in Islamic teachings. By adhering to Adab, individuals can guard against making mistakes and demonstrate respect for others. Islamic teachings provide etiquette for every situation in life, making up the way of life for Muslims. In the Islamic view, an elegant Muslim woman has several essential qualities, including cultural speech and erudition, speaking style, awareness of how to greet, the ability to receive compliments, lack of desire to argue, polite behavior, avoiding personal insults, and having good intentions (Al-Ghazali). The Quran highlights the inclination of people towards arguing, bickering, and disputes (Qur'an, 4:114). 
Therefore, it is imperative to avoid useless arguments and disputes, for they poison our lives. The Prophet Muhammad, peace and blessings be upon him, warned that the most hateful person to Allah is an irreconcilable disputant (Al-Ghazali). By refraining from such behavior, individuals can foster respect and understanding in multicultural societies. From a secular perspective, respecting the views of others is crucial for engaging in productive dialogue. The rule of argument emphasizes the importance of showing respect for the other person's views, allowing for the possibility of error on one's part, and avoiding telling someone they are wrong (Atamali). By exhibiting polite behavior and having respect for everyone, individuals can create a welcoming environment and avoid conflict. In conclusion, the importance of dialogue, self-respect, and cultural etiquette in multicultural societies cannot be overstated. By engaging in dialogue, respecting oneself and others, and adhering to cultural etiquette, individuals can foster mutual respect and understanding in diverse communities. Whether viewed from an Islamic or secular perspective, these values are essential for creating harmonious societies.
Keywords: multiculturalism, self-respect, cultural etiquette, adab, ethics, secular perspective
Procedia PDF Downloads 88
83 Ultra-Rapid and Efficient Immunomagnetic Separation of Listeria Monocytogenes from Complex Samples in High-Gradient Magnetic Field Using Disposable Magnetic Microfluidic Device
Authors: L. Malic, X. Zhang, D. Brassard, L. Clime, J. Daoud, C. Luebbert, V. Barrere, A. Boutin, S. Bidawid, N. Corneau, J. Farber, T. Veres
Abstract:
The incidence of infections caused by foodborne pathogens such as Listeria monocytogenes (L. monocytogenes) poses a great potential threat to public health and safety. These issues are further exacerbated by legal repercussions due to “zero tolerance” food safety standards adopted in developed countries. Unfortunately, a large number of related disease outbreaks are caused by pathogens present in extremely low counts currently undetectable by available techniques. The development of highly sensitive and rapid detection of foodborne pathogens is therefore crucial, and requires robust and efficient pre-analytical sample preparation. Immunomagnetic separation is a popular approach to sample preparation. Microfluidic chips combined with external magnets have emerged as viable high throughput methods. However, external magnets alone are not suitable for the capture of nanoparticles, as very strong magnetic fields are required. Devices that incorporate externally applied magnetic field and microstructures of a soft magnetic material have thus been used for local field amplification. Unfortunately, very complex and costly fabrication processes used for integration of soft magnetic materials in the reported proof-of-concept devices would prohibit their use as disposable tools for food and water safety or diagnostic applications. We present a sample preparation magnetic microfluidic device implemented in low-cost thermoplastic polymers using fabrication techniques suitable for mass-production. The developed magnetic capture chip (M-chip) was employed for rapid capture and release of L. monocytogenes conjugated to immunomagnetic nanoparticles (IMNs) in buffer and beef filtrate. The M-chip relies on a dense array of Nickel-coated high-aspect ratio pillars for capture with controlled magnetic field distribution and a microfluidic channel network for sample delivery, waste, wash and recovery. 
The developed Nickel-coating and passivation process allows the generation of switchable local perturbations within the uniform magnetic field generated by a pair of permanent magnets placed at opposite edges of the chip. This leads to a strong and reversible trapping force, wherein high local magnetic field gradients allow efficient capture of IMNs conjugated to L. monocytogenes flowing through the microfluidic chamber. The experimental optimization of the M-chip was performed using commercially available magnetic microparticles and fabricated silica-coated iron-oxide nanoparticles. The fabricated nanoparticles were optimized to achieve the desired magnetic moment, and their surface functionalization was tailored to allow efficient capture antibody immobilization. The integration, validation and further optimization of the capture and release protocol are demonstrated using both dead and live L. monocytogenes through fluorescence microscopy and the plate-culture method. The capture efficiency of the chip was found to vary as a function of the Listeria-to-nanoparticle concentration ratio. A maximum capture efficiency of 30% was obtained, and the 24-hour plate-culture method allowed the detection of an initial sample concentration of only 16 cfu/ml. The device was also very efficient in concentrating the sample from a 10 ml initial volume. Specifically, 280% concentration efficiency was achieved in only 17 minutes, demonstrating the suitability of the system for food safety applications. In addition, the flexible design and low-cost fabrication process will allow rapid sample preparation for applications beyond food and water safety, including point-of-care diagnosis.
Keywords: array of pillars, bacteria isolation, immunomagnetic sample preparation, polymer microfluidic device
Procedia PDF Downloads 280
82 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ personal accounts as well as facts and figures. However, the process of using numbers and narratives in the newsroom mostly operates through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat the pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system processes logical evidence through effortful analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing otherwise operate in parallel and at the same time. As such, anecdotes and quantified facts impact audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. 
In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects, and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.
Keywords: data, narrative, number, anecdote, storytelling, news
Procedia PDF Downloads 79
81 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland
Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski
Abstract:
Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometre-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue was never the major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of ecological, hydrological and morphological parameters of the lake. Method: 3D 2-phase and 1-phase CFD models were analysed to determine the hydrodynamics of the Sulejow Reservoir. Development of a 3D, 2-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. A 1-phase CFD model, in relation to a 2-phase model, excludes only the dynamics of waves from the simulations, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the flow velocity, to take the effect of the wind on the hydrodynamics into account in the 1-phase CFD model, the plate must move with a speed and direction equal to those of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D, 2-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). 
Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented:
• The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake. Recirculating zones, as large as half a kilometre, may increase water retention time in this region.
• The results of the simulations confirm the pronounced effect of the wind on the development of the water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algal blooms.
Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones in the lake, which increase water retention time and might affect the accumulation of nutrients, were detected. An accurate CFD model of the hydrodynamics of a large water body could help in the development of water quality forecasts, especially in terms of eutrophication, and in the water management of big water bodies.
Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics
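The "average error of less than 10%" agreement figure quoted above corresponds to a simple profile-comparison metric. A minimal sketch, assuming the error is the mean absolute relative deviation between measured and computed velocities (the function name and exact definition are assumptions, not the authors' stated formula):

```python
import numpy as np

def mean_relative_error(measured, computed):
    """Average relative error (in %) between measured and computed
    velocity profiles sampled at the same points.
    Assumes nonzero measured velocities. (Illustrative definition.)"""
    measured = np.asarray(measured, dtype=float)
    computed = np.asarray(computed, dtype=float)
    return float(np.mean(np.abs(computed - measured) / np.abs(measured)) * 100.0)
```

For example, computed velocities of 0.11, 0.18 and 0.40 m/s against measured values of 0.10, 0.20 and 0.40 m/s give relative errors of 10%, 10% and 0%, i.e. an average error of about 6.7%, within the quoted 10% band.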
Procedia PDF Downloads 401
80 User Experience Evaluation on the Usage of Commuter Line Train Ticket Vending Machine
Authors: Faishal Muhammad, Erlinda Muslim, Nadia Faradilla, Sayidul Fikri
Abstract:
To deal with the increasing demand for mass transportation, PT. Kereta Commuter Jabodetabek (KCJ) implements the Commuter Vending Machine (C-VIM) as a solution. Against this background, the C-VIM is implemented as a substitute for the conventional ticket windows, with the purposes of making the transaction process more efficient and introducing self-service technology to commuter line users. However, this implementation causes problems and long queues when users are not accustomed to the machine. The objective of this research is to evaluate the user experience of the commuter vending machine. The goal is to analyze the existing user experience problems and to achieve a better user experience design. The evaluation is done by giving task scenarios according to the features offered by the machine. The features are daily insured ticket sales, ticket refund, and multi-trip card top-up. Twenty people, separated into two groups of respondents, were involved in this research; each group consists of 5 males and 5 females. The groups comprise experienced and inexperienced users, in order to test whether there is a significant difference between the two groups in the measurements. The user experience is measured by both quantitative and qualitative measurements. The quantitative measurement includes user performance metrics such as task success, time on task, errors, efficiency, and learnability. The qualitative measurement includes the system usability scale questionnaire (SUS), the questionnaire for user interface satisfaction (QUIS), and retrospective think aloud (RTA). The usability performance metrics show that 4 out of 5 indicators are significantly different between the two groups. This shows that the inexperienced group has problems when using the C-VIM. The conventional ticket windows also show better usability performance metrics than the C-VIM. 
From the data processing, the experienced group gives a SUS score of 62, with an acceptability scale of 'marginal low', a grade scale of 'D', and an adjective rating of 'good', while the inexperienced group gives a SUS score of 51, with an acceptability scale of 'marginal low', a grade scale of 'F', and an adjective rating of 'ok'. This shows that both groups give a low score on the system usability scale. The QUIS score of the experienced group is 69.18 and that of the inexperienced group is 64.20. The average QUIS scores below 70 indicate a problem with the user interface. RTA was done to obtain user experience issues when using the C-VIM through interview protocols. The issues obtained were then sorted using the Pareto concept and diagram. The solution proposed in this research is an interface redesign using an activity relationship chart. This method resulted in a better interface, with an average SUS score of 72.25, an acceptability scale of 'acceptable', a grade scale of 'B', and an adjective rating of 'excellent'. The time-on-task indicator of the performance metrics also shows significantly better times with the new interface design. The results of this study show that the C-VIM does not yet offer good performance and user experience.
Keywords: activity relationship chart, commuter line vending machine, system usability scale, usability performance metrics, user experience evaluation
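The SUS scores reported above follow the standard scoring rule: each of the ten 1-5 Likert responses contributes 0-4 points (response minus 1 for the positively worded odd items, 5 minus response for the negatively worded even items), and the sum is multiplied by 2.5 to land on a 0-100 scale. A minimal sketch (the function name is an illustrative assumption):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items are positively worded, even-numbered items
    negatively worded, per the standard SUS scoring rule."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        r - 1 if i % 2 == 0 else 5 - r  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5
```

A respondent answering 3 ("neutral") on every item scores exactly 50, which helps put the reported group scores of 51 and 62 in perspective.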
Procedia PDF Downloads 262
79 Lake Water Surface Variations and Its Influencing Factors in Tibetan Plateau in Recent 10 Years
Authors: Shanlong Lu, Jiming Jin, Xiaochun Wang
Abstract:
The Tibetan Plateau has the largest number of high-elevation inland lakes on the planet. These massive lakes are mostly in a natural state and are little affected by human activities. Their shrinking or expansion can truly reflect regional climate and environmental changes, and they are sensitive indicators of global climate change. However, due to the sparse population of the plateau and its harsh natural conditions, it is difficult to effectively obtain lake change data, which has limited understanding of the temporal and spatial processes of lake water changes and their influencing factors. Using MODIS (Moderate Resolution Imaging Spectroradiometer) MOD09Q1 surface reflectance images as basic data, this study produced an 8-day lake water surface data set of the Tibetan Plateau from 2000 to 2012 at 250 m spatial resolution, with a lake water surface extraction method that combines buffer analysis of lake water surface boundaries with lake-by-lake determination of segmentation thresholds. Based on this dataset, the lake water surface variations and their influencing factors were then analyzed, using as analysis units 4 typical natural geographical zones (Eastern Qinghai and Qilian, Southern Qinghai, Qiangtang, and Southern Tibet) and the watersheds of the 10 largest lakes (Qinghai, Siling Co, Namco, Zhari NamCo, Tangra Yumco, Ngoring, UlanUla, Yamdrok Tso, Har and Gyaring). The accuracy analysis indicates that, compared with the water surface data of 134 sample lakes extracted from 30 m Landsat TM (Thematic Mapper) images, the average overall accuracy of the lake water surface data set is 91.81%, with average commission and omission errors of 3.26% and 5.38%; the results also show a strong linear correlation (R²=0.9991) with the global MODIS water mask dataset, with an overall accuracy of 86.30%; and the lake area difference between the Second National Lake Survey and this study is only 4.74%. 
This study provides a reliable dataset for lake change research on the plateau in the recent decade. The analysis of change trends and influencing factors indicates that the total water surface area of lakes on the plateau showed an overall increase, but only lakes with areas larger than 10 km² had statistically significant increases. Furthermore, lakes with areas larger than 100 km² experienced an abrupt change in 2005. In addition, the annual average precipitation of Southern Tibet and Southern Qinghai experienced significant increasing and decreasing trends, with corresponding abrupt changes in 2004 and 2006, respectively. The annual average temperature of Southern Tibet and Qiangtang showed a significant increasing trend with an abrupt change in 2004. The major reason for the lake water surface variation in Eastern Qinghai and Qilian, Southern Qinghai and Southern Tibet is the change in precipitation, while that for Qiangtang is the temperature variation.
Keywords: lake water surface variation, MODIS MOD09Q1, remote sensing, Tibetan Plateau
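The overall accuracy and the commission and omission errors quoted above are standard confusion-matrix metrics for a binary (water / non-water) classification. A minimal sketch over pixel counts (the function and argument names are illustrative):

```python
def accuracy_metrics(tp, fp, fn, tn):
    """Overall accuracy, commission error and omission error (all in %)
    for the water class, from pixel counts of a binary classification:
    tp = water mapped as water, fp = non-water mapped as water,
    fn = water mapped as non-water, tn = non-water mapped as non-water."""
    total = tp + fp + fn + tn
    overall = 100.0 * (tp + tn) / total       # correctly labelled pixels
    commission = 100.0 * fp / (tp + fp)       # mapped water that is not water
    omission = 100.0 * fn / (tp + fn)         # true water missed by the map
    return overall, commission, omission
```

For example, 90 true water pixels detected, 10 false detections, 10 missed water pixels and 890 correct non-water pixels give 98% overall accuracy with 10% commission and 10% omission error.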
Procedia PDF Downloads 231
78 Starting the Hospitalization Procedure with a Medicine Combination in the Cardiovascular Department of the Imam Reza (AS) Mashhad Hospital
Authors: Maryamsadat Habibi
Abstract:
Objective: Medication errors are avoidable occurrences that can result in inappropriate medication use, patient harm, treatment failure, increased hospital costs and length of stay, and other outcomes that affect both the patient and the healthcare provider. This study aimed to perform medication reconciliation in the cardiovascular ward of Imam Reza Hospital in Mashhad, Iran, and to evaluate the prevalence of medication discrepancies between the best possible medication list created for the patient by the pharmacist and the medication order of the treating physician. Materials & Methods: A cross-sectional study of 97 patients in the cardiovascular ward of the Imam Reza Hospital in Mashhad was conducted from June to September 2021. After giving informed consent and being admitted to the ward, all patients with at least one underlying condition and at least two medications taken at home were included in the study. A medication reconciliation form was used to record patient demographics and medication histories during the first 24 hours of admission, and this information was compared with the physicians' orders. Medication discrepancies between the two lists were then identified and double-checked to separate intentional from unintentional discrepancies. Finally, using SPSS software version 22, the prevalence of medication discrepancies and the relation of the different types of discrepancies to various variables were determined. Results: The average age of the participants in this study was 57.69 ± 15.84 years, with 57.7% men and 42.3% women. Among these patients, 95.9% encountered at least one medication discrepancy, and 58.9% suffered at least one unintentional drug discontinuation. 
Out of the 659 medications registered in the study, 399 (60.54%) showed discrepancies, of which 161 (40.35%) were intentional discontinuations of a medication, 123 (30.82%) were unintentional discontinuations, and 115 (28.82%) were continuations of a medication with a dose adjustment. Additionally, the categories of cardiovascular and gastrointestinal medications were found to have the highest numbers of discrepancies in the current study. Furthermore, there was no correlation between the frequency of medication discrepancies and age, ward, date of visit, or type and number of underlying diseases (P=0.13, P=0.61, P=0.72, P=0.82, and P=0.44, respectively). On the other hand, there was a statistically significant correlation between the prevalence of medication discrepancies and both the number of medications taken at home (P=0.037) and gender (P=0.029). The results of this study revealed that 96% of patients admitted to the cardiovascular unit at Imam Reza Hospital had at least one medication discrepancy, most commonly an intentional drug discontinuation. According to the study's findings, the medication reconciliation method offers great potential for identifying and correcting various medication discrepancies and for avoiding prescription errors among patients admitted to Imam Reza Hospital's cardiovascular ward. It is therefore essential to carry out a precise assessment to achieve the best treatment outcomes and to avoid unintended medication discontinuation, unwanted drug-related events, and interactions between the patient's home medications and those prescribed in the hospital.
Keywords: drug combination, drug side effects, drug incompatibility, cardiovascular department
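The percentage breakdown above is share-of-total arithmetic over the 399 discrepant orders. A minimal sketch (the function and category labels are illustrative; rounding to two decimals reproduces the reported figures to within 0.01):

```python
def discrepancy_breakdown(counts):
    """Percentage share of each discrepancy type among all discrepant
    medication orders, rounded to two decimal places.
    counts: dict mapping discrepancy type -> number of orders."""
    total = sum(counts.values())
    return {kind: round(100.0 * n / total, 2) for kind, n in counts.items()}
```

With the reported counts (161 + 123 + 115 = 399), the shares come out to 40.35%, 30.83% and 28.82%, matching the abstract's figures up to rounding.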
Procedia PDF Downloads 88
77 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review
Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni
Abstract:
Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms of crop responding and adapting to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment. Assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), controlling changes, and irrigated areas mapping. Calculating thresholds of soil water content parameters, crop water use efficiency, and Nitrogen status make irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracies and automate irrigation scheduling. This paper is a review structured by surveying about 100 recent research studies to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, the relation between spectral and thermal reflectance and different features of crop and soil. The other objective is to assess RS indices formed by choosing specific reflectance bands and identifying the correct spectral band to optimize classification techniques and analyze Proximal Optical Sensors (POSs) to control changes. 
The innovation of this paper lies in categorizing the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and crop sensitivity to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea is then to analyze the causes and magnitudes of the errors reported by recent studies for the different approaches within these three parts. Finally, as an overall conclusion, the review decomposes the different approaches into optimized indices, sensor calibration methods, error-prone thresholding and prediction models, and improvements in classification accuracy for mapping changes.
Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing
76 Application of NBR 14861:2011 for the Design of Prestress Hollow Core Slabs Subjected to Shear
Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho
Abstract:
The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. In order to achieve this goal, shear tests were performed on hollow core slabs 26.5 cm thick, with and without a concrete cover of 5 cm, with no cores filled, with two cores filled, and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes currently used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that this provision is much more accurate for the shear strength of hollow core slabs than the NBR 6118 code. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard: the suggested coefficient is actually double the required value. The ABNT soon afterwards issued an amendment to NBR 14861:2011 with the necessary corrections. The tests for the present study confirmed that concrete filling the cores contributes to increasing the shear strength of hollow core slabs. For slabs 26.5 cm thick, however, the quantity should be limited to a maximum of two filled cores, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice.
After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The conclusions of the research are: the experimental results have shown that the failure mechanism of hollow core slabs can be predicted using the strut-and-tie procedure within a good range of accuracy; the Brazilian standard needed correction for the duplicated coefficient on σcp (in NBR 14861:2011); and the number of cores (holes) to be filled with concrete to increase the shear resistance of the slab should be limited. It is also suggested to increase the number of test results for 26.5 cm thick slabs, and to cover a larger range of slab thicknesses, in order to obtain shear test results with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete cover reinforced with welded steel mesh, for comparison with the theoretical values calculated by the new revision of NBR 14861:2011.
Keywords: prestressed hollow core slabs, shear, strut-and-tie models
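The effect of the duplicated coefficient can be illustrated with a generic shear-tension capacity expression of the form used for hollow core slab webs, V ∝ (I·bw/S)·√(fctd² + k·σcp·fctd). The sketch below is illustrative only: the section properties and the two coefficient values are made-up numbers, not the actual coefficients or equations of NBR 14861:2011, and it shows only that halving the coefficient on σcp lowers the predicted capacity:

```python
from math import sqrt

def shear_tension_capacity(I, bw, S, fctd, sigma_cp, k):
    """Generic shear-tension capacity of a hollow core slab web, in kN.

    I: second moment of area (mm^4), bw: total web width (mm),
    S: first moment of area (mm^3), fctd: design tensile strength (MPa),
    sigma_cp: prestress at the centroid (MPa), k: code coefficient on sigma_cp.
    """
    return (I * bw / S) * sqrt(fctd**2 + k * sigma_cp * fctd) / 1e3

# Hypothetical section: a coefficient erroneously doubled (k = 0.30 instead
# of 0.15) overpredicts the capacity; the corrected value is lower.
v_err = shear_tension_capacity(1.8e9, 300, 7.5e6, 1.6, 4.0, k=0.30)
v_fix = shear_tension_capacity(1.8e9, 300, 7.5e6, 1.6, 4.0, k=0.15)
print(round(v_err, 1), round(v_fix, 1))
```

The direction of the change mirrors the committee's finding: the erroneous coefficient makes the code prediction unconservative relative to the test results.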
75 Influence of Temperature and Immersion on the Behavior of a Polymer Composite
Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli
Abstract:
This study presents an experimental and theoretical work conducted on a polyphenylene sulfide reinforced with 40 wt% of short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to lightweight automotive parts, and the replacement of metallic parts by thermoplastics now extends to under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to develop a reliable design of parts. An experimental campaign of tensile tests was carried out at different temperatures and for various glycol proportions in the cooling liquid, for monotonic and cyclic loadings on a neat and a reinforced PPS. The results of these tests highlighted some of the main physical phenomena occurring under these demanding hydrothermal loading conditions. Indeed, the tests showed that temperature and cooling-liquid aging can affect the mechanical behavior of the material in several ways: the more water the cooling liquid contains, the more the mechanical behavior is affected. It was observed that the PPS showed a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, explaining this dominant sensitivity. Two kinds of behavior were noted: elasto-plastic below the glass transition temperature and visco-pseudo-plastic above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for the PPS and can also be important below this temperature, mostly under cyclic conditions and when the stress rate is low.
Finally, it was observed that loading this composite at high temperatures decreases the benefit of the fibers. A new phenomenological model was then built to take these experimental observations into account. This model allows the prediction of the evolution of the mechanical properties as a function of the loading environment, with a reduced number of parameters compared to previous studies. It was also shown that the presented approach enables the description and prediction of the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was identified for the PPS, allowing aging effects to be considered within the proposed model. Lastly, a limit on the accuracy reachable by any model using this data set was determined by applying an artificial-intelligence-based model, allowing a comparison between AI-based and phenomenological models.
Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical
74 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
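Error statistics of the kind reported above (a mean error and the fraction of errors below a threshold, here 0.82 m) are computed directly from per-sample positioning errors. The sketch below shows that computation; the error values are synthetic placeholders, not the paper's data:

```python
def error_summary(errors):
    """Return (mean error, threshold -> fraction of errors below it)
    for a list of per-sample positioning errors in metres."""
    mean = sum(errors) / len(errors)
    frac_below = lambda t: sum(e < t for e in errors) / len(errors)
    return mean, frac_below

# Synthetic example errors (m), standing in for a real evaluation trace.
errs = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.6, 1.0]
mean, frac_below = error_summary(errs)
print(round(mean, 3), frac_below(0.82))  # mean error and P(error < 0.82 m)
```

On a real evaluation, `errs` would hold the Euclidean distances between estimated and ground-truth positions for each test fingerprint.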
73 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models
Authors: Haya Salah, Srinivas Sharan
Abstract:
Healthcare facilities use appointment systems to schedule appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physician time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction; on the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. A good estimate of consultation duration therefore has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. This study therefore aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) from potential features that influence it. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four outpatient clinics located in central Pennsylvania, USA; in addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of their predictive performance.
The findings of this study indicate that ML algorithms can predict provider service time with superior accuracy. While the clinic's current experience-based estimation of appointment duration resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance, with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time
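The model comparison above rests on the mean absolute percentage error (MAPE). A minimal sketch of how such figures would be computed from actual and predicted durations follows; the durations and both prediction sets are made-up examples, not the study's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical consultation durations (minutes) and two sets of predictions.
actual   = [20, 30, 15, 40, 25]
baseline = [25, 24, 20, 30, 32]   # e.g. experience-based estimates
ml_model = [21, 28, 16, 37, 26]   # e.g. gradient boosting predictions

print(round(mape(actual, baseline), 2), round(mape(actual, ml_model), 2))
```

The comparison in the abstract is exactly of this form: the same held-out visits scored once with the scheduler's estimates and once with each model's predictions.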
72 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling
Authors: Alastair Hales, Xi Jiang
Abstract:
Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. 
Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated from a single blade, by 17.7% at P = 8A; the flow rate enhancement at the same pitch is 18.6%. By comparison, in-phase oscillation at the same pitch yields 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to a single blade. This optimal pitch, comparable to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, the counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area within a channel flow application. The quantitative values deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.
Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics
71 Social Economic Factors Associated with the Nutritional Status of Children in Western Uganda
Authors: Baguma Daniel Kajura
Abstract:
The study explores the socio-economic, health-related, and individual factors that influence the breastfeeding habits of mothers and their effect on the nutritional status of their infants in the Rwenzori region of Western Uganda. A cross-sectional research design was adopted, involving self-administered questionnaires, interview guides, and focused group discussion guides to assess the extent to which socio-demographic factors associated with breastfeeding practices influence child malnutrition. Using this design, data was collected from 276 of the 318 selected mother-infant pairs over a period of ten days. The sample size was estimated using the Kish Leslie formula for cross-sectional studies, N = Zα² P(1 − P)/δ², where N is the estimated number of mother-infant pairs; P is the assumed true population prevalence of malnutrition among mother-infant pairs, P = 29.3%; 1 − P = 70.7% is the probability of a mother-infant pair not having malnutrition; Zα = 1.96 is the standard normal deviate corresponding to the 95% confidence interval; and δ = 5% is the absolute error between the estimated and true population prevalence of malnutrition. The calculated sample size is N = 1.96 × 1.96 × (0.293 × 0.707)/0.05² ≈ 318 mother-infant pairs. Demographic and socio-economic data for all mothers were entered into Microsoft Excel and then exported to STATA 14 (StataCorp, 2015). Anthropometric measurements were taken for all children by the researcher and trained assistants, who physically weighed the children; immunization cards were used to obtain the children's ages. Bivariate logistic regression analysis was used to assess the relationship between socio-demographic factors associated with breastfeeding practices and child malnutrition.
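The Kish Leslie calculation above can be reproduced directly; this sketch uses the abstract's own inputs (Zα = 1.96, P = 0.293, δ = 0.05):

```python
def kish_leslie_n(z, p, delta):
    """Kish Leslie sample size for a cross-sectional prevalence study:
    N = Z^2 * P * (1 - P) / delta^2, rounded to the nearest whole pair."""
    return round(z * z * p * (1 - p) / delta ** 2)

# Inputs from the study: 95% CI (Z = 1.96), prevalence 29.3%, absolute error 5%.
print(kish_leslie_n(1.96, 0.293, 0.05))  # 318
```

The raw value is 3.8416 × 0.2072 / 0.0025 ≈ 318.3, which rounds to the 318 pairs reported.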
Multivariable regression analysis was used to determine whether there are true relationships between the socio-demographic factors associated with breastfeeding practices, as independent variables, and child stunting and underweight, as dependent variables. Descriptive statistics on the background characteristics of the mothers were generated and presented in frequency distribution tables. Frequencies and means were computed, and the results were presented in tables; the distribution of stunting and underweight among infants was then determined by socio-economic and demographic factors. Findings reveal that children of mothers who used milk substitutes besides breastfeeding are over two times more likely to be stunted than those whose mothers exclusively breastfed them: feeding children milk substitutes instead of breastmilk predisposes them to both stunting and underweight. Children of mothers between 18 and 34 years of age are less likely to be underweight, as are those breastfed over ten times a day. The study further reveals that 55% of the children were underweight and 49% were stunted. Of the underweight children, equal numbers (58/151) were either mildly or moderately underweight (38%), and 23% (35/151) were severely underweight. Empowering community outreach programs through increased knowledge of, and increased access to, services for the integrated management of child malnutrition is crucial to curbing child malnutrition in rural areas.
Keywords: infant and young child feeding, breastfeeding, child malnutrition, maternal health
70 Event-Related Potentials and Behavioral Reactions during Native and Foreign Languages Comprehension in Bilingual Inhabitants of Siberia
Authors: Tatiana N. Astakhova, Alexander E. Saprygin, Tatyana A. Golovko, Alexander N. Savostyanov, Mikhail S. Vlasov, Natalia V. Borisova, Alexandera G. Karpova, Urana N. Kavai-ool, Elena D. Mokur-ool, Nikolay A. Kolchanov, Lubomir I. Aftanas
Abstract:
The study investigates brain activity in bilingual inhabitants of Siberia. We compared behavioral reactions and event-related potentials in Turkic-speaking inhabitants of Siberia (Tuvinians and Yakuts) and Russians. 63 healthy aboriginals of the Tyva Republic, 29 inhabitants of the Sakha (Yakutia) Republic, and 55 Russians from Novosibirsk participated in the study. All participants were healthy, right-handed, matched on age and sex, and students of different universities. EEGs were recorded during the solving of linguistic tasks, in which participants had to find a syntax error in written sentences. There were four groups of sentences: Russian, English, Tuvinian, and Yakut. All participants completed the tasks in Russian and English; additionally, Tuvinians and Yakuts completed the tasks in Tuvinian or Yakut, respectively. For Russians, EEGs were recorded using 128 channels according to the extended International 10-10 system, and the signals were amplified using Neuroscan (USA) amplifiers. For Tuvinians and Yakuts, EEGs were recorded using 64 channels and Brain Products (Germany) amplifiers. In all groups, 0.3-100 Hz analog filtering and a 1000 Hz sampling rate were used. Response speed and recognition accuracy were used as parameters of the behavioral reactions, and the event-related potential (ERP) components P300 and P600 were used as indicators of brain activity. The behavioral reactions showed that in Russians, the response speed was faster and the accuracy of solving tasks higher for Russian than for English. The P300 peaks in Russians were higher for English, while the P600 peaks in the left temporal cortex were higher for Russian. Both Tuvinians and Yakuts showed no difference in accuracy between tasks in Russian and in their respective national languages; however, their response speed was faster for tasks in Russian than for tasks in their national language.
Tuvinians and Yakuts showed poor accuracy in English, but their response speed was higher for English than for Russian and their national languages; this can be explained by the fact that they did not deliberate and gave random answers for English. In Tuvinians, the P300 and P600 amplitudes and cortical topology were the same for Russian and Tuvinian and different for English. In Yakuts, the P300 and P600 amplitudes and ERP topology for Russian were the same as those of Russians for Russian, while the brain reactions during Yakut and English comprehension did not differ and resembled foreign language comprehension; only Russian comprehension resembled native language comprehension. We found that the Tuvinians process both Russian and Tuvinian as native languages and English as a foreign language, whereas the Yakuts process both English and Yakut as foreign languages and only Russian as a native language. According to the questionnaire, both Tuvinians and Yakuts use their national language as a spoken language but do not use it for writing. This may well be the reason the Yakuts perceive written Yakut as a foreign language while perceiving written Russian as native.
Keywords: EEG, ERP, native and foreign languages comprehension, Siberian inhabitants
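P300 and P600 amplitudes of the kind compared above are commonly quantified as the mean voltage within a post-stimulus time window of a stimulus-locked epoch. The sketch below illustrates that step on a synthetic single-channel epoch; the 1000 Hz sampling rate matches the abstract, but the window limits, deflection shape, and amplitudes are made up for illustration:

```python
def mean_amplitude(epoch, fs, t_start, t_end):
    """Mean ERP amplitude (same units as epoch) over the window
    [t_start, t_end) seconds after stimulus onset, for a single-channel
    epoch sampled at fs Hz with t = 0 at the first sample."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    window = epoch[i0:i1]
    return sum(window) / len(window)

# Synthetic 1 s epoch at 1000 Hz: flat baseline with a positive deflection
# between 250 and 350 ms, mimicking a P300-like component (microvolts).
fs = 1000
epoch = [0.0] * fs
for i in range(250, 350):
    epoch[i] = 5.0

print(mean_amplitude(epoch, fs, 0.25, 0.35))  # 5.0
```

In practice this is computed per condition (language) and per electrode on epochs averaged over trials, which is how the group differences in P300 and P600 reported above are obtained.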
68 A Comparison of Two and Three Dimensional Motion Capture Methodologies in the Analysis of Underwater Fly Kicking Kinematics
Authors: Isobel M. Thompson, Dorian Audot, Dominic Hudson, Martin Warner, Joseph Banks
Abstract:
Underwater fly kick is an essential skill in swimming, which can have a considerable impact on overall race performance in competition, especially in sprint events. Reduced wave drag acting on the body under the surface means that the underwater fly kick is potentially the fastest phase of the swimmer's race. It is therefore critical to understand fly kicking techniques and to determine the biomechanical factors involved in performance. Most previous studies assessing fly kick kinematics have focused on two-dimensional analysis; the three-dimensional elements of underwater fly kick technique are therefore not well understood. The studies that have investigated fly kicking techniques using three-dimensional methodologies have not reported full three-dimensional kinematics, choosing to focus on one or two joints, and no direct comparison has been completed of the results obtained using two-dimensional and three-dimensional analysis, or of how these different approaches might affect the interpretation of subsequent results. The aim of this research is to quantify the differences in kinematics observed in underwater fly kicks obtained from both two- and three-dimensional analyses of the same test conditions. To achieve this, a six-camera underwater Qualisys system was used to develop an experimental methodology suitable for assessing the kinematics of swimmers' starts and turns. The cameras, capturing at a frequency of 100 Hz, were arranged along the side of the pool, spaced equally up to 20 m, creating a capture volume of 7 m × 2 m × 1.5 m. Within the measurement volume, error levels were estimated at 0.8%. Prior to the pool trials, participants completed a landside calibration in order to define joint center locations, as certain markers became occluded once the swimmer assumed the underwater fly kick position in the pool.
Thirty-four reflective markers were placed on key anatomical landmarks, 9 of which were then removed for the pool-based trials. The fly kick swimming conditions included in the analysis were: maximum effort prone, 100 m pace prone, 200 m pace prone, 400 m pace prone, and maximum pace supine. All trials were completed from a push start to 15 m to ensure consistent kick cycles were captured. Both two-dimensional and three-dimensional kinematics were calculated from the joint locations, and the results are compared. Key variables reported include kick frequency and kick amplitude, as well as full angular kinematics of the lower body, and the key differences in these variables between the two-dimensional and three-dimensional analyses are identified. Internal rotation (up to 15°) and external rotation (up to −28°) were observed using three-dimensional methods, as were abduction (5°) and adduction (15°); these motions are not observed in the two-dimensional analysis. The results also give an indication of the different techniques adopted by swimmers at various paces and orientations. This research provides evidence of the strengths of both two-dimensional and three-dimensional motion capture methods in underwater fly kick, highlighting limitations which could affect the interpretation of results from both methods.
Keywords: swimming, underwater fly kick, performance, motion capture
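The discrepancy between two- and three-dimensional joint angles can be illustrated with a simple vector calculation: projecting markers onto the sagittal plane discards out-of-plane motion, so a segment with a mediolateral component changes its apparent angle. The marker coordinates below are contrived to show the effect and are not measured data:

```python
from math import acos, degrees, sqrt

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c,
    for points of any dimension (2D or 3D)."""
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = sqrt(sum(ui * ui for ui in u)) * sqrt(sum(vi * vi for vi in v))
    return degrees(acos(dot / norm))

# Contrived hip (a), knee (b), ankle (c) markers in (x, y, z); y is mediolateral.
a, b, c = (1.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
angle_3d = joint_angle(a, b, c)
# A sagittal-plane (x, z) analysis simply drops the y coordinate.
angle_2d = joint_angle((a[0], a[2]), (b[0], b[2]), (c[0], c[2]))
print(round(angle_3d, 1), round(angle_2d, 1))  # 45.0 0.0
```

Here the thigh segment has a large mediolateral component, so the true 3D knee angle (45°) collapses to 0° in the 2D projection, the same mechanism by which rotation and abduction/adduction vanish from two-dimensional analyses.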
67 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model of a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-DOF lumped-parameter structural dynamics model coupled with quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equations (energetic variation approach), the authors derive, and subsequently validate, this model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of Theodorsen's theory to thin airfoils undergoing pulsating flows, is used. Specifically, in this work the authors restrict that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of the relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by a radius R = 61.5 m and a mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88° is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller.
In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied thanks to their high precision, simple implementation, and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it consists of a control block Crc(s), usually added to an existing feedback control system. The control block contains a free time-delay system e^(−τs) in a positive feedback loop, and a low-pass filter q(s). It should be noticed that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum phase systems, such as flexible structures, because it allows the iterative algorithm to reach convergence also at high frequencies. Notice that, in our case study, the shift of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a PD controller C(s), in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with that of an industrial PI controller. In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s).
Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.
Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems
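The plug-in structure described above can be sketched in discrete time. The toy simulation below is not the authors' 8-dof aeroelastic model: a first-order plant tracks a periodic reference with a proportional term plus a repetitive memory that replays and corrects the control input one period back, with a zero-phase 3-tap average standing in for the stabilizing low-pass filter q(s). The tracking error shrinks from the first period to the last:

```python
import math

N = 20             # samples per period (the delay T in discrete time)
periods = 40
kp, kr = 1.0, 0.5  # feedback and repetitive learning gains

y = 0.0
mem = [0.0] * N    # repetitive memory: one period of control corrections
err = [0.0] * N    # errors stored one period ago
errors = []
for k in range(N * periods):
    i = k % N
    r = math.sin(2 * math.pi * i / N)      # periodic reference
    e = r - y
    # low-pass filter q: zero-phase 3-tap average over the stored period,
    # needed (as noted above) to keep the positive-feedback memory stable
    u_rc = 0.25 * mem[(i - 1) % N] + 0.5 * mem[i] + 0.25 * mem[(i + 1) % N]
    mem[i] = u_rc + kr * err[i]            # learn from last period's error
    err[i] = e
    u = u_rc + kp * e                      # plug-in repetitive + P action
    y = 0.9 * y + 0.1 * u                  # simple first-order plant
    errors.append(e)

rms_first = math.sqrt(sum(v * v for v in errors[:N]) / N)
rms_last = math.sqrt(sum(v * v for v in errors[-N:]) / N)
```

The authors' phase-shifted delay e^(−(T−γk)s) would correspond to indexing the stored signals γk samples ahead, which is useful when the plant adds phase lag at high frequencies.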
Procedia PDF Downloads 249
66 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System
Authors: Masoud Mirzaee, Ghobad Behzadi Pour
Abstract:
An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, since the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories are filled with compressed nitrogen at a recommended pressure that supports the aircraft's weight on the ground, provides a mechanism for controlling the aircraft during taxi, takeoff, and landing, and provides traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, tire pressure should be adjusted and inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of a specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. The existence of an intelligent system that adjusts the aircraft tire pressure based on weight, load, temperature, and weather conditions of the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption, and further mitigating the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a 1,800 psi nitrogen bottle, and distribution lines.
The nitrogen bottle's inlet and outlet valves are installed in the main wheel landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Control and monitoring of the nitrogen are performed by a computer, which makes its adjustments according to calculations on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo loads and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and stresses imposed on the aircraft body.
Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure
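The temperature part of such an adjustment follows from elementary gas physics. The sketch below applies Gay-Lussac's law at constant volume; it is a simplified illustration only, since the ITPRS described above also factors in load, fuel, and wind, whose computations are not specified in the abstract:

```python
def pressure_at_destination(p_origin_psi, t_origin_c, t_dest_c):
    """Constant-volume nitrogen pressure after an ambient temperature
    change (Gay-Lussac's law: p2 = p1 * T2 / T1, absolute temperatures)."""
    t1_k = t_origin_c + 273.15
    t2_k = t_dest_c + 273.15
    return p_origin_psi * t2_k / t1_k

# A tire serviced to 200 psi at a 30 °C airport arrives where it is -10 °C:
p_cold = pressure_at_destination(200.0, 30.0, -10.0)  # under-inflated
```

The resulting drop of roughly 13% exceeds the 5 percent limit mentioned above, which is why re-inflation at the colder airport is required.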
Procedia PDF Downloads 249
65 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89
Authors: A. Chatel, I. S. Torreguitart, T. Verstraete
Abstract:
The paper deals with a single-point optimization of the LS89 turbine using an adjoint optimization, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each step. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed.
However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed. The trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness
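Parametric effectiveness, as used above, rates how much of the adjoint-dictated shape change the current CAD parameters can reproduce. A minimal numerical sketch (a least-squares projection formulation of my own, not necessarily the paper's exact definition) projects the adjoint sensitivity vector onto the span of the parameter derivatives:

```python
import numpy as np

def parametric_effectiveness(J, g):
    """Fraction of the adjoint shape change g (one entry per surface point)
    that lies in the span of the CAD parameter derivatives (columns of J).
    Returns 1.0 when the parameters can reproduce g exactly, 0.0 when
    they cannot move the surface in that direction at all."""
    a, *_ = np.linalg.lstsq(J, g, rcond=None)
    residual = g - J @ a
    return 1.0 - float(residual @ residual) / float(g @ g)

# Two parameters moving a 3-point surface; g is partly outside their span.
J = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
g = np.array([1.0, 1.0, 1.0])
eff = parametric_effectiveness(J, g)  # one of three components unreachable
```

In this picture, knot insertion adds columns to J, and the differential evolution step chooses knot locations that maximize this measure.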
Procedia PDF Downloads 110
64 Loss Quantification Archaeological Sites in Watershed Due to the Use and Occupation of Land
Authors: Elissandro Voigt Beier, Cristiano Poleto
Abstract:
The main objective of the research is to assess the loss, through the quantification of material culture (archaeological fragments), in rural areas and sites explored economically by machining on seasonal and permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of different micro basins that differ in area, ranging between 1,000 m² and 10,000 m² (respectively the largest and the smallest), all with a large number of occurrences and outcrop locations of archaeological material and high density in an intense farming environment. The first stage of the research aimed to identify the dispersion of points of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin. A concise bibliography on the topic in the region was used to support a theoretical understanding of the ancient landscape and of the occupation preferences of ancient historical peoples, relating the settlements to the practices observed in the field. The mapping was followed by the development of cartographic products of the land elevation for the region. These products contributed to the understanding of the distribution of the materials; the definition and scope of the dispersed material; and the turnover of in situ material by mechanization as a result of human activities. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient historical occupation with the current human occupation.
The third stage of the project consists of the systematic collection of archaeological material without alteration of or interference in the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, followed by cleaning according to the previously described methodology, measurement, and quantification. Approximately 15,000 archaeological fragments belonging to different periods of the ancient history of the region were identified, all collected outside of their environmental and historical context and considerably altered and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain and their true association with the ancient history), while attributes such as the individual lithology and functionality of each object were disregarded. As preliminary results, we can point out the displacement of materials by heavy mechanization and the consequent soil disturbance, processes that generate the transport of archaeological materials. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. It is expected, by this process, to reach a reliable model of high accuracy which can be applied to an archaeological site of lower density without encountering a significant error.
Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land
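The density mapping described above can be sketched as a simple gridded aggregation of the GPS-plotted finds. All coordinates and weights below are hypothetical illustrations, not project data:

```python
import numpy as np

# Hypothetical GPS positions (metres, local grid) and masses (g) of finds.
x = np.array([12.0, 15.5, 14.2, 80.1, 82.3, 81.7, 40.0])
y = np.array([5.0, 6.1, 4.8, 30.2, 31.0, 29.8, 50.0])
mass = np.array([8.5, 12.0, 3.2, 40.1, 22.7, 15.0, 5.5])

# 10 m x 10 m cells over a 100 m x 100 m basin: fragment counts and
# total recovered mass per cell, the raw material for a density map.
edges = np.arange(0.0, 101.0, 10.0)
counts, _, _ = np.histogram2d(x, y, bins=[edges, edges])
mass_map, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=mass)
```

Comparing such per-cell counts between surveys before and after mechanized cultivation is one way a loss estimate could be framed.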
Procedia PDF Downloads 277
63 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis
Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García
Abstract:
Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. 
As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus in which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships.
Simultaneously, these can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis
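A deliberately simplified stand-in for one step of the blended-reading pipeline is sketched below: filtering a corpus by topic and proposing a coding stub from the actors mentioned. Real NER and entity-linking tools replace the toy gazetteer and regular expression used here, and all names are invented:

```python
import re

# Toy gazetteer standing in for named entity recognition output.
ACTORS = {"Ministry of Health", "MedTech AG"}
TOPIC = re.compile(r"\b(data protection|digiti[sz]ation|health)\b", re.IGNORECASE)

def propose_codings(documents):
    """Keep documents matching the discourse topic and attach the actors
    they mention, as a coding proposal to be refined by manual (closed)
    reading."""
    proposals = []
    for doc in documents:
        if TOPIC.search(doc):
            actors = sorted(a for a in ACTORS if a in doc)
            proposals.append({"text": doc, "actors": actors})
    return proposals
```

The manual closed-reading pass then confirms, rejects, or refines each proposal, which is the feedback loop the abstract describes.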
Procedia PDF Downloads 226
62 Two-wavelength High-energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography
Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko
Abstract:
The development of medical optoacoustic tomography using human blood as an endogenous contrast agent is constrained by the lack of reliable, easy-to-use, and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. The titanium-sapphire and alexandrite lasers or optical parametric light oscillators currently used do not provide the required stable output characteristics, are structurally complex, and their cost is up to half the price of diagnostic optoacoustic systems. Here we develop lasers based on Cr:LiCaAlF6 crystals, which are free of the abovementioned disadvantages and provide intense tunable laser radiation, with pulses in the tens-of-nanoseconds range, at the specific absorption bands of oxy- (~840 nm) and deoxyhemoglobin (~757 nm) in the blood. Cr:LiCAF (c=3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P=1500 hPa) [3]. The laser elements have a cylindrical shape with a diameter of 8 mm and a length of 90 mm. The direction of the optical axis of the crystal was normal to the cylinder generatrix, which provides the π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other with an error of less than 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with a dual xenon flashlamp pumping scheme in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer, and a Q-switch cell, makes it possible to operate sequentially in a circle (one 50 ns laser pulse after another) at wavelengths of 757 and 840 nm.
The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J at 180 μs) to equalize the laser radiation intensity at these wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration of the laser pulses and their energy allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design of the laser will most likely improve the output properties and realize a better spatial resolution for medical multispectral optoacoustic tomography systems.
Keywords: medical optoacoustic, endogenic contrast agent, multiwavelength tunable pulse lasers, MOPA laser system
Procedia PDF Downloads 101
61 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)
Authors: Eric Pla Erra, Mariana Jimenez Martinez
Abstract:
While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have suffered due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves the capabilities of standard decision tree models in both speed and reduction of memory consumption, while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model – called “resilient LGBM” – will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model's accuracy during extreme events such as COVID-19 lockdowns.
The results for the LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models' performance will be evaluated over a set of real households' hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
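The abstract does not specify which concept-drift detector the "resilient LGBM" uses; one standard choice for monitoring a stream of forecast errors is the Page-Hinkley test, sketched below. A sustained rise in day-ahead error, such as the one triggered by a lockdown, pushes the cumulative statistic above the threshold and can trigger model re-training:

```python
class PageHinkley:
    """Page-Hinkley change detector over a stream of forecast errors."""

    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # tolerated drift magnitude
        self.threshold = threshold  # alarm threshold
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0
        self.cum_min = 0.0

    def update(self, error):
        """Feed one absolute forecast error; return True if drift detected."""
        self.n += 1
        self.mean += (error - self.mean) / self.n   # running mean of errors
        self.cum += error - self.mean - self.delta  # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.threshold

detector = PageHinkley()
pre_pandemic = [detector.update(0.1) for _ in range(100)]  # stable errors
lockdown = [detector.update(2.0) for _ in range(50)]       # errors jump
```

The `delta` and `threshold` values here are illustrative; in practice they are tuned to the error scale of the household series.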
Procedia PDF Downloads 105
60 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density
Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany
Abstract:
Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when using an opaque metal top electrode, because the light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention. This strategy is pursued to extend the light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect the charge generation and extraction in ternary solar cells. Consequently, the device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, and the concentration of the guest material and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient and semitransparent ternary blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and a NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased to 30 nm to increase the AVT value to 75%.
Therefore, we resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode that was applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve the charge collection and the contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach for device optimization is the construction of a ternary blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, suitable morphologies for efficient light-to-current conversion can be identified. We compare experimental OSC data with these predictions.
Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination
Procedia PDF Downloads 263
59 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics
Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca
Abstract:
The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of high fructose corn syrup (HFCS) to samples of the honey pool. 237 samples were used: 108 of them were authentic honey, and 129 samples corresponded to honey adulterated with HFCS between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, implemented in MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) was chosen together with Partial Least Squares Linear Discriminant Analysis (PLS-DA).
Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples. With the model calibrated on the training samples, we proceeded to study the validation samples. The calibrated model that combines the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast, and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
Keywords: adulteration, multivariate analysis, potential functions, regression
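The SNV pretreatment mentioned above is simple to state explicitly: each spectrum is centred and scaled by its own mean and standard deviation, which suppresses multiplicative scatter effects between samples. A minimal NumPy sketch (the spectra are illustrative, not the study's data):

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: scale each spectrum (row) to zero mean
    and unit standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy absorbance spectra with different baseline and scale.
corrected = snv([[0.11, 0.52, 0.93], [0.24, 1.08, 1.86]])
```

After SNV, the variable selection (ACOGASS) and classification (PF plus PLS-DA) steps operate on the corrected rows.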
Procedia PDF Downloads 125
58 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work
Authors: Anna-Lisa Osvalder, Jonas Borell
Abstract:
There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and accurate size support proper use, while discomfort, misfit, and difficulties in understanding how the PPE should be handled inhibit correct usage. The need to wear several pieces of protective equipment simultaneously can also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used for construction work. Correct usage was analysed as guessability, i.e., human perceptions of how to don, adjust, use, and doff the equipment, and whether it was used as intended. The PPE tested, individually or in combinations, comprised a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothes, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Then, usability tests were conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective estimations were performed. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but with minor severity, mostly causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, where some could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, the goggles, together with the face mask, caused pressure, chafing at the nose, and heat rash on the face. This combination also limited the field of vision.
The helmet, in combination with the goggles and ear protectors, did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability of how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or worn on the skeletal bones. Discomfort occurred when the straps were tightened too much. Not all straps could be adjusted to some body constitutions, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can lead to misuse, non-use, or reduced performance. If people who are not regular users are to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and increased possibilities for individual adjustment. The results from this study can serve as a basis for re-design ideas for PPE, especially when used in combinations.
Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability
Procedia PDF Downloads 875
7 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating grip force in slip-control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used increasingly across application fields, enabling the implementation of computationally complex algorithms, as is the case in tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, the stage to which they have been least applied. This work presents a hardware implementation of a model-driven method reported in the literature for reconstructing the contact forces of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of that model, the proposed implementation parallelizes the tasks behind the matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. The work exploits the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying suitable algorithm-parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, with low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10 × 10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® XCZU9EG-2FFVB1156 MPSoC platform, which allows force vectors to be reconstructed in a scalable way from information captured by tactile sensor arrays of up to 48 × 48 taxels using various transduction technologies. The proposed implementation achieves a roughly 180-fold reduction in estimation time compared to software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions, together with the triaxial reconstruction of forces, adequately recovers the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic-skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
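The per-taxel reconstruction step described in the abstract can be sketched in software before mapping it to hardware. The following is a minimal illustration only, not the authors' model: `reconstruct_forces`, the gradient-based feature vector, and the calibration matrix are assumptions for demonstration. It shows the batched, per-taxel structure (a least-squares solve per taxel over a shared matrix) that makes the computation amenable to FPGA parallelization.

```python
import numpy as np

def reconstruct_forces(stress_map, calib):
    """Estimate a 3-component force vector (fx, fy, fz) per taxel from a
    normal-stress map, via a least-squares fit against a calibration matrix.
    stress_map: (rows, cols) array of normal stress readings.
    calib: hypothetical (3, 3) matrix relating local stress gradients and
           magnitude to the force components at each taxel.
    """
    rows, cols = stress_map.shape
    # Tangential cues: finite-difference gradients of the stress map
    gy, gx = np.gradient(stress_map)
    # Per-taxel feature vectors, shape (N, 3): [grad_x, grad_y, stress]
    feats = np.stack([gx.ravel(), gy.ravel(), stress_map.ravel()], axis=1)
    # Batched least squares: solve calib @ f = feature for every taxel at once
    forces, *_ = np.linalg.lstsq(calib, feats.T, rcond=None)
    return forces.T.reshape(rows, cols, 3)

# Toy 10 x 10 array with a Hertz-like pressure bump in the centre
y, x = np.mgrid[0:10, 0:10]
stress = np.exp(-((x - 4.5) ** 2 + (y - 4.5) ** 2) / 4.0)
F = reconstruct_forces(stress, np.eye(3))
print(F.shape)  # (10, 10, 3): one force vector per taxel
```

Because each taxel's solve shares the same small matrix and is independent of its neighbours, the loop body maps naturally onto replicated hardware units, which is the kind of parallelization the abstract describes.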
Procedia PDF Downloads 195