Search results for: Average Power dissipation
467 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in liquid detergent production, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are therefore of prime importance for maintaining high standards and reducing waste and costs. Defect detection is usually performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing machine learning algorithms for defect characterization in the liquid detergent manufacturing process. The performance of several machine learning models, namely Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks, was tested on the detection and classification of defects such as incorrect viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study draws on a rich dataset of more than 100,000 samples covering defect types and production parameters, and further includes information from real-time sensor data, imaging technologies, and historical production records. The results show that the optimized machine learning models significantly improve defect detection compared to traditional methods. The CNNs, for instance, reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which also reduced false positives by about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects such as viscosity and color variations.
These performance metrics represent a major leap in defect detection accuracy over the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models accelerate defect characterization: with real-time data processing, detection time drops from an average of 3 minutes for manual inspection to under 15 seconds. This time saving is combined with a 25% reduction in production downtime through proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring also enables predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. The optimization of machine learning algorithms for defect characterization thus offers liquid detergent manufacturers scalability, efficiency, improved operational performance, and higher levels of product quality. More generally, this method could be applied across the fast-moving consumer goods industry, leading to improved quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
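The hyperparameter-tuning step described in this abstract can be sketched in a few lines. This is a minimal illustration only: the synthetic features stand in for the study's sensor and imaging data, and the grid values are assumptions, not the parameters actually used.

```python
# Minimal sketch of SVM hyperparameter tuning for defect classification.
# The synthetic dataset below is a stand-in for the study's 100,000-sample
# corpus of sensor features; grid values are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for viscosity/color sensor features (binary label: defect or not)
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grid search over regularization strength and kernel, one common tuning route
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}, cv=3)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)  # held-out accuracy of the best configuration
```

The same cross-validated search applies unchanged to the tree-based models mentioned above; only the estimator and parameter grid change.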
Procedia PDF Downloads 164
466 A Qualitative Exploration of the Sexual and Reproductive Health Practices of Adolescent Mothers from Indigenous Populations in Ratanak Kiri Province, Cambodia
Authors: Bridget J. Kenny, Elizabeth Hoban, Jo Williams
Abstract:
Adolescent pregnancy presents a significant public health challenge for Cambodia. Despite declines in the overall fertility rate, the adolescent fertility rate is increasing. Adolescent pregnancy is particularly problematic in the Northeast provinces of Ratanak Kiri and Mondul Kiri where 34 percent of girls aged between 15 and 19 have begun childbearing; this is almost three times Cambodia’s national average of 12 percent. Language, cultural and geographic barriers have restricted qualitative exploration of the sexual and reproductive health (SRH) challenges that face indigenous adolescents in Northeast Cambodia. The current study sought to address this gap by exploring the SRH practices of adolescent mothers from indigenous populations in Ratanak Kiri Province. Twenty-two adolescent mothers, aged between 15 and 19, were recruited from seven indigenous villages in Ratanak Kiri Province and asked to participate in a combined body mapping exercise and semi-structured interview. Participants were given a large piece of paper (59.4 x 84.1 cm) with the outline of a female body and asked to draw the female reproductive organs onto the ‘body map’. Participants were encouraged to explain what they had drawn with the purpose of evoking conversation about their reproductive bodies. Adolescent mothers were then invited to participate in a semi-structured interview to further expand on topics of SRH. The qualitative approach offered an excellent avenue to explore the unique SRH challenges that face indigenous adolescents in rural Cambodia. In particular, the use of visual data collection methods reduced the language and cultural barriers that have previously restricted or prevented qualitative exploration of this population group. Thematic analysis yielded six major themes: (1) understanding of the female reproductive body, (2) contraceptive knowledge, (3) contraceptive use, (4) barriers to contraceptive use, (5) sexual practices, (6) contact with healthcare facilities. 
Participants could name several modern contraceptive methods and knew where they could access family planning services. However, adolescent mothers explained that they gained this knowledge during antenatal care visits; consequently, participants had limited SRH knowledge, including contraceptive awareness, at the time of sexual initiation. Fear of the perceived side effects of modern contraception, including infertility, provided an additional barrier to contraceptive use for indigenous adolescents. Participants did not cite cost or geographic isolation as barriers to accessing SRH services. Child marriage and early sexual initiation were also identified as important factors contributing to the high prevalence of adolescent pregnancy in this population group. The findings support the Ministry of Education, Youth and Sports' (MoEYS) recent introduction of SRH education into the primary and secondary school curriculum but suggest indigenous girls in rural Cambodia require additional sources of SRH information. Results indicate adolescent girls’ first point of contact with healthcare facilities occurs after they become pregnant. Promotion of an effective continuum of care, by increasing access to healthcare services during the pre-pregnancy period, is suggested as a means of providing adolescent girls with an additional avenue to acquire SRH information.
Keywords: adolescent pregnancy, contraceptive use, family planning, sexual and reproductive health
Procedia PDF Downloads 111
465 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the ‘electrified road’ (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging.
The investigated materials for charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology material were also investigated.
Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger
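A bottom-up material comparison of the kind this abstract describes can be sketched as below. All quantities (fleet size, copper intensity, infrastructure masses) are illustrative placeholders, not the study's inventory data; only the 800 kWh vs. 200 kWh battery assumption comes from the abstract.

```python
# Minimal bottom-up sketch of the charging-scenario comparison. All unit
# material intensities and infrastructure masses are assumed placeholders.
FLEET = 1000            # hypothetical number of trucks served
CU_PER_KWH = 0.7        # kg copper per kWh of battery capacity (assumed)

scenarios = {
    # (battery kWh per truck from the abstract, infrastructure copper in kg, assumed)
    "plug_in":   (800, 50_000),
    "catenary":  (200, 400_000),
    "inductive": (200, 300_000),
}

def total_copper(batt_kwh, infra_cu, fleet=FLEET):
    """Fleet-wide battery copper plus charging-infrastructure copper."""
    return fleet * batt_kwh * CU_PER_KWH + infra_cu

totals = {name: total_copper(*p) for name, p in scenarios.items()}
# With these placeholder numbers the smaller 200 kWh battery dominates:
# the eroad scenarios need far less copper than the 800 kWh plug-in case.
```

Repeating the same accounting for steel, aluminium, and lithium gives the full material comparison; the battery term shrinks by a factor of four in the eroad scenarios, which is the effect the study quantifies.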
Procedia PDF Downloads 50
464 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants’ mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly used in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
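Frontal alpha asymmetry, the TF measure this review found most reliable, is commonly computed as the log-ratio of alpha-band power between homologous right and left frontal electrodes (typically F4 and F3). The sketch below uses synthetic signals and an assumed 250 Hz sampling rate; it illustrates the index itself, not any reviewed study's pipeline.

```python
# Illustrative frontal alpha asymmetry (FAA) computation on synthetic data.
# Channel names (F3 left, F4 right) and the 250 Hz rate are assumptions.
import numpy as np
from scipy.signal import welch

fs = 250                                # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic left/right frontal signals; F4 carries a stronger 10 Hz alpha rhythm
f3 = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
f4 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

def alpha_power(x):
    """Mean power spectral density in the 8-13 Hz alpha band."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= 8) & (f <= 13)].mean()

# FAA = ln(right alpha) - ln(left alpha). Since alpha power inversely indexes
# cortical activation, greater right alpha is conventionally read as relatively
# greater left-frontal activation, i.e. an approach/positive response.
faa = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
```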
Procedia PDF Downloads 111
463 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions
Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos
Abstract:
The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass at high temperature in the absence of air. The amount of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depends on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as moisture content. This activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural district with a predominant farming, agricultural, and forestry profile, and with a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes.
A cylindrical batch kiln (80 m³) with a 4.5 m diameter and 5 m height was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission technique, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume were calculated, ranging between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon accounts for 65% of PM₁₀ and 56% of PM₂.₅, inorganic ions for 2.8% and 2.3%, and sugars for 1.27% and 1.21%, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) and 27 g/kg (wood, dry basis) for PM₁₀ and PM₂.₅, respectively. With the data obtained in this study, it is possible to help fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon
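An emission factor of the kind reported here (g of PM per kg of dry wood) follows from the plume concentration and the flue-gas volume per kg of wood. The numbers below are illustrative placeholders chosen only to reproduce the order of magnitude of the reported 33 g/kg PM₁₀ factor; they are not the campaign's measured values.

```python
# Back-of-envelope sketch of a particulate emission-factor estimate.
# Both inputs are assumed values, not the study's measurements.
pm10_conc_g_m3 = 0.15      # mean PM10 concentration in the plume, g/m3 (assumed)
flue_gas_m3_per_kg = 220   # flue-gas volume per kg of dry wood, m3/kg (assumed)

# EF (g/kg, dry basis) = plume concentration x specific flue-gas volume
ef_g_per_kg = pm10_conc_g_m3 * flue_gas_m3_per_kg
# 0.15 g/m3 x 220 m3/kg = 33 g/kg, the order of magnitude reported above
```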
Procedia PDF Downloads 141
462 Portuguese Teachers in Bilingual Schools in Brazil: Professional Identities and Intercultural Conflicts
Authors: Antonieta Heyden Megale
Abstract:
With the advent of globalization, the social, cultural, and linguistic situation of the whole world has changed. In this scenario, the teaching of English in Brazil has become a booming business, and the belief that this language is essential to a successful life is propagated by the media, which sees it as a commodity and spares no effort to sell it. In this context, the growth of bilingual and international schools that have English and Portuguese as languages of instruction has become evident. According to federal legislation, all schools in the country must follow the curriculum guidelines proposed by the Ministry of Education of Brazil. It is then mandatory that, in addition to the specific foreign curriculum an international school subscribes to, it must also teach all subjects of the official minimum curriculum, and these subjects have to be taught in Portuguese. It is important to emphasize that, in these schools, English is the most prestigious language. Therefore, firstly, Brazilian teachers who teach Portuguese in such contexts find themselves in a situation in which they teach in a low-status language. Secondly, because such teachers’ actions are guided by a different cultural matrix, which differs considerably from Anglo-Saxon values and beliefs, they often experience intercultural conflict in their workplace. Taking this into consideration, this research, focusing on the trajectories of a specific group of Brazilian teachers of Portuguese in international and bilingual schools located in the city of São Paulo, intends to analyze how they discursively represent their own professional identities and practices.
More specifically, the objectives of this research are to understand, from the perspective of the investigated teachers, how they (i) narratively rebuild their professional careers and explain the factors that led them to an international or immersion bilingual school; (ii) position themselves with respect to their linguistic repertoire; (iii) interpret the intercultural practices they are involved with in school; and (iv) position themselves by foregrounding categories to determine their membership in the group of Portuguese teachers. We have worked with these teachers’ autobiographical narratives. The autobiographical approach assumes that the stories told by teachers are systems of meaning involved in the production of identities and subjectivities in the context of power relations. The teachers' narratives were elicited by the following trigger: "I would like you to tell me how you became a teacher in a bilingual/international school and what your impressions are about your work and about the context in which it is inserted". These narratives were produced orally, recorded, and transcribed for analysis. The teachers were also invited to draw their "linguistic portraits". The theoretical concepts of positioning and indexical cues were taken into consideration in data analysis. The narratives produced by the teachers point to intercultural conflicts related to their expectations and representations of others, which are never neutral or objective truths but discursive constructions.
Keywords: bilingual schools, identity, interculturality, narrative
Procedia PDF Downloads 336
461 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas during floods. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results compared to the non-equilibrium wall function. Thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity inlet condition at the flow inlet boundary, and a pressure outlet condition at the boundaries in contact with air provide the best possible results. The standard wall function is chosen, and the standard k-ε turbulence model gives the results most consistent with the experiments. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results agree well with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
Keywords: circular vertical, spillway, numerical model, boundary conditions
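The numerical-vs-experimental discharge comparison described in this abstract reduces to a relative-error calculation per water level. The values below are invented for illustration; only the qualitative trend (error growing with head over the crest) mirrors the abstract's observation.

```python
# Illustrative discharge validation check. All heads and discharges are
# made-up placeholder values, not the study's data.
heads_m = [0.05, 0.10, 0.15, 0.20]          # water level over crest, m (assumed)
q_exp = [0.012, 0.035, 0.061, 0.088]        # "experimental" discharge, m3/s
q_num = [0.0122, 0.0362, 0.0645, 0.0954]    # "simulated" discharge, m3/s

# Relative error |Q_num - Q_exp| / Q_exp at each head
rel_err = [abs(n - e) / e for n, e in zip(q_num, q_exp)]
# The list increases monotonically with head, mirroring the reported growth
# of the numerical-experimental discrepancy at higher water levels.
```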
Procedia PDF Downloads 84
460 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. According to Mayer et al.’s model of trust, the characteristics of speakers, including ability, benevolence, and integrity, can influence children’s trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals’ adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children’s trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, and 1-β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral) and the fulfilment of promises (kept vs. broken) on children’s trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children’s trust judgments. Experiment 2 utilized single-choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust.
The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers’ degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment in moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
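The reported a priori sample size can be reproduced with a noncentral chi-square power calculation of the kind G*Power performs. The df = 1 below is our assumption (the abstract does not state it), chosen because it reproduces N = 100 for the stated inputs.

```python
# Sketch of the power analysis behind "N = 100 for w = 0.30, alpha = .05,
# 1-beta = .85". The df = 1 chi-square test is an assumption on our part.
from scipy.stats import chi2, ncx2

w, alpha, df, n = 0.30, 0.05, 1, 100
crit = chi2.ppf(1 - alpha, df)   # critical value of the central chi-square
lam = n * w**2                   # noncentrality parameter at N = 100
power = ncx2.sf(crit, df, lam)   # achieved power, close to the stated 0.85
```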
Procedia PDF Downloads 53
459 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer
Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang
Abstract:
In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD. Defect-selective etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF’s type. The CL spectrum shows the maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV, and 2.410 eV, respectively. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are verified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. The steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. Firstly, at one intersection, the upper two bilayers of the four-bilayer step grow down and block the lower two, generating an IGSF. Secondly, the step flow successively grows over the IGSF and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the Egx is estimated to be 2.512 eV, and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for 2SSF nucleation at an intersection of off-cut- and TSD-introduced steps is proposed.
Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide
Procedia PDF Downloads 315
458 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As a device is scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry. FinFET is one of the most promising structures. The double-gate 2D Fin field-effect transistor has the benefit of suppressing short-channel effects (SCEs) and functions well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool of significant importance for transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits, and for understanding field-effect transistors. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg curve and transconductance is examined. Device performance was affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes.
For every set of simulations, the device's characteristics are simulated at two drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor, so minimizing it is crucial for circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, while the source and drain lengths have no significant effect on the on-off ratio. The transconductance plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and 2380 S/m with a source and drain extension length of 5 nm.
Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
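Transconductance extraction from an Id-Vg sweep, as used in this abstract, can be sketched as a finite-difference derivative gm = dId/dVg. The sweep values below are invented for illustration (a crude square-law curve above an assumed 0.3 V threshold), not the MuGFET simulator output.

```python
def transconductance(vg, id_curr):
    """Central-difference gm = dId/dVg from an Id-Vg sweep.

    vg, id_curr: equally spaced lists of gate voltages (V) and drain
    currents (A). Returns a list of gm values (S), one per point,
    with one-sided differences at the two edges.
    """
    n = len(vg)
    gm = []
    for i in range(n):
        lo = max(i - 1, 0)
        hi = min(i + 1, n - 1)
        gm.append((id_curr[hi] - id_curr[lo]) / (vg[hi] - vg[lo]))
    return gm

# Illustrative sweep (not simulator data): Id rises quadratically
# above an assumed 0.3 V threshold, a crude long-channel model.
vg = [0.1 * k for k in range(8)]                    # 0.0 ... 0.7 V
id_curr = [1e-3 * max(v - 0.3, 0.0) ** 2 for v in vg]
gm = transconductance(vg, id_curr)
```

In practice the peak of the gm curve, rather than a single point, is reported as the device's transconductance figure of merit.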
Procedia PDF Downloads 60
457 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require location information about these devices to operate or to deliver better services to users. Although there are several ways of acquiring location data for mobile devices, the WLAN fingerprinting approach has been adopted in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio-frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing explicit mathematical relationships between the RSSI measurements and the localization parameters is one way to model the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired.
The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted fall into two different task types: the longitude and latitude of mobile devices are real-valued (a regression problem), while the floor and building are categorical (a classification problem). This research work presents artificial-neural-network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to evaluate the trained systems in terms of Mean Absolute Error (MAE) for the regression task and error rates for the classification task.
Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
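The study itself uses neural networks; as a minimal self-contained illustration of the fingerprint-matching idea (RSSI vector in, location out, with regression and classification targets side by side), the sketch below substitutes a nearest-neighbour baseline. All access-point readings and location labels are invented.

```python
# Hypothetical fingerprint database: RSSI vectors (dBm, one value per
# access point) mapped to (longitude, latitude, floor) labels.
FINGERPRINTS = [
    ([-40, -70, -85], (12.10, 55.30, 0)),
    ([-55, -50, -80], (12.11, 55.31, 0)),
    ([-80, -45, -60], (12.12, 55.32, 1)),
    ([-85, -75, -40], (12.13, 55.33, 2)),
]

def locate(rssi, k=2):
    """Predict (longitude, latitude, floor) for an observed RSSI vector.

    Regression targets (longitude/latitude) are averaged over the k
    nearest fingerprints; the categorical floor label is taken from
    the single closest fingerprint.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(FINGERPRINTS, key=lambda fp: dist(rssi, fp[0]))
    top = [label for _, label in ranked[:k]]
    lon = sum(t[0] for t in top) / k
    lat = sum(t[1] for t in top) / k
    floor = ranked[0][1][2]          # classification: nearest match wins
    return lon, lat, floor
```

A trained neural network replaces this lookup with a learned, generalizing mapping, which is what lets the paper's system improve as more fingerprints are acquired.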
Procedia PDF Downloads 346
456 Violence against Women: A Study on the Aggressors' Profile
Authors: Giovana Privatte Maciera, Jair Izaías Kappann
Abstract:
Introduction: Violence against women is a complex phenomenon that accompanies a woman throughout her life and results from a social, cultural, political, and religious construction based on differences between the genders. These differences are felt mainly because of the still-present patriarchal system, which naturalizes and legitimates the asymmetry of power. As a consequence of women's long historical and collective effort for legislation against the impunity of violence against women on the national scene, a law known as the Maria da Penha Law was enacted in 2006. The law was created as a protective measure for women who were victims of violence and, consequently, for the punishment of the aggressor. Methodology: Analysis of police inquiries filed with the Police Station for the Defense of Women of the city of Assis, under formal authorization of the courts, for the period 2013 to 2015. For the evaluation of the results, content analysis and the theoretical framework of psychoanalysis will be used. Results and Discussion: The final analysis of the inquiries demonstrated that violence against women is reproduced by society and that the aggressor, in most cases, is a member of the victim's own family, mainly the current or former spouse. The most common kinds of aggression were threats of bodily harm and physical violence, which normally occur accompanied by psychological violence, the most painful for the victims. Most of the aggressors were white, older than the victim, employed, and had only primary schooling. Unlike what was expected, however, only a minority of the aggressors were users of alcohol and/or drugs or had children in common with the victim. There is a contrast between the number of victims who admitted having suffered some type of violence earlier at the hands of the same aggressor and the number of victims who had registered an occurrence before.
The aggressors often deny the facts in their testimony or try to justify their act as if the victim were to blame. It is believed that several interacting factors can influence the aggressor to commit the abuse, including psychological, personal, and sociocultural factors. One hypothesis is that the aggressor has a history of violence in his family of origin. After the aggressor is judged, whether condemned or not, there is usually no rehabilitation plan or supervision that would enable him to change. Conclusions: The study shows the importance of understanding the aggressor's characteristics and the reasons that led him to commit such violence, making possible the implementation of appropriate treatment to prevent and reduce aggression, as well as the creation of programs and actions that enable communication and understanding of the theme. This matters because recidivism is still high: the punitive system is not enough, and the law remains ineffective and inefficient in certain aspects and in its own functioning. A compulsion to repeat is perceived both in victims and in aggressors, because they almost always end up involved in disturbed and violent relationships characterized by subordination-dominance.
Keywords: aggressors' profile, gender equality, Maria da Penha law, violence against women
Procedia PDF Downloads 333
455 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
The present article describes a regulatory impact analysis (RIA) of the efficiency of contracting of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. An inefficient contracting of transmission capacity therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. To promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming at a more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, the IIE, applies when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second, the IIOC, applies when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting either less capacity than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of contracting of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015.
For this, indicators such as the contracting efficiency indicator (ECI), the excess-of-demand indicator (EDI), and the over-contracting-of-demand indicator (ODI) were used. Through the ECI analysis, the results demonstrated a decrease in contracting efficiency, a behaviour that was occurring even before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to agents that their contracted values are not adequate to sustain the service provided to their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
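The two-sided incentive logic described above can be sketched as a check of measured demand against the contracted amount. The tolerance band, function name, and return labels below are purely illustrative assumptions, not the actual terms or formulas of NR 666.

```python
def inefficiency_flags(contracted_mw, measured_peak_mw, tolerance=0.05):
    """Illustrative check of the two inefficiency situations.

    Flags 'excess' when the measured peak exceeds the contracted
    amount by more than an assumed tolerance band, and
    'over-contracting' when the contracted amount exceeds the
    measured peak by more than the same band. Both the 5% band and
    the overall structure are assumptions for illustration only.
    """
    upper = contracted_mw * (1 + tolerance)
    lower = contracted_mw * (1 - tolerance)
    if measured_peak_mw > upper:
        return "excess"            # would attract an IIE-style charge
    if measured_peak_mw < lower:
        return "over-contracting"  # would attract an IIOC-style charge
    return "efficient"
```

The point of the regulation is precisely this symmetry: either direction of mis-contracting attracts a charge, so agents converge toward contracting their real need.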
Procedia PDF Downloads 118
454 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics
Authors: Mark Wiebe
Abstract:
One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas' metaphysical framework represents a fundamental shift; Gilson describes Thomas' metaphysics as a turn from a form of "essentialism" to "existentialism." It can be argued that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism, which weakens their force against Thomas' positions. In order to demonstrate these claims, it is helpful to consider Thomas' metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface is how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham.
Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. Contrastingly, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself), rather than essence or form, must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than of non-being. Consequently, each creature imitates and participates in God's perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God's simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas' thought.
Keywords: theology, philosophy of religion, metaphysics, philosophy
Procedia PDF Downloads 71
453 Influence of Thermal Annealing on Phase Composition and Structure of a Quartz-Sericite Mineral
Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.
Abstract:
Raw materials with a high content of potassium oxide are widely used in ceramic technology to prevent or reduce the deformation of ceramic goods during drying and thermal annealing. Owing to its low melting temperature, such material is also used to lower the annealing temperature during the fabrication of ceramic goods [1,2]. So-called "porcelain (or China) stones", quartz-sericite (muscovite) minerals, can likewise be used to prevent deformation, as the potassium oxide content of muscovite is rather high (SiO2 + KAl2[AlSi3O10](OH)2) [3]. To assess the possibility of using this mineral in ceramic manufacture, the present article investigates the influence of thermal processing on the phase and chemical composition of this raw material. As for other ceramic raw materials (kaolin, white-firing clays), the basic industrial quality requirements for a "porcelain stone" are: small particle size, relatively high uniformity of distribution of components and phases, white colour after firing, and a small content of colorant (chromophore) oxides (Fe2O3, FeO, TiO2, etc.) [4,5]. In the presented work, a natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for investigation by scanning electron microscopy. Powder with a particle size of up to 63 μm was used for X-ray diffractometry and chemical analysis. The samples were annealed at 900, 1120, and 1350 °C for 1 hour. The chemical composition of the Boynaksay raw material according to chemical analysis is presented in Table 1; for comparison, the compositions of raw materials from Russia and the USA are also presented. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using a JEOL JXA-8800R electron probe scanning electron microscope.
Figure 1 presents scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si, and K atoms in the sample. As can be seen, the fine-grained, white, dense mineral includes quartz, sericite, and a small content of impurity minerals. The quartz crystals mostly have sizes from 80 to 500 μm. Between the quartz crystals lie sericite inclusions of tabular form with a radiant structure; the sericite crystals are about 40-250 μm in size. Using data on interplanar distances [6,7] and ASTM powder X-ray diffraction data, it is shown that the natural "porcelain stone" quartz-sericite consists of quartz (SiO2), sericite of the muscovite type (KAl2[AlSi3O10](OH)2), and kaolinite (Al2O3·2SiO2·2H2O) (see Figure 2 and Table 2). As seen in Figure 3 and Table 3a, after annealing at 900 °C the quartz-sericite contains quartz (SiO2) and muscovite (KAl2[AlSi3O10](OH)2); the peaks related to kaolinite are absent. After annealing at 1120 °C, full disintegration of the muscovite and formation of a mullite phase (3Al2O3·2SiO2) is observed (weak mullite peaks appear in Figure 3b and Table 3b). After annealing at 1350 °C, the samples contain crystalline phases of quartz and mullite (Figure 3c and Table 3c). Mullite is well known to give ceramics high density and abrasive and chemical stability. Thus, the experimental data obtained on the formation of various phases during thermal annealing can be used in developing fabrication technology for advanced materials. Conclusion: The influence of thermal annealing in the interval 900-1350 °C on the phase composition and structure of the quartz-sericite mineral is investigated. It is shown that during annealing the phase content of the raw material changes: after annealing at 1350 °C, the samples contain crystalline phases of quartz and mullite, which gives ceramics high density and abrasive and chemical stability.
Keywords: quartz-sericite, kaolinite, mullite, thermal processing
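Phase identification from powder diffraction of the kind described rests on Bragg's law, d = λ / (2 sin θ). The sketch below converts 2θ peak positions to interplanar distances for comparison with reference card values; the Cu Kα wavelength and the peak list are illustrative assumptions, not the measured Boynaksay pattern.

```python
import math

CU_KALPHA = 1.5406  # Cu K-alpha wavelength in angstroms (assumed source)

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Interplanar distance d (angstroms) from a 2-theta peak position,
    via Bragg's law n*lambda = 2*d*sin(theta), taking n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Illustrative peak positions: for Cu K-alpha, quartz's strongest
# reflection lies near 2-theta = 26.6 deg (d of about 3.34 A).
peaks = [20.9, 26.6, 50.1]
d_values = [d_spacing(p) for p in peaks]
```

Matching such d-spacings (and relative intensities) against card files is how the quartz, muscovite, kaolinite, and mullite phases are assigned across the annealing series.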
Procedia PDF Downloads 412
452 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers
Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal
Abstract:
Background: We aim to develop an integrated device comprising a single-probe EEG sensor and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) to assess attention during a given structured task (painting three segments of a circle using three different colours, namely red, green, and blue), the CCD sensor depicts the movement pattern of subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (pre-schoolers aged 3-5 years). During the painting of the three circle segments, the absolute power of the delta and beta EEG waves is found to correlate with the relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in the CPT, i.e., moving variously coloured balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD, and we compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test
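Absolute band power of the kind used here can be sketched with a plain discrete Fourier transform. The band edges (delta roughly 0.5-4 Hz, beta roughly 13-30 Hz), sampling rate, and the synthetic single-channel trace below are assumptions for illustration, not the MAHD pipeline.

```python
import cmath
import math

def band_power(samples, fs, f_lo, f_hi):
    """Absolute power of `samples` in the band [f_lo, f_hi] Hz.

    Uses a direct DFT (O(N^2), fine for short windows) and sums the
    squared magnitudes of the positive-frequency bins whose
    frequencies fall inside the band.
    """
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power / n ** 2

# Synthetic one-second window: a pure 20 Hz (beta-band) oscillation,
# standing in for a real frontal-lobe EEG trace.
fs, n = 128, 128
eeg = [math.sin(2 * math.pi * 20 * t / fs) for t in range(n)]
delta = band_power(eeg, fs, 0.5, 4.0)
beta = band_power(eeg, fs, 13.0, 30.0)
```

A real implementation would window the signal and use an FFT-based estimator (e.g. Welch's method), but the band-summation idea is the same.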
Procedia PDF Downloads 97
451 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be monitored and controlled online, from a mobile phone or computer. Smart fields and gardens are among the best ways to optimize agricultural equipment and have a direct effect on the growth of plants and agricultural products. Smart farms are the topic discussed here, together with the Internet of Things and artificial intelligence. Agriculture is becoming smarter every day: from large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect such data than in previous years. Data on soil chemistry allows informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers optimize irrigation processes while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors. These sensors are embedded in battery-powered IoT devices that can operate for years with access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now securely manage fleets of devices at scale.
IoT cloud services also provide a set of application-enablement services that can easily be used by developers, allowing them to focus on building their application's business logic. These developments have produced powerful new IoT applications that can be used in various industries, such as agriculture and the building of smart farms. But what makes today's farms truly smart farms? Put another way: when will the technologies associated with smart farms reach the point where the intelligence they provide exceeds that of experienced, professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno
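A sensor-driven irrigation decision of the kind described can be sketched as a simple threshold rule combining a soil-moisture reading with a rain forecast. The function name, units, and thresholds below are invented for illustration; a deployed controller would pull these readings from real sensors and a weather API.

```python
def irrigation_decision(soil_moisture_pct, rain_forecast_mm,
                        dry_threshold=30.0, rain_skip_mm=5.0):
    """Decide whether to irrigate from two hypothetical readings.

    Irrigate when the soil is drier than `dry_threshold` percent,
    unless meaningful rain (>= `rain_skip_mm`) is forecast, in which
    case water is saved by waiting for the rain.
    """
    if soil_moisture_pct >= dry_threshold:
        return "skip: soil moist enough"
    if rain_forecast_mm >= rain_skip_mm:
        return "skip: rain expected"
    return "irrigate"
```

Even this toy rule shows why data matters: without the forecast input, the controller would water just before rain and waste the very resource it is meant to conserve.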
Procedia PDF Downloads 55
450 Nuclear Near Misses and Their Learning for Healthcare
Authors: Nick Woodier, Iain Moppett
Abstract:
Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these events result in low harm, patients are being significantly harmed by the very processes meant to help them. Healthcare therefore seeks to improve patient safety by taking learning from other industries perceived to be more mature in their management of safety events. Of particular interest to healthcare are 'near misses': events that almost happened but for an intervention. Healthcare has no guidance on how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector, chosen as an exemplar for its status as an ultra-safe industry, to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature; the review protocol is accessible online. The GT aimed to develop theories about how the nuclear sector manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. The focus was on near misses related to radiation release or exposure. Results: Eight nuclear-sector interviews contributed to the GT, across nuclear power, decommissioning, weapons, and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that the nuclear sector has a particular focus on precursors and low-level events, with regulation supporting their management.
Exploration of definitions led to recognition of the importance of several interventions in a sequence of events, interventions that do not rely solely on humans, since humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified; for learning, however, the role of operating-experience learning groups was identified as an exemplar. The safety culture across the nuclear sector was heard to vary, though, which undermined the reporting of near misses and other safety events. Some parts of the industry described their focus on near misses as new and said that, despite potential risks, progress to mitigate hazards is slow. Conclusions: Healthcare often sees 'nuclear', like other ultra-safe industries such as 'aviation', as homogeneous. However, the findings here suggest significant differences in safety culture and maturity across various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in nuclear, such as how near misses are defined and how learning is shared through operating-experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, it may be more mature in some areas of safety.
Keywords: culture, definitions, near miss, nuclear safety, patient safety
Procedia PDF Downloads 103
449 Graphic Narratives: Representations of Refugeehood in the Form of Illustration
Authors: Pauline Blanchet
Abstract:
In a world where images are a prominent part of our daily lives and a way of absorbing information, the analysis of the representation of migration narratives is vital. This thesis raises questions concerning the power of illustrations, drawings, and visual culture in representing migration narratives in the age of Instagram. The rise of graphic novels and comics has come about in the last fifteen years, with contemporary authors engaging with complex social issues such as migration and refugeehood. Because of this, refugee subjects often appear in these narratives, whether as autobiographical stories or with the subject included in the creative process. Growth in discourse around migration has been present in other art forms: in 2018, there were dedicated exhibitions around migration, such as Tania Bruguera at the Tate (2018-2019) and 'Journeys Drawn' at the House of Illustration (2018-2019), as well as dedicated film festivals (2018: the Migration Film Festival), showing recent interest in using the arts as a medium of expression regarding themes of refugeehood and migration. Graphic visuals are fast becoming a key instrument for representing migration, and the central aim of this paper is to show the strengths and limitations of this form, as well as the methodology used by the actors in the production process. Works released in the last ten years have not been analysed in the same context as earlier graphic novels such as Palestine and Persepolis. While much research has been done on mass-media portrayals of refugees in photography and journalism, there is a lack of literature on representation through illustration. There is also little research on the accessibility of graphic novels, such as where they can be found and what the intentions behind writing them are.
It is interesting to see why these authors, NGOs, and curators have decided to highlight migrant narratives at a time when the mainstream media has covered the 'refugee crisis' extensively. Using primary data from one-on-one interviews with artists, curators, and NGOs, this paper investigates the effectiveness of graphic novels for depicting refugee stories as a viable alternative to other mass media. The paper is divided into two distinct sections. The first concerns the form of the comic itself and how it limits or strengthens the representation of migrant narratives. This involves analysing the layered and complex forms that comics allow, such as multimedia pieces, the use of photography, and forms of symbolism. It also shows how illustration allows for the anonymity of refugees, the empathetic aspect of the form, and how the history of the graphic novel has made space for positive representations of women in the last decade. The second section analyses the creative and methodological process of the actors and their involvement in the production of the works.
Keywords: graphic novel, refugee, communication, media, migration
Procedia PDF Downloads 112
448 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes
Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter
Abstract:
Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and after incidents. UAVs are becoming an established tool, extending the reach of the photographer and offering new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of captured images; close-proximity flights are therefore required for more detailed models. As technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from a legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine the height of least effect at which to fly and the commercial propeller type that generates the smallest disturbance in the dataset tested. In this study, a range of commercially available 4-inch propellers was chosen as a starting point: they are commonly available, and their small size makes them well suited to operation within confined spaces.
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller. This mimicked a complete throttle cycle and controlled the device to ensure repeatability, removing the variances of battery packs and complex UAV structures and allowing for a more robust setup. The only changing factors were therefore the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to highlight some of the issues caused by UAVs within active crime scenes.
Keywords: dispersal, evidence, propeller, UAV
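The abstract does not specify the computer vision pipeline used, but the dispersal measurement it describes can be sketched in a few lines: threshold a frame to isolate particle pixels, then report their RMS distance from the centroid. Everything below (frame size, threshold, and the synthetic before/after frames) is illustrative, not the study's actual data.

```python
import numpy as np

def dispersal_radius(frame, threshold=128):
    """RMS distance of above-threshold (particle) pixels from their centroid."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return 0.0
    cx, cy = xs.mean(), ys.mean()
    return float(np.sqrt(((xs - cx) ** 2 + (ys - cy) ** 2).mean()))

# Synthetic before/after frames: particles spread out after the throttle cycle.
before = np.zeros((100, 100), dtype=np.uint8)
after = np.zeros((100, 100), dtype=np.uint8)
before[48:52, 48:52] = 255        # tight 4x4 cluster near the centre
after[30:70:10, 30:70:10] = 255   # same pixel count, spread widely

r0, r1 = dispersal_radius(before), dispersal_radius(after)
print(r0, r1)  # the after-radius is much larger, indicating disturbance
```

Comparing the radius across propeller types and operating heights would then rank configurations by disturbance, which is the comparison the study reports.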
Procedia PDF Downloads 162
447 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle
Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha
Abstract:
An open source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring and thermal imaging. A double rotomoulded hull boat is deployed, which is rugged, tough, quick to deploy and fast-moving. It is suitable for environmental monitoring, and it is designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor is capable of up to 7 km/h. The motor is integrated with an open source controller based on a Cortex M4F for adjusting the direction of the motor. The UMSV can be operated in three modes: semi-autonomous, manual and fully automated. One channel of a 2.4 GHz radio-link 8-channel transmitter is used for toggling between the different modes of the UMSV. An onboard GPS system has been fitted to the electric outboard marine motor to determine range and GPS position. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux based open source processor, facilitating real-time capture of thermal images; the results are stored on a micro SD card, which is the data storage device for the system. The thermal camera is interfaced to the open source processor through the SPI protocol. These thermal images are used for finding oil spills and for locating people who are drowning in low visibility during the night. A real-time clock (RTC) module is attached to the battery to provide the date and time of the thermal images captured. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, through which a range of 40 miles has been achieved at higher power output.
A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH, temperature, water level and pressure (absolute). The maximum pressure it can withstand is 160 psi, corresponding to depths of up to 100 m. This work represents a field demonstration of an open source based autonomous navigation system for a marine surface vehicle.
Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe
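As one concrete interface detail, onboard GPS modules typically report position as NMEA 0183 sentences. A minimal parser for the standard $GPGGA fix sentence might look as follows (checksum validation is omitted, and the example sentence is a textbook one, not from this vehicle):

```python
def parse_gga(sentence):
    """Parse a NMEA $GPGGA sentence into (lat, lon, altitude_m).

    Latitude is ddmm.mmmm and longitude dddmm.mmmm; S/W hemispheres
    are returned as negative decimal degrees.
    """
    fields = sentence.split(',')
    assert fields[0].endswith('GGA'), 'not a GGA sentence'
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == 'S':
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == 'W':
        lon = -lon
    alt = float(fields[9])  # antenna altitude above mean sea level, metres
    return lat, lon, alt

lat, lon, alt = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(lat, lon, alt)  # ~48.1173 N, ~11.5167 E, 545.4 m
```

Stamping each parsed fix with the RTC time would tie the position log to the thermal images described above.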
Procedia PDF Downloads 238
446 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures
Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny
Abstract:
Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of the raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil and charcoal. The energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of the raw biomass materials can be reduced by the use of torrefaction. Torrefaction is a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. From a chemical point of view, the goal of the pretreatment is the removal of water and of the acidic groups of hemicelluloses, or of the whole hemicellulose fraction, with minor degradation of the cellulose and lignin in the biomass. Thus, the stability of the biomass against biodegradation increases, and its energy density rises. The volume of the raw material also decreases, so the expenses of transportation and storage are reduced as well. Biooil is the major product of pyrolysis and an important by-product of torrefaction of biomass. The composition of biooil depends mostly on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody sample (black locust) and two herbaceous samples (rape straw and wheat straw).
The biooil contains C5 and C6 anhydrosugar molecules, as well as aromatic compounds, originating from hemicellulose, cellulose, and lignin, respectively. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction is different in wood and in herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and the composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of the decomposition products, as well as the thermal stability of the samples, was measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of the lignin products of the biooil and the applied temperatures.
Keywords: pyrolysis, torrefaction, biooil, lignin
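PCA on a matrix of Py-GC/MS peak areas can be sketched with plain numpy via the singular value decomposition of the mean-centred data. The marker compounds and peak areas below are invented for illustration; they are not the study's data, only a demonstration of the temperature-trend separation the abstract describes.

```python
import numpy as np

def pca(X, n_components=2):
    """Principal component analysis via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]  # sample coordinates
    explained = S ** 2 / (S ** 2).sum()              # variance ratios
    return scores, Vt[:n_components], explained[:n_components]

# Hypothetical relative peak areas of four lignin markers (e.g. guaiacol,
# syringol, 4-vinylphenol, vanillin) at increasing pyrolysis temperatures.
X = np.array([
    [0.10, 0.05, 0.02, 0.08],   # 250 C
    [0.20, 0.12, 0.05, 0.10],   # 300 C
    [0.45, 0.30, 0.15, 0.12],   # 400 C
    [0.60, 0.42, 0.22, 0.13],   # 500 C
])
scores, loadings, explained = pca(X)
print(explained)  # the first component captures most of the temperature trend
```

Plotting the sample scores against temperature is the usual way such a correlation between product composition and applied temperature is visualized.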
Procedia PDF Downloads 327
445 Investigation of Mass Transfer for RPB Distillation at High Pressure
Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock
Abstract:
In recent decades, there has been significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is on elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops. Additionally, substantial effort has been invested in exploring the adaptation of RPB technology for offshore deployment. This comprehensive study delves into the intricacies of nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the authors' knowledge, no cryogenic experimental testing of nitrogen removal via an RPB has previously been conducted. The research identifies pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as the critical determinants of the desired separation performance. The results are remarkable: nitrogen is reduced to less than one mole % in the Liquefied Natural Gas (LNG) product, with less than three mole % methane in the nitrogen-rich gas stream. The study further examines the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of a Transfer Unit (ATU) as the rotational speed escalates. Notably, the condenser and reboiler impose varying demands depending on the operating pressure: operation at the lower pressure of 12 bar requires a more substantial duty than operation of the RPB at 15 bar. In pursuit of optimal energy efficiency, a meticulous sensitivity analysis is conducted, pinpointing the combination of pressure and rotating speed that minimizes overall energy consumption.
These findings underscore the effectiveness of the RPB distillation approach in achieving efficient separation, even when operating under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operational pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point toward the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.
Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed
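The sensitivity analysis described, finding the pressure and rotating speed that minimize overall energy consumption, amounts to a constrained search over the operating grid. The duty model and the feasibility constraint below are entirely made up for illustration; only the search structure reflects the abstract.

```python
import itertools

def total_duty(pressure_bar, speed_rpm):
    """Illustrative (invented) energy model: condenser/reboiler duty falls
    with pressure while rotor power rises with speed. Not the study's data."""
    condenser = 500.0 / pressure_bar   # kW, cheaper at higher pressure
    reboiler = 420.0 / pressure_bar    # kW
    rotor = 1.5e-5 * speed_rpm ** 2    # kW, roughly quadratic in speed
    return condenser + reboiler + rotor

pressures = [12, 13, 14, 15]           # bar
speeds = range(600, 1601, 200)         # rpm

# Toy proxy for meeting the N2 spec: enough centrifugal mass transfer
# requires a minimum speed that relaxes as pressure rises.
feasible = [(p, s) for p, s in itertools.product(pressures, speeds)
            if s * p >= 18000]

best = min(feasible, key=lambda ps: total_duty(*ps))
print(best, round(total_duty(*best), 1))
```

The structure makes the trade-off visible: higher pressure cuts condenser and reboiler duty but each pressure admits only a subset of speeds, so the optimum sits on the constraint boundary rather than at the grid corner.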
Procedia PDF Downloads 49
444 Dual-Layer Microporous Layer of Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Various RH Conditions
Authors: Grigoria Athanasaki, Veerarajan Vimala, A. M. Kannan, Louis Cindrella
Abstract:
Energy usage has increased throughout the years, leading to severe environmental impacts. Since the majority of energy is currently produced from fossil fuels, there is a global need for clean energy solutions. Proton Exchange Membrane Fuel Cells (PEMFCs) offer a very promising solution for transportation applications because of their solid configuration and low-temperature operation, which allows them to start quickly. One of the main components of PEMFCs is the Gas Diffusion Layer (GDL), which manages water and gas transport and has a direct influence on fuel cell performance. In this work, a novel dual-layer GDL with gradient porosity was prepared, using polyethylene glycol (PEG) as the pore former, to improve gas diffusion and water management in the system. The microporous layer (MPL) of the fabricated GDL consists of PUREBLACK carbon powder, sodium dodecyl sulfate as a surfactant, and 34 wt.% PTFE; the gradient porosity was created by applying one layer using 30 wt.% PEG on the carbon substrate, followed by a second layer without any pore former. The total carbon loading of the microporous layer is ~3 mg.cm-2. For the assembly of the catalyst layer, a Nafion membrane (Ion Power, Nafion Membrane NR211) and a Pt/C electrocatalyst (46.1 wt.%) were used. The catalyst ink was deposited on the membrane via a microspraying technique. The Pt loading is ~0.4 mg.cm-2, and the active area is 5 cm2. The sample was characterized ex-situ via wetting angle measurement, Scanning Electron Microscopy (SEM), and Pore Size Distribution (PSD) analysis to evaluate its characteristics. Furthermore, for the performance evaluation, in-situ characterization via fuel cell testing took place using H2/O2 and H2/air as reactants, under 50, 60, 80, and 100% relative humidity (RH). The results were compared to a single-layer GDL, fabricated with the same carbon powder and loading as the dual-layer GDL, and to a commercially available GDL with MPL (AvCarb2120).
The findings reveal highly hydrophobic properties of the microporous layer for both PUREBLACK based samples, while the commercial GDL demonstrates hydrophilic behavior. The dual-layer GDL shows high and stable fuel cell performance under all RH conditions, whereas the single-layer GDL manifests a drop in performance at high RH in both oxygen and air, caused by catalyst flooding. The commercial GDL shows very low and unstable performance, possibly because of its hydrophilic character and thinner microporous layer. In conclusion, the dual-layer GDL with PEG appears to have improved gas diffusion and water management in the fuel cell system. Because its porosity increases from the catalyst layer toward the carbon substrate, it allows easier access of the reactant gases from the flow channels to the catalyst layer and more efficient water removal from the catalyst layer, leading to higher performance and stability.
Keywords: gas diffusion layer, microporous layer, proton exchange membrane fuel cells, relative humidity
Procedia PDF Downloads 121
443 Civic E-Participation in Central and Eastern Europe: A Comparative Analysis
Authors: Izabela Kapsa
Abstract:
Civic participation is an important aspect of democracy. The contemporary model of democracy is based on citizens' participation in political decision-making (deliberative democracy, participatory democracy). This participation takes many forms, such as displaying slogans and symbols, voting, social consultations, political demonstrations, membership in political parties, or organizing civil disobedience. The countries of Central and Eastern Europe after 1989 are characterized by great social, economic and political diversity. Civil society is also part of the process of democratization. Civil society, founded on the rule of law, civil rights such as freedom of speech and association, and private ownership, was to play a central role in the development of liberal democracy. Among the many interpretations defining the concept of contemporary democracy, one can assume that the terms civil society and democracy, although different in meaning, nowadays overlap. In the post-communist countries, the process of shaping and maturing societies took place in the context of a struggle with a state governed by undemocratic power. Distrust of the state and repudiation of its representative institutions, which in the past was the only way to manifest and defend one's identity, became after the breakthrough one of the main obstacles to the development of civil society. In Central and Eastern Europe, the development of civil society faces many challenges, for example, eliminating economic poverty, implementing educational campaigns, overcoming consciousness-related obstacles, building social capital and addressing the deficit of social activity. Obviously, civil society does not only entail electoral turnout but a broader participation in the decision-making process, which is impossible without direct and participative democratic institutions.
This article considers such broad forms of civic participation and their characteristics in Central and Eastern Europe. The paper attempts to analyze the functioning of electronic forms of civic participation in Central and Eastern European states. These include not only referendums and referendum initiatives but also other forms of political participation, such as public consultations, participative budgets, or e-government. In particular, this paper will broadly present electronic administration tools, the application of which results both from legal regulations and from increasingly common practice in state and city management. In the comparative analysis, the experiences of the post-communist bloc countries are summed up to indicate the challenges and possible goals for further development of this form of citizen participation in the political process. The author argues that in order to function efficiently and effectively, states need to involve their citizens in the political decision-making process, especially with the use of electronic tools.
Keywords: Central and Eastern Europe, e-participation, e-government, post-communism
Procedia PDF Downloads 193
442 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward-skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of the athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 +/- 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), radar (Stalker ATS II, Texas, USA), aimed at the participant's waist, was used to collect instantaneous velocity. Sprint velocities were modeled using a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed.
It was found that on-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
Keywords: ice-hockey, sprint, skating, power
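The mono-exponential model referenced is commonly written v(t) = vmax * (1 - exp(-t/tau)), with instantaneous acceleration a(t) = (vmax - v)/tau, so the plateau check reduces to asking whether the modeled acceleration at the end of the sprint is near zero. A minimal sketch on synthetic, noiseless radar data follows; the grid-search fit stands in for whatever optimizer the original custom script used.

```python
import numpy as np

def model_v(t, vmax, tau):
    """Mono-exponential sprint velocity model: v(t) = vmax * (1 - exp(-t/tau))."""
    return vmax * (1.0 - np.exp(-t / tau))

def fit_profile(t, v):
    """Coarse grid-search least-squares fit (avoids an optimizer dependency)."""
    best, best_sse = None, np.inf
    for vmax in np.linspace(6.0, 12.0, 61):      # m/s
        for tau in np.linspace(0.5, 2.5, 81):    # s
            sse = ((v - model_v(t, vmax, tau)) ** 2).sum()
            if sse < best_sse:
                best, best_sse = (vmax, tau), sse
    return best

# Synthetic radar trace for a sprint reaching ~8 m/s with tau = 1.2 s.
t = np.linspace(0.1, 6.0, 60)
v = model_v(t, 8.0, 1.2)
vmax, tau = fit_profile(t, v)

# Terminal acceleration a = (vmax - v)/tau: near zero only if a plateau
# was actually reached within the measured distance.
a_end = (vmax - v[-1]) / tau
print(vmax, tau, a_end)
```

On a truncated on-ice trial the fitted terminal acceleration stays well above zero, which is exactly the signature the paired t-test in the abstract detects.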
Procedia PDF Downloads 98
441 Arbuscular Mycorrhizal Symbiosis Modulates Antioxidant Capacity of in vitro Propagated Hyssop, Hyssopus officinalis L.
Authors: Maria P. Geneva, Ira V. Stancheva, Marieta G. Hristozkova, Roumiana D. Vasilevska-Ivanova, Mariana T. Sichanova, Janet R. Mincheva
Abstract:
Hyssopus officinalis L. (Lamiaceae), commonly called hyssop, is an aromatic, semi-evergreen, woody-based, shrubby perennial plant. Hyssop is a good expectorant and antiviral herb commonly used to treat respiratory conditions such as influenza, sinus infections, colds, and bronchitis. Most of its medicinal properties are attributed to the essential oil of hyssop. The study was conducted to evaluate the influence of inoculation with arbuscular mycorrhizal fungi on in vitro propagated hyssop plants with respect to: the activities of the antioxidant enzymes superoxide dismutase, catalase, guaiacol peroxidase and ascorbate peroxidase; the accumulation of the non-enzymatic antioxidants total phenols and flavonoids, and of water-soluble antioxidant metabolites expressed as ascorbic acid; and the antioxidant potential of hyssop methanol extracts of flowers and leaves, assessed by two common methods: free radical scavenging activity using the stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH•) and ferric reducing antioxidant power (FRAP). The in vitro plants successfully adapted to field conditions (survival rate 95%) were inoculated with the arbuscular mycorrhizal fungus Claroideoglomus claroideum (ref. EEZ 54). It was established that the activities of the enzymes with antioxidant capacity (superoxide dismutase, catalase, guaiacol peroxidase and ascorbate peroxidase) were significantly higher in leaves than in flowers in both control and mycorrhized plants. In the flowers and leaves of inoculated plants, the antioxidant enzyme activities were lower than in non-inoculated plants; only for SOD activity was there no difference. The content of low-molecular-weight metabolites with antioxidant capacity, such as total phenols, total flavonoids, and water-soluble antioxidants, was higher in inoculated plants. There were no significant differences between control and inoculated plants in either FRAP or DPPH antioxidant activity.
With respect to plant essential oil content, there was no difference between non-inoculated and inoculated plants. Based on our results, we suggest that the antioxidant capacity of in vitro propagated hyssop plants under these cultivation conditions is determined by the phenolic compounds (total phenols and flavonoids) as well as by the levels of water-soluble metabolites with antioxidant potential. Acknowledgments: This study was conducted with financial support from the National Science Fund at the Bulgarian Ministry of Education and Science, Project DN06/7 17.12.16.
Keywords: antioxidant enzymes, antioxidant metabolites, arbuscular mycorrhizal fungi, Hyssopus officinalis L.
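The two antioxidant assays named above reduce to simple arithmetic: DPPH scavenging is the relative drop in absorbance versus the control, and FRAP readings are converted through a linear Fe(II) standard curve. The absorbance values and calibration points below are invented for illustration; only the formulas are standard.

```python
import numpy as np

def dpph_inhibition(a_control, a_sample):
    """Radical scavenging activity (%) from DPPH absorbance readings:
    inhibition = (A0 - A) / A0 * 100."""
    return (a_control - a_sample) / a_control * 100.0

def frap_value(a_sample, std_conc, std_abs):
    """FRAP expressed against a FeSO4 standard curve via linear regression."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)
    return (a_sample - intercept) / slope

print(dpph_inhibition(0.80, 0.20))  # 75% scavenging for this toy reading

# Hypothetical FeSO4 calibration (mM vs absorbance) and an extract reading.
conc = np.array([0.1, 0.2, 0.4, 0.8])
absorb = np.array([0.10, 0.20, 0.40, 0.80])  # perfectly linear toy data
print(frap_value(0.55, conc, absorb))        # ~0.55 mM Fe(II) equivalents
```

Comparing these per-sample values between inoculated and control extracts is the comparison behind the "no significant differences" result reported for FRAP and DPPH.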
Procedia PDF Downloads 325
440 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is being accorded greater importance as capable artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamics (CFD) approach, using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. Three different rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds for water were 3.8 GPM, 4.3 GPM, and 6.1 GPM; for the oils tested they were 1.8 GPM, 2.5 GPM, and 3.8 GPM, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a diminution due to the viscosity.
Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
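The reported 2.5% GCI follows Roache's standard procedure: estimate the observed order of convergence from solutions on three systematically refined grids, then scale the fine-to-medium relative error by a safety factor. The solution values below are illustrative, not the study's mesh results.

```python
import math

def gci_fine(f1, f2, f3, r, fs=1.25):
    """Roache's Grid Convergence Index for the finest of three meshes.

    f1, f2, f3: solutions on fine, medium, coarse grids;
    r: grid refinement ratio; fs: safety factor (1.25 for three grids).
    """
    # Observed order of convergence from the three solutions.
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    e21 = abs((f2 - f1) / f1)          # fine-to-medium relative error
    return fs * e21 / (r ** p - 1), p

# Illustrative pressure-rise values on three successively coarser meshes.
gci, p = gci_fine(f1=1.000, f2=1.010, f3=1.030, r=2.0)
print(round(gci * 100, 2), p)  # GCI as a percent, and the observed order
```

A GCI of a few percent, as reported here, is the usual evidence that further mesh refinement would not materially change the computed pressure rise.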
Procedia PDF Downloads 127
439 Youth Participation in Peace Building and Development in Northern Uganda
Authors: Eric Awich Ochen
Abstract:
The end of the conflict in Northern Uganda in 2006 brought about an opportunity for the youth to return to their original homes and contribute to the peace building and development process of their communities. Post-conflict is used here to refer to the situation and activities following the armed conflict with the rebels of Joseph Kony in northern Uganda. While the rebels remain very much active in Sudan and the Central African Republic, in Uganda the last confrontations occurred around 2006 or earlier, and communities have returned to their homes and begun the process of rebuilding their lives. It is argued that socio-economic reconstruction is at the heart of peacebuilding and of the sustenance of positive peace in the aftermath of conflict, as it has a bearing on post-conflict stability and good governance. We recognize that several post-conflict interventions within Northern Uganda have targeted women and children, with a strong emphasis on family socio-economic empowerment and capacity building, including access to microfinance. The aim of this study was to examine the participation of the youth in post-conflict peace building and development in Northern Uganda by assessing the breadth and depth of their engagement and the stages of the programming cycle in which they are involved, interrogating the spaces for participation and how these facilitate or constrain participation. It was further aimed at examining the various dimensions of participation at play in Northern Uganda and where these fit within the conceptual debates on peace building and development in the region. Supporting young people emerging from protracted conflict to re-establish meaningful socio-economic engagements and livelihoods is fundamental to their participation in the affairs of the community. The study suggests that in the post-conflict development context of Northern Uganda, participation has rarely been disaggregated or differentiated by sector or group.
Where some disaggregation occurs, the main emphasis has always been on either women or children. It appears, therefore, that little meaningful space has been created for young people to engage and participate in peace building initiatives within the region. In other cases where some space is created for youth participation, this has been in pre-conceived programs or interventions designed by the development organizations, with young people only invited to participate at particular stages of the project implementation cycle. Even within the implementation of the intervention, the extent to which young people participate is bounded, with little power to influence the course of the interventions or make major decisions. It is thus evident that even here young people mainly validate and legitimize what are predetermined processes, acting only as pawns in the major chess games played by development actors (dominant peace building partners). This paper, therefore, concludes that the engagement of the youth in post-conflict peace building has been quite problematic and tokenistic and has not given youth adequate space within which they could ably participate and express themselves in the ensuing interventions.
Keywords: youth, conflict, peace building, participation
Procedia PDF Downloads 400
438 Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making
Authors: Thomas Rockstuhl, Kok Yee Ng, Guido Gianasso, Soon Ang
Abstract:
Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit the current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which to a large part involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition primed decision-making (RPD) research, the current study advances a cognitive processing model that highlights the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals' previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals' intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgments and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. Participants were 165 employees (46% female; with an average of 5 years of global work experience) from 42 countries who relocated from country offices to regional offices as part of a global restructuring program.
Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of employees' situation judgments and situation responses. Three months later, supervisors rated employees' global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment, and that situation judgment mediates the effect of intercultural conflict familiarity on situation response quality. Aggregated situation judgment and situation response quality also both predicted supervisor-rated global work performance. Theoretically, our findings highlight the important but under-explored role of familiarity with intercultural conflicts, marking a shift in attention from the general nature of international experience assessed in terms of the number and length of overseas assignments. Also, our cognitive approach premised on RPD theory offers a new theoretical lens for understanding the psychological mechanisms through which intercultural conflict familiarity affects global work performance.
Third, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace.
Keywords: intercultural conflict familiarity, job performance, judgment and decision making, situational judgment test
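The mediation logic of the hypotheses (familiarity drives situation judgment, which drives situation response) can be illustrated on synthetic data with a simple Baron-Kenny-style decomposition. The study itself used multilevel models with vignettes nested within employees; every number below is invented, and this single-level sketch only shows the structure of the indirect-effect estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic standardized scores: familiarity -> judgment -> response.
familiarity = rng.normal(size=n)
judgment = 0.6 * familiarity + rng.normal(scale=0.5, size=n)
response = 0.7 * judgment + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS slope of y on x (both centred)."""
    x = x - x.mean()
    return float((x * (y - y.mean())).sum() / (x * x).sum())

total_c = slope(familiarity, response)     # total effect (c path)
a_path = slope(familiarity, judgment)      # a path
# b path: judgment's effect on response with familiarity partialled out
# (Frisch-Waugh: regressing on the residualized mediator gives the
# multiple-regression coefficient).
resid_j = judgment - a_path * familiarity
b_path = slope(resid_j, response)
indirect = a_path * b_path                 # mediated (indirect) effect
print(total_c, a_path, b_path, indirect)
```

With these generating coefficients the indirect effect accounts for essentially all of the total effect, mirroring the full mediation pattern the hypotheses describe.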
Procedia PDF Downloads 178