Search results for: continuous monitoring tool
364 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly from the animal. Non-invasive samples (e.g., scats, feathers and hairs) are collected without handling the animal. Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping errors. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparison of their performance. A comparison of methods across datasets that differ in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two methods (Capwire and BayesN) were used for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The genotypes matched by ETLM showed almost no similarity with those matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although ETLM had the longest processing time and the least user-friendly interface of the compared methods. The population estimators performed differently across the datasets, and the estimators agreed for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, that is, different capture rates between individuals. In these examples, homogeneity of capture rates seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering the balance of processing time, interface and robustness. Heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
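To make the heterogeneity issue concrete, the short Python sketch below simulates a two-occasion capture-recapture survey and applies a simple Chapman estimator. It is not the Capwire or BayesN algorithm; the true population size, mean capture probability and the Beta distribution used to create unequal capture rates are all assumptions chosen only for illustration.
```python
# Toy simulation (not Capwire or BayesN): how heterogeneous capture rates
# bias a simple two-occasion Chapman capture-recapture estimator.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def chapman_estimate(caught1, caught2):
    """Chapman estimator of population size from two capture occasions."""
    n1, n2 = caught1.sum(), caught2.sum()
    m = np.logical_and(caught1, caught2).sum()  # individuals caught on both occasions
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def mean_estimate(true_n=200, mean_p=0.3, heterogeneous=False, reps=1000):
    estimates = []
    for _ in range(reps):
        if heterogeneous:
            # individual capture probabilities vary but keep the same mean
            p = rng.beta(1.0, 1.0 / mean_p - 1.0, size=true_n)
        else:
            p = np.full(true_n, mean_p)
        caught1 = rng.random(true_n) < p
        caught2 = rng.random(true_n) < p
        estimates.append(chapman_estimate(caught1, caught2))
    return float(np.mean(estimates))

print("true N = 200")
print("homogeneous capture rates:  ", round(mean_estimate(heterogeneous=False)))
print("heterogeneous capture rates:", round(mean_estimate(heterogeneous=True)))
```
Under the heterogeneous scenario the mean estimate typically falls well below the true population size, mirroring the sensitivity to unequal capture rates described above for estimators that do not account for them.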
Procedia PDF Downloads 144
363 The Role of Law in the Transformation of Collective Identities in Nigeria
Authors: Henry Okechukwu Onyeiwu
Abstract:
Nigeria, with its rich tapestry of ethnicities, cultures, and religions, serves as a critical case study in understanding how law influences and shapes collective identities. This abstract delves into the historical context of legal systems in Nigeria, examining the colonial legacies that have influenced contemporary laws and how these laws interact with traditional practices and beliefs. This study examines the critical role of law in shaping and transforming collective identities in Nigeria. The legal framework in Nigeria has evolved in response to historical, social, and political dynamics, influencing the way communities perceive themselves and interact with one another. This research highlights the interplay between law and collective identity, exploring how legal instruments, such as constitutions, statutes, and judicial rulings, have contributed to the formation, negotiation, and reformation of group identities over time. Moreover, contemporary legal debates surrounding issues such as citizenship, resource allocation, and communal conflicts further illustrate the law's role in identity formation. The legal recognition of different ethnic groups fosters a sense of belonging and collective identity among these groups, yet it simultaneously raises questions about inclusivity and equality. Laws concerning indigenous rights and affirmative action are essential in this discourse, as they reflect the necessity of balancing majority rule with minority rights—a challenge that Nigeria continues to navigate. By employing a multidisciplinary approach that integrates legal studies, sociology, and anthropology, the study analyses key historical milestones, such as colonial legal legacies, post-independence constitutional developments, and ongoing debates surrounding federalism and ethnic rights. It also investigates how laws affect social cohesion and conflict among Nigeria's diverse ethnic groups, as well as the role of law in promoting inclusivity and recognizing minority rights. Case studies are utilized to illustrate practical examples of legal transformations and their impact on collective identities in various Nigerian contexts, including land rights, religious freedoms, and ethnic representation in government. The findings reveal that while the law has the potential to unify disparate groups under a national identity, it can also exacerbate divisions when applied inequitably or in ways that favour particular groups over others. Ultimately, this study aims to shed light on the dual nature of law as both a tool for transformation and a potential source of conflict in the evolution of collective identities in Nigeria. By understanding these dynamics, policymakers and legal practitioners can develop strategies to foster unity and respect for diversity in a complex societal landscape.
Keywords: law, collective identity, Nigeria, ethnicity, conflict, inclusion, legal framework, transformation
Procedia PDF Downloads 28
362 Gender Differences in the Impact and Subjective Interpretation of Childhood Sexual Abuse Survivors
Authors: T. Borja-Alvarez, V. Jiménez-Borja, M. Jiménez Borja, C. J. Jiménez-Mosquera
Abstract:
Research on child sexual abuse has predominantly focused on female survivors. This has resulted in less research looking at the particular context in which this abuse takes place for boys and the impact this abuse may have on male survivors. The aim of this study is to examine the sex and age of the perpetrators of child sexual abuse and explore gender differences in the impact along with the subjective interpretation that survivors attribute to these experiences. The data for this study were obtained from Ecuadorian university students (M = 230, F = 293) who reported sexual abuse using the ISPCAN Child Abuse Screening Tool Retrospective version (ICAST-R). Participants completed Horowitz's Impact of Event Scale (IES) and were also requested to choose among neutral, positive, and negative adjectives to describe these experiences. The results indicate that in the case of males, perpetrators were both males (adults = 27%, peers = 20%, relatives = 10.3%, cousins = 7.4%) and young females (girlfriends or ex-girlfriends = 25.6%, neighborhood = 20.7%, school = 16.7%, cousins = 15.3%, strangers = 12.8%). In contrast, almost all females reported that adult males were the perpetrators (relatives = 29.6%, neighborhood = 11.9%, strangers = 19.9%, family friends = 9.7%). Regarding the impact of these events, significant gender differences emerged. More females (50%) than males (20%) presented symptoms of post-traumatic stress disorder (PTSD). Gender differences also surfaced in the way survivors interpret their experiences. Almost half of the male participants selected the word “consensual”, followed by the words “normal”, “helped me to mature”, “shameful”, “confusing”, and “traumatic”. In contrast, almost all females chose the word “non-consensual”, followed by the words “shameful”, “traumatic”, “scary”, and “confusing”. In conclusion, the findings of this study suggest that young females and adult males were the most common perpetrators of sexual abuse against boys, whereas adult males were the most common perpetrators of sexual abuse against girls. The impact and the subjective interpretation of these experiences were more negative for girls than for boys. The factors that account for the gender differences in the impact and the interpretation of these experiences need further exploration. It is likely that the cultural expectations of sexual behaviors for boys and girls in Latin American societies may partially explain the differential impact in the way these childhood sexual abuse experiences are interpreted in adulthood. In Ecuador, as is the case in other Latin American countries, the machismo culture not only accepts but encourages early sexual behaviors in boys and negatively judges premature sexual behavior in girls. The result of these different sexual expectations may be that sexually abused boys re-define these experiences as “consensual” and “normal” in adulthood, even though they were not consensual at the time of occurrence. Future studies are needed to understand more deeply the different contexts of sexual abuse for boys and girls in order to analyze the long-term impact of these experiences.
Keywords: abuse, child, gender differences, sexual
Procedia PDF Downloads 104
361 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology
Authors: Ashley L. Freeman, Jessica D. Watkins
Abstract:
TeleEmergency Medicine (TeleEM) is an innovative approach leveraging virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, providing healthcare professionals the capability to deliver expertise that closely mirrors in-person emergency medicine, transcending geographical boundaries. Through qualitative research, the extension of timely, high-quality care has proven to address the critical needs of patients in remote and underserved areas. TeleEM’s service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system routing charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated under the guidance of legal counsel and compliance authorities. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role. TeleCTAs collaborate closely with TeleEM Physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Not only has it advanced the delivery of emergency medicine care through virtual technology, supported by focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, but it has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.
Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine
Procedia PDF Downloads 84
360 Digital Advance Care Planning and Directives: Early Observations of Adoption Statistics and Responses from an All-Digital Consumer-Driven Approach
Authors: Robert L. Fine, Zhiyong Yang, Christy Spivey, Bonnie Boardman, Maureen Courtney
Abstract:
Importance: Barriers to traditional advance care planning (ACP) and advance directive (AD) creation have limited the promise of ACP/AD for individuals and families, the healthcare team, and society. Reengineering ACP by using a web-based, consumer-driven process has recently been suggested. We report early experience with such a process. Objective: To begin to analyze the potential of the creation and use of ACP/ADs generated by a consumer-friendly digital process by 1) assessing the likelihood that consumers would create ACP/ADs without structured intervention by medical or legal professionals, and 2) analyzing the responses to determine whether the plans can help doctors better understand a person’s goals, preferences, and priorities for their medical treatments and the naming of healthcare agents. Design: The authors chose 900 users of MyDirectives.com, a digital ACP/AD tool, solely on the basis of their state of residence in order to achieve proportional representation of all 50 states by population size, and then reviewed their responses, summarizing these through descriptive statistics including treatment preferences, demographics, and revision of preferences. Setting: General United States population. Participants: The 900 participants had an average age of 50.8 years (SD = 16.6); 84.3% of the men and 91% of the women were in self-reported good health when signing their ADs. Main measures: Preferences regarding the use of life-sustaining treatments, where to spend final days, consulting a supportive and palliative care team, attempted cardiopulmonary resuscitation (CPR), autopsy, and organ and tissue donation. Results: Nearly 85% of respondents prefer cessation of life-sustaining treatments during their final days, whenever those may be; 76% prefer to spend their final days at home or in a hospice facility; and 94% wanted their future doctors to consult a supportive and palliative care team. 70% would accept attempted CPR in certain limited circumstances. Most respondents would want an autopsy under certain conditions, and 62% would like to donate their organs. Conclusions and relevance: Analysis of early experience with an all-digital, web-based ACP/AD platform demonstrates that individuals from a wide range of ages and conditions can engage in an interrogatory process about values, goals, preferences, and priorities for their medical treatments by developing advance directives, and can easily make changes to the AD created. Online creation, storage, and retrieval of advance directives have the potential to remove barriers to ACP/AD and, thus, to further improve patient-centered end-of-life care.
Keywords: Advance Care Plan, Advance Decisions, Advance Directives, Consumer, Digital, End of Life Care, Goals, Living Wills, Preferences, Universal Advance Directive, Statements
Procedia PDF Downloads 327
359 Knowledge of the Doctors Regarding International Patient Safety Goal
Authors: Fatima Saeed, Abdullah Mudassar
Abstract:
Introduction: Patient safety remains a global priority in the ever-evolving healthcare landscape. At the forefront of this endeavor are the International Patient Safety Goals (IPSGs), a standardized framework designed to mitigate risks and elevate the quality of care. Doctors, positioned as primary caregivers, play a pivotal role in upholding and adhering to IPSGs, underscoring the critical significance of their knowledge and understanding of these goals. This research undertakes a comprehensive exploration of the depth of doctors' comprehension of IPSGs, aiming to uncover potential gaps and provide insights for targeted educational interventions. Established by influential healthcare bodies, including the World Health Organization (WHO), IPSGs represent a universally applicable set of objectives spanning crucial domains such as medication safety, infection control, surgical site safety, and patient identification. Adherence to these goals has been shown to substantially reduce adverse events and to enhance the overall quality of care. This study operates on the fundamental premise that an informed medical workforce is indispensable for effectively implementing IPSGs. A nuanced understanding of these goals empowers doctors to identify potential risks, advocate for necessary changes, and actively contribute to a safety-centric culture within healthcare institutions. Despite the acknowledged importance of IPSGs, there is a growing concern that doctors may lack the knowledge needed to integrate these goals into their practice seamlessly. Methodology: A comprehensive research methodology is described, covering study design, setting, duration, sample size determination, sampling technique, and data analysis. It introduces the philosophical framework guiding the research and details the materials, methods, and analysis framework. This descriptive quantitative cross-sectional study was conducted in teaching care hospitals using convenient sampling over six months. Data collection involved written informed consent and questionnaires, analyzed with SPSS version 23, with results presented graphically and descriptively. Result: The survey results reveal a substantial distribution across hospitals, with 34.52% of participants in MTIKTH and 65.48% in HMC MTI. There is a notable prevalence of patient safety incidents, emphasizing the significance of adherence to IPSGs. Positive trends are observed, including 77.0% affirming the "time-out" procedure, 81.6% acknowledging effective healthcare provider communication, and high recognition (82.7%) of the purpose of IPSGs to improve patient safety. While the survey reflects a good understanding of IPSGs, areas for improvement are identified, suggesting opportunities for targeted interventions. Discussion: The study underscores the need for tailored care approaches and highlights the bio-socio-cultural context of 'contagion,' suggesting areas for further research amid antimicrobial resistance. Shifting the focus to patient safety practices, the survey provides a detailed overview of results, emphasizing workplace distribution, patient safety incidents, and positive reflections on IPSGs. The findings indicate a positive trend in patient safety practices with areas for improvement, emphasizing the ongoing need to reinforce safety protocols and cultivate a safety-centric culture in healthcare.
Conclusion: In summary, the survey indicates a positive trend in patient safety practices and a good understanding of IPSGs among participants. However, the identification of areas for potential improvement suggests opportunities for targeted interventions to further enhance patient safety. Ongoing efforts to reinforce adherence to safety protocols, address identified gaps, and foster a safety culture will contribute to continuous improvements in patient care and outcomes.
Keywords: infection control, international patient safety, patient safety practices, proper medication
Procedia PDF Downloads 55
358 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on LI-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has seen a significant increase globally. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-Ion BESS) has been a promising technology to tackle the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate about whether it is feasible to have an electrical grid constantly functioning on a hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very low maintenance and are independent of geographical constraints while being easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy and thereby maintaining grid frequency at a rapid response rate. Moving BESS technology, as implemented in the selected railway network in Germany, is here considered as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, applications of Li-ion batteries such as self-consumption of wind and solar parks and their ancillary services, wind and solar energy storage during low demand, black start, island operation, and residential home storage offer a solution to effectively integrate renewables and support Europe’s future smart grid. The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding, i.e., automatic gradual disconnection of loads from the grid, and frequency-dependent electricity generation, i.e., automatic gradual connection of BESS to the grid, are used as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations.
Keywords: frequency control, grid stability, li-ion battery storage, smart grid
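As a rough illustration of the frequency-dependent connection of BESS described above, the following Python sketch implements a droop-style rule that maps the measured grid frequency to a charge/discharge set-point within a defined frequency band. It is not the DIgSILENT PowerFactory model used in the paper; the nominal frequency, deadband, activation band and battery rating are assumed values.
```python
# Illustrative droop-style frequency response for a BESS (assumed parameters).
NOMINAL_HZ = 50.0         # European grid nominal frequency
DEADBAND_HZ = 0.02        # no action inside +/- 20 mHz
FULL_ACTIVATION_HZ = 0.2  # full power at +/- 200 mHz deviation
BESS_RATED_MW = 10.0      # assumed battery rating

def bess_setpoint_mw(frequency_hz: float) -> float:
    """Return the BESS active-power set-point.

    Positive = discharge (support under-frequency),
    negative = charge (absorb over-frequency).
    """
    deviation = NOMINAL_HZ - frequency_hz
    if abs(deviation) <= DEADBAND_HZ:
        return 0.0
    # proportional (droop) response, saturated at the battery rating
    fraction = (abs(deviation) - DEADBAND_HZ) / (FULL_ACTIVATION_HZ - DEADBAND_HZ)
    fraction = min(fraction, 1.0)
    return BESS_RATED_MW * fraction * (1 if deviation > 0 else -1)

if __name__ == "__main__":
    for f in (49.78, 49.95, 50.00, 50.05, 50.25):
        print(f"{f:.2f} Hz -> {bess_setpoint_mw(f):+.2f} MW")
```
A symmetric rule like this lets the battery act as a controllable load during over-frequency and as controllable generation during under-frequency, which is the balancing role the paper assigns to stationary and moving BESS.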
Procedia PDF Downloads 152
357 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, in the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies in a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one, inspired by the national objectives on the development of the sector, promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuels production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, which appears to rely on Carbon Capture and Utilization technologies as a fundamental element for its accomplishment. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
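The core mechanism behind such scenario analysis, choosing a least-cost technology mix subject to demand and emission constraints, can be sketched as a toy linear program. The example below is not TEMOA-Italy; the technologies, costs, emission factors, demand split and CO2 cap are invented solely to show how a tighter decarbonization constraint pulls hydrogen into the hard-to-abate segment.
```python
# Toy linear program (NOT TEMOA-Italy): pick a least-cost technology mix under
# a demand requirement and a CO2 cap. Costs, emission factors, the demand
# split and the cap are invented for illustration only.
from scipy.optimize import linprog

# decision variables (TWh): gas_general, solar_general, gas_hard_to_abate, h2_hard_to_abate
cost = [50.0, 65.0, 55.0, 110.0]    # EUR/MWh (assumed)
emission = [0.35, 0.0, 0.40, 0.0]   # tCO2/MWh (assumed); TWh * tCO2/MWh = MtCO2
general_demand = 250.0              # TWh (assumed)
hard_to_abate_demand = 50.0         # TWh (assumed): only gas or hydrogen can serve it
co2_cap = 15.0                      # MtCO2, the decarbonization target (assumed)

res = linprog(
    c=cost,
    A_ub=[emission], b_ub=[co2_cap],              # total emissions under the cap
    A_eq=[[1, 1, 0, 0], [0, 0, 1, 1]],            # meet both demand segments exactly
    b_eq=[general_demand, hard_to_abate_demand],
    bounds=[(0, None)] * 4,
)

names = ["gas (general)", "solar (general)", "gas (hard-to-abate)", "H2 (hard-to-abate)"]
for name, x in zip(names, res.x):
    print(f"{name:22s} {x:6.1f} TWh")
print("minimum system cost:", round(res.fun, 1), "(arbitrary units)")
```
With the assumed 15 MtCO2 cap, the optimum covers part of the hard-to-abate demand with hydrogen even though it is the most expensive option, which is the qualitative behaviour the decarbonization scenarios are meant to reveal.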
Procedia PDF Downloads 75
356 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti
Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms
Abstract:
Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation and slope. Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised classification with a random forest classifier resulted in the prediction of 89% of the variation of OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are not actually mapped yet. With these results, this methodology could be adapted to any location to assist with preparing for future disastrous events and to ensure that essential geospatial information is available to support the response and recovery efforts during and following major disasters.
Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing
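The modelling step can be sketched as follows with synthetic data rather than the authors' Haiti dataset. Because the reported target (the variation in OSM building footprint area per cell) is continuous, the sketch uses a random forest regressor instead of a classifier; the feature names follow the predictors listed above, while the data values, the stand-in for mapped OSM area and the under-mapping threshold are assumptions.
```python
# Illustrative sketch: random forest on remote-sensing predictors to estimate
# OSM building-footprint area per grid cell and flag likely under-mapped cells.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_cells = 5000
X = pd.DataFrame({
    "nightlights": rng.gamma(2.0, 2.0, n_cells),   # VIIRS intensity
    "ndvi": rng.uniform(-0.1, 0.9, n_cells),
    "ndbi": rng.uniform(-0.5, 0.5, n_cells),
    "sar_texture": rng.normal(0.0, 1.0, n_cells),  # Sentinel-1 derived
    "slope": rng.uniform(0.0, 30.0, n_cells),
    "road_junctions": rng.poisson(3.0, n_cells),
})
# synthetic "true" footprint area loosely driven by built-up signals plus noise
y = (0.6 * X["nightlights"] + 4.0 * X["ndbi"].clip(0) + 0.3 * X["road_junctions"]
     - 0.05 * X["slope"] + rng.normal(0, 0.5, n_cells)).clip(0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out cells:", round(r2_score(y_test, model.predict(X_test)), 2))

# flag cells where predicted footprint area far exceeds what is currently mapped
mapped_osm_area = y_test * rng.uniform(0.2, 1.0, len(y_test))  # stand-in for OSM data
under_mapped = model.predict(X_test) > 2.0 * mapped_osm_area
print("cells flagged as likely under-mapped:", int(under_mapped.sum()))
```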
Procedia PDF Downloads 125
355 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region
Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho
Abstract:
The modeling of areas susceptible to soil loss by hydro-erosive processes is a simplified representation of reality, with the purpose of predicting future behavior from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste, in the south of the western Amazon, is motivated by soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing and deforestation, and by its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. One hundred sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, slope (declivity), aspect or orientation of the slope, curvature of the slope, compound topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and land surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, through the SPSS Statistics 26 software. Evaluation of the model will occur through the determination of the Cox & Snell R² and Nagelkerke R² values, the Hosmer and Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, ROC curve and cumulative gain according to the model specification. The validation of the synthesis map resulting from the models of potential risk of soil erosion will occur by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the classes of susceptibility to erosion using drone photogrammetry. Thus, it is expected to obtain a map of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, applying social resources more efficiently.
Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon
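To make the dichotomous modelling step concrete, the Python sketch below fits a binary logistic regression to synthetic presence/absence data following the 100 + 100 sampling design described above. The predictor values, effect sizes and probability cut-offs for the susceptibility classes are assumptions, and the support vector machine and artificial neural network comparisons from the study are not reproduced here.
```python
# Minimal sketch of the presence/absence susceptibility model, with synthetic
# values in place of the Colorado do Oeste field inventory. Predictor names
# follow the abstract; effect sizes, distributions and class cut-offs are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(7)
n = 100  # sampling units per class, as in the study design

def make_class(erosion: int) -> pd.DataFrame:
    shift = 1.0 if erosion else 0.0  # eroded sites get slightly "worse" values
    return pd.DataFrame({
        "slope_deg": rng.normal(8 + 6 * shift, 3, n),
        "ndvi": rng.normal(0.7 - 0.25 * shift, 0.1, n),
        "stream_power_index": rng.normal(2 + 1.5 * shift, 1, n),
        "erosivity": rng.normal(7000 + 800 * shift, 500, n),
        "erosion": erosion,
    })

data = pd.concat([make_class(1), make_class(0)], ignore_index=True)
X, y = data.drop(columns="erosion"), data["erosion"]

model = make_pipeline(StandardScaler(), LogisticRegression())
prob = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, prob), 2))
print("confusion matrix (0.5 cutoff):\n", confusion_matrix(y, (prob >= 0.5).astype(int)))

# map predicted probabilities onto the susceptibility classes used in the study
classes = np.select(
    [prob < 0.2, prob < 0.4, prob < 0.6, prob < 0.8],
    ["very low", "low", "moderate", "high"], default="very high")
print(pd.Series(classes).value_counts())
```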
Procedia PDF Downloads 68
354 A Clinical Audit on Screening Women with Subfertility Using Transvaginal Scan and Hysterosalpingo Contrast Sonography
Authors: Aarti M. Shetty, Estela Davoodi, Subrata Gangooly, Anita Rao-Coppisetty
Abstract:
Background: Testing the patency of the Fallopian tubes is one of several protocols for investigating subfertile couples. Both hysterosalpingogram (HSG) and laparoscopy and dye test have been used as tubal patency tests for several years, with well-known limitations. Hysterosalpingo contrast sonography (HyCoSy) can be used as an alternative tool to HSG to screen the patency of the Fallopian tubes, with the advantage of being non-ionising and of using transvaginal scanning to diagnose pelvic pathology. Aim: To determine the indications for and analyse the performance of transvaginal scan and HyCoSy in Broomfield Hospital. Methods: We retrospectively analysed the fertility workup of 282 women who attended the HyCoSy clinic at our institution from January 2015 to June 2016. An audit proforma was designed to aid data collection. Data were collected from patient notes and electronic records, and included patient demographics (age, parity), type of subfertility (primary or secondary), duration of subfertility, past medical history and baseline investigations (hormone profile and semen analysis). Findings of the transvaginal scan, HyCoSy and laparoscopy were also noted. Results: The most common indication for referral was as part of the primary fertility workup of couples who had failed to conceive despite intercourse for a year; other indications for referral were recurrent miscarriage, history of ectopic pregnancy, post reversal of sterilisation (vasectomy and tuboplasty), post gynaecological surgery (loop excision, cone biopsy) and amenorrhea. The basic fertility workup showed that 34% of the men had an abnormal semen analysis. HyCoSy was successfully completed in 270 (95%) women using ExEm foam and transvaginal scan. In these 270 patients, 535 tubes were examined in total; 495/535 (93%) tubes were reported as patent and 40/535 (7.5%) tubes were reported as blocked. A total of 17 (6.3%) patients required laparoscopy and dye test after HyCoSy. In these 17 patients, 32 tubes were examined under laparoscopy, and 21 tubes had findings similar to HyCoSy, a concordance rate of 65%. In addition, 41 patients had some form of pelvic pathology (endometrial polyp, fibroid, cervical polyp, bicornuate uterus) detected during the transvaginal scan and were referred for corrective surgery after attending the HyCoSy clinic. Conclusion: Our audit shows that HyCoSy and transvaginal scan can be a reliable screening test for low-risk women. Furthermore, HyCoSy has diagnostic accuracy competitive with HSG in identifying tubal patency, with the additional advantage of screening for pelvic pathology. With the addition of 3D scanning, pulsed Doppler and other non-invasive imaging modalities, HyCoSy may potentially replace laparoscopy and chromopertubation in the near future.
Keywords: hysterosalpingo contrast sonography (HyCoSy), transvaginal scan, tubal infertility, tubal patency test
Procedia PDF Downloads 251
353 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals such as the River Styles Framework, the Catchment Baseline Survey or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrological, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure allows the geomorphological quality of the rivers to be monitored and any alterations to be detected. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
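A hedged sketch of the classification idea follows: a linear discriminant analysis trained on the two variables named above, specific stream power and mean grain size, using synthetic control-point data rather than the 122 Galician sites. The geomorphological type names and value ranges are assumptions made for the example.
```python
# Illustrative discriminant classification of river reaches from two variables.
# Type names, typical values and spreads are invented for the sketch.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# three illustrative geomorphological types with typical specific stream power
# (W/m^2) and mean grain size (mm)
types = {
    "gravel-bed mountain reach": (120.0, 60.0),
    "sand-bed lowland reach":    (20.0,  0.8),
    "cobble-bed piedmont reach": (60.0, 120.0),
}
X, y = [], []
for name, (power, grain) in types.items():
    X.append(np.column_stack([
        rng.normal(power, power * 0.2, 40),
        rng.normal(grain, grain * 0.2, 40),
    ]))
    y += [name] * 40
X = np.vstack(X)

lda = LinearDiscriminantAnalysis().fit(X, y)

# a new control point is assigned to the type with the highest discriminant score
new_site = [[45.0, 15.0]]   # specific stream power, mean grain size (assumed)
print("predicted type:", lda.predict(new_site)[0])
print("class probabilities:", dict(zip(lda.classes_, lda.predict_proba(new_site)[0].round(2))))
```
Re-running the prediction on a monitored control point in later years would show whether its hydraulic behaviour has drifted away from its original type, which is the kind of alteration the annual checks are meant to detect.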
Procedia PDF Downloads 367
352 The Digital Divide: Examining the Use and Access to E-Health Based Technologies by Millennials and Older Adults
Authors: Delana Theiventhiran, Wally J. Bartfay
Abstract:
Background and Significance: As the Internet is becoming the epitome of modern communications, there are many pragmatic reasons why the digital divide matters in terms of accessing and using E-health based technologies. With the rise of technology usage globally, those in the older adult generation may not be as familiar and comfortable with technology usage and are thus put at a disadvantage compared to other generations such as millennials when examining and using E-health based platforms and technology. Currently, little is known about how older adults and millennials access and use e-health based technologies. Methods: A systematic review of the literature was undertaken employing the following three databases: (i) PubMed, (ii) ERIC, and (iii) CINAHL, employing the search term 'digital divide and generations' to identify potential articles. To extract required data from the studies, a data abstraction tool was created to obtain the following information: (a) author, (b) year of publication, (c) sample size, (d) country of origin, (e) design/methods, (f) major findings/outcomes obtained. Inclusion criteria included publication dates between Jan 2009 and Aug 2018, publication in the English language, target populations of older adults aged 65 and above and millennials, and peer-reviewed quantitative studies only. Major Findings: PubMed provided 505 potential articles, of which 23 met the inclusion criteria. ERIC provided 53 potential articles, of which none met the criteria following data extraction. CINAHL provided 14 potential articles, of which eight met the criteria following data extraction. Conclusion: Practically speaking, identifying how newer E-health based technologies can be integrated into society and identifying why there is a gap with digital technology will help reduce the impact on generations and individuals who are not as familiar with technology and Internet usage. The largest concern of all is how to prepare older adults for new and emerging E-health technologies. Currently, there is a dearth of literature in this area because it is a newer area of research and little is known about it. The benefits and consequences of technology being integrated into daily living are being investigated as a newer area of research. Several of the articles (N=11) indicated that age is one of the larger factors contributing to the digital divide. Similarly, several of the examined articles (N=5) identified privacy concerns as one of the main deterrents of technology usage for elderly individuals aged 65 and above. The older adult generation feels that privacy is one of the major concerns, especially in regard to how data are collected, used and possibly sold to third-party groups by various websites. Additionally, access to technology, the Internet, and infrastructure also plays a large part in the way that individuals are able to receive and use information. Lastly, a change in the way that healthcare is currently delivered, received and used would also help ensure that no generation is left behind in a technologically advanced society.
Keywords: digital divide, e-health, millennials, older adults
Procedia PDF Downloads 172
351 Revolution through Rhythm: Anti Caste and Subaltern Dimensions in Indian Rap
Authors: Nithin Raj Adithyan
Abstract:
Rap music is a popular genre that features strong beats and rhythmic words. It was created by American disc jockeys and urban Black performers in the late 1970s. Additionally, it carries on West African oral traditions that were introduced to the Americas by Africans who were held as slaves and have impacted the narrative and rhythmic components of rap. Initially introduced in India in the late 1990s as mere entertainment, rap lacked the politicized undertones that it developed in the United States. However, recent years have witnessed a transformation, with Indian rap evolving into a vital tool for marginalized communities—particularly Dalits, Muslims, and tribal groups—to voice grievances against historical injustices, systemic discrimination, and caste-based oppression. This paper examines Indian rap's evolution into a potent medium for subaltern resistance, drawing from its origins in the black ghettos of America, where rap emerged as social commentary and an anti-racist political voice. Historically, music has served as an essential medium for subaltern groups in India to assert their identities and reclaim agency. Indian rap, in its current form, amplifies this function by offering a compelling platform to address issues of caste oppression, socio-economic marginalization, and symbolic exclusion. This paper examines how contemporary Indian rappers, often from Dalit and lower-caste backgrounds, leverage their art to confront systemic injustices and amplify voices that have historically been silenced. By analyzing key artists and their lyrics, this paper highlights the ways in which rap critiques the pervasive caste system, challenges social hierarchies, and fosters a sense of identity and solidarity among subaltern groups. This study uses Gayatri Spivak’s concept of “strategic essentialism” to explore how Indian rap fosters shared subaltern identity, uniting voices across regional and cultural divides. By situating Indian rap within the global hip-hop movement, the paper highlights how it contributes a unique perspective to global narratives of resilience and resistance, adapting international influences to articulate local struggles. Ultimately, this research highlights Indian rap’s role as a catalyst for change, examining its critique of caste violence, economic marginalization, and social exclusion and demonstrating how it contributes to the anti-caste movement. Through a close reading of this subaltern dimension of rap, the paper illuminates how Indian rap fosters identity, solidarity, and resistance, affirming the genre’s potential as a transformative force within the global legacy of hip-hop as an expression of subaltern agency and social dissent.
Keywords: caste oppression, hip-hop/rap, resistance, subaltern
Procedia PDF Downloads 22
350 A Sustainability Benchmarking Framework Based on the Life Cycle Sustainability Assessment: The Case of the Italian Ceramic District
Authors: A. M. Ferrari, L. Volpi, M. Pini, C. Siligardi, F. E. Garcia Muina, D. Settembre Blundo
Abstract:
A long tradition of ceramic manufacturing since the 18th century, primarily due to the availability of raw materials and an efficient transport system, led to the birth and development of the Italian ceramic tile district, which nowadays represents a reference point for this sector even at the global level. This economic growth has been coupled with attention to environmental sustainability issues through various initiatives undertaken over the years at the level of the production sector, such as certification activities and sustainability policies. In this way, starting from an evaluation of sustainability in all its aspects, the present work aims to develop a benchmark that helps both producers and consumers. In the present study, through the Life Cycle Sustainability Assessment (LCSA) framework, sustainability has been assessed in all its dimensions: environmental with the Life Cycle Assessment (LCA), economic with the Life Cycle Costing (LCC) and social with the Social Life Cycle Assessment (S-LCA). The annual district production of stoneware tiles during the 2016 reference year has been taken as the reference flow for all three assessments, and the system boundaries cover the entire life cycle of the tiles, except for the LCC, for which only the production costs have been considered at the moment. In addition, a preliminary method for the evaluation of local and indoor emissions has been introduced in order to assess the impact of atmospheric emissions on both people living in the area surrounding the factories and workers. The Life Cycle Assessment results, obtained with the modified IMPACT 2002+ assessment method, highlight that the manufacturing process is responsible for the main impact, especially because of atmospheric emissions at a local scale, followed by the distribution to end users, the installation and the ordinary maintenance of the tiles. With regard to the economic evaluation, both internal and external costs have been considered. For the LCC, primary data from the analysis of the financial statements of Italian ceramic companies show that the highest cost items are expenses for goods and services and the cost of human resources. The analysis of externalities with the EPS 2015dx method attributes the main damages to the distribution and installation of the tiles. The social dimension has been investigated with a preliminary approach using the Social Hotspots Database, and the results indicate that the most affected damage categories are health and safety and labor rights and decent work. This study shows the potential of the LCSA framework applied to an industrial sector; in particular, it can be a useful tool for building a comprehensive benchmark for the sustainability of the ceramic industry, and it can help companies to actively integrate sustainability principles into their business models.
Keywords: benchmarking, Italian ceramic industry, life cycle sustainability assessment, porcelain stoneware tiles
Procedia PDF Downloads 128
349 General Evaluation of a Three-Year Holistic Physical Activity Interventions Program in Qatar Campuses: Step into Health (SIH) in Campuses 2013-2016
Authors: Daniela Salih Khidir, Mohamed G. Al Kuwari, Mercia V. Walt, Izzeldin J. Ibrahim
Abstract:
Background: University-based physical activity interventions aim to establish durable social patterns during the transition to adulthood. This study is a comprehensive evaluation of a three-year intervention-based program to build a culture of routine physical activity (PA) in the Qatar campus community using a holistic approach. Methodology: General assessment methods: formative evaluation (SIH in Campuses logic model design, stakeholder identification); process evaluation (analysis of members' step counts and a qualitative Appreciative Inquiry session using the 4-D model), with daily steps categorized as ≤5,000 inactive, 5,000-7,499 low active and ≥7,500 physically active; outcome evaluation (records of three years of interventions). Holistic PA intervention methods: walking interventions - pedometer distribution and walking competitions for students and staff; educational interventions - on-campus implementation of bilingual educational materials, lectures and videos on the role of PA in the prevention of non-communicable diseases (NCDs), online articles, and monthly email and SMS notifications for pedometer use; mass media campaign - radio advertising and yearly pre/post press releases; community stakeholder interventions - biyearly planning, reporting, achievement-rewarding and qualitative meetings, continuous follow-up communication, and biweekly step reports. Findings: Formative evaluation - the SIH in Campuses logic model identified the need for PA awareness and education within universities, resources, activities, health benefits and program continuity. Process evaluation - walking interventions: Phase 1: 5 universities recruited, 2,352 members, 3-month competition; Phase 2: 6 additional universities recruited, 1,328 additional members, 4-month competition; Phase 3: 4 additional universities recruited, 1,210 members, 6-month competition. Results of phases 1 and 2: 1,299 members eligible for analysis: 800 females (62%), 499 males (38%); 86% non-Qataris, 14% Qatari nationals; daily step count 5,681; age groups 18-24 years (n=841, 68%) students and 25-64 years (n=458, 35.3%) staff; 38% low active, 37% physically active and 25% inactive. The main Appreciative Inquiry themes engaging stakeholders were awareness/education, 5 points (100%), and competition, multiple levels of involvement in SIH, and community-based program/motivation, 4 points each (80%); the points represent the repetition of themes within stakeholders' discussions. Educational interventions: 2 videos implemented, 35,000 educational materials, 3 online articles, 11 lectures on the benefits of walking, and 40 email and SMS notifications. Community stakeholder interventions: 6 stakeholder meetings, 3 rewarding gatherings, 1 focus meeting, 40 individual reports and 18 overall reports. Mass media campaign: 1 radio campaign, 7 press releases and 52 campus newsletters. Outcome evaluation: overall in 2013-2016, the study used 1 logic model and 3 holistic PA interventions, partnered with 15 universities, registered 4,890 students and staff (aged 18-64 years), and engaged 30 campus stakeholders and 14 internal stakeholders. Total registered population: 61.5% female (2,999), 38.5% male (1,891); 20.2% (988) Qatari nationals, 79.8% (3,902) non-Qataris; 55.5% (2,710) students aged 18-25 years, 44.5% (2,180) staff aged 26-64 years. Overall campaign: 1,558 members eligible for analysis; daily step count 7,923; 37% low active, 43% physically active and 20% inactive. Conclusion: The study outcomes confirm program effectiveness and the engagement of the young campus community, particularly females, in PA. The authors recommend implementation of a holistic PA intervention program approach in Qatar aiming to impact the community at the national level for the achievement of PA guidelines in support of NCD prevention.
Keywords: campuses, evaluation, Qatar, step-count
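For reference, the step-count banding used in the evaluation can be expressed in a few lines of Python. The member averages below are assumed values rather than SIH data, and the boundary at exactly 5,000 steps, which the abstract assigns to both the inactive and the low-active bands, is resolved here to the inactive band.
```python
# Step-count categorization used in the evaluation (assumed example data).
def activity_category(daily_steps: int) -> str:
    if daily_steps >= 7500:
        return "physically active"
    if daily_steps > 5000:        # 5,000 or fewer counts as inactive here
        return "low active"
    return "inactive"

members = [4200, 5681, 7923, 9100, 3050, 7600]   # assumed daily averages
for category in ("inactive", "low active", "physically active"):
    share = sum(activity_category(s) == category for s in members) / len(members)
    print(f"{category:18s} {share:.0%}")
```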
Procedia PDF Downloads 312
348 The Practise of Hand Drawing as a Premier Form of Representation in Architectural Design Teaching: The Case of FAUP
Authors: Rafael Santos, Clara Pimenta Do Vale, Barbara Bogoni, Poul Henning Kirkegaard
Abstract:
In recent decades, the relevance of hand drawing has decreased in the scope of architectural education. However, some schools continue to recognize its decisive role, not only in architectural design teaching but in the whole of architectural training. This paper presents the results of research on the following problem: the practice of hand drawing as a premier form of representation in architectural design teaching. The research took as its object the educational model of the Faculty of Architecture of the University of Porto (FAUP) and was guided by three main objectives: to identify the circumstances that promoted hand drawing as a form of representation in FAUP's model; to characterize the types of hand drawing and their role in that model; and to determine the particularities of hand drawing as a premier form of representation in architectural design teaching. Methodologically, the research was conducted according to a qualitative embedded single-case study design. The object – i.e., the educational model – was approached in the FAUP case considering its Context and three embedded units of analysis: the educational Purposes, Principles and Practices. In order to guide the procedures of data collection and analysis, a Matrix for the Characterization (MCC) was developed. As a methodological tool, the MCC allowed the three embedded units of analysis to be related to the three main sources of evidence where the object manifests itself: the professors, expressing how the model is Assumed; the architectural design classes, expressing how the model is Achieved; and the students, expressing how the model is Acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal that the educational model of FAUP – following the model of the former Porto School – was largely built on the methodological foundations created through the hand drawing teaching-learning processes. In the absence of a culture of explicit theoretical elaboration or systematic research, hand drawing was the support for the continuity of the school, an expression of a unified thought about what the reflection and practice of architecture should be. As a form of representation, hand drawing plays a transversal role in the entire educational model, since its purposes are not limited to the conception of architectural design – it is also a means for perception, analysis and synthesis. Regarding architectural design teaching, there seems to be an understanding of three complementary dimensions of didactics: the instrumental, methodological and propositional dimensions. At FAUP, hand drawing is recognized as the common denominator among these dimensions, according to the idea of "globality of drawing". It is expected that the knowledge base developed in this research may make three main contributions: to contribute to the maintenance and valorisation of FAUP's model; through the precise description of the methodological procedures, to contribute by transferability to similar studies; and, through the critical and objective framing of the problem underlying hand drawing in architectural design teaching, to contribute to the broader discussion concerning contemporary challenges in architectural education.
Keywords: architectural design teaching, architectural education, forms of representation, hand drawing
Procedia PDF Downloads 132
347 The Effect of the Performance Evolution System on the Productivity of Administrating and a Case Study
Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin
Abstract:
In business enterprises run according to modern management principles, the most important issues are increasing the performance of employees and maximizing income. Over the twentieth century, the rapid development of the data processing and communication sectors, together with free-trade policies and the rise of multinational enterprises, removed economic borders and turned local competition into global competition. Under these competitive conditions, business enterprises have to operate actively and productively in order to survive. The employees of an enterprise are its most important factor of production. Therefore, enterprises that recognize the importance of the human factor in increasing profit use the performance evaluation system to promote the success and development of their employees. Performance evaluation aims to increase workforce productivity by deploying employees actively. Furthermore, this system supports the enterprise's wage policy, the setting of short- and long-term strategic plans, promotion decisions, the identification of employees' training needs, and decisions on dismissal and job rotation. It requires a great deal of effort to keep pace with change in the working world and to stay up to date. Getting quality from people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims to find out whether organizations use performance evaluation systems effectively, how much importance is attached to this issue, and how much the results of the evaluations affect employees. Whether organizations gain a competitive advantage and can sustain their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve it according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. In view of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation along the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between the two concepts. We also analyze the results of questionnaires administered to textile workers in the city of Edirne. The questions about the effects of performance evaluation on efficiency received positive answers. After factor analysis, efficiency and motivation, identified as factors of the performance evaluation system, account for the largest share of variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.
Keywords: performance, performance evaluation system, productivity, Edirne region
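As an illustration of how a variance figure such as the 19.703% reported above can be derived, the sketch below runs an exploratory factor analysis on hypothetical Likert-scale questionnaire data with scikit-learn. The sample size, number of items, and number of factors are assumptions for demonstration only, not the instrument actually used in the Edirne survey.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical questionnaire: 120 respondents answering 12 Likert-scale items (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 12)).astype(float)

# Standardize the items, then extract three rotated factors.
X = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0).fit(X)

# Share of total item variance attributable to each factor
# (sum of squared loadings divided by the number of items).
loadings = fa.components_.T
variance_share = (loadings ** 2).sum(axis=0) / X.shape[1]
for i, share in enumerate(variance_share, start=1):
    print(f"Factor {i}: {share:.1%} of total item variance")
```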
Procedia PDF Downloads 305
346 Navigate the Labyrinth of Leadership: Leaders’ Experiences in Saudi Higher Education
Authors: Laila Albughayl
Abstract:
The purpose of this qualitative case study was to explore Saudi female leaders' journeys as they navigate the labyrinth of leadership in higher education, in order to gain a better understanding of how these leaders overcame challenges and accessed support as they progressed through the labyrinth to top positions in Saudi higher education. The significance of this research derives from the premise that leaders need to acquire essential leadership competencies such as knowledge, skills, and practices to lead effectively through economic transformation, growing globalism, and rapidly developing technology in an increasingly diverse world. In addition, understanding Saudi women’s challenges in the labyrinth will encourage policymakers to improve the conditions under which these women work. The study draws on Eagly and Carli’s (2007) metaphor of the ‘labyrinth’, which encapsulates the winding paths, dead ends, and maze-like pathways, full of challenges and supports, that women traverse to access and maintain leadership positions. In this study, the ‘labyrinth’ was used as the conceptual framework to explore women leaders’ challenges and opportunities in leadership in Saudi higher education. A proposed model for efficient navigation of the labyrinth of leadership was used; this model focuses on knowledge, skills, and behaviours (KSB) as the analytical framework for examining responses to the research questions. The research was conducted using an interpretivist qualitative approach, with a case study as the methodology. Semi-structured interviews were the main data collection method, and purposive sampling was used to select ten Saudi leaders in three public universities. In coding, Braun and Clarke’s six-step framework of thematic analysis was used to identify, analyze, and report themes within the data, and NVivo software was used as a tool to assist with managing and organizing the data. The findings showed that the challenges identified by participants in navigating the labyrinth of leadership in Saudi higher education replicated some of those identified in the literature. They also revealed that organizational barriers in Saudi higher education were the top hindrance to women’s advancement in the labyrinth of leadership, followed by societal barriers, and that women’s paths in the labyrinth of leadership in higher education are still convoluted and tedious compared to those of their male counterparts. In addition, the findings revealed that Saudi women leaders use significant strategies to access leadership posts and navigate the labyrinth effectively; this was not indicated in the literature. The findings further revealed keys that assisted Saudi female leaders in navigating the labyrinth of leadership effectively: for example, spirituality (religion) was a powerful key that enabled Saudi women leaders to pursue and persist in their leadership paths. Based on participants' experiences, a compass for effective navigation of the labyrinth of leadership in higher education was created for current and aspiring Saudi women leaders to follow. Finally, the findings have several significant implications for practice, policy, theory, and future research.
Keywords: women, leadership, labyrinth, higher education
Procedia PDF Downloads 84
345 Working at the Interface of Health and Criminal Justice: An Interpretative Phenomenological Analysis Exploration of the Experiences of Liaison and Diversion Nurses – Emerging Findings
Authors: Sithandazile Masuku
Abstract:
Introduction: Public health approaches to offender mental health are driven by international policies and frameworks in response to the disproportionately large representation of people with mental health problems within the offender pathway compared to the general population. Public health service innovations include mental health courts in the US, restorative models in Singapore, and liaison and diversion services in Australia, the UK, and some other European countries. Mental health nurses are at the forefront of offender health service innovations. In the UK context, police custody has been identified as an early point within the offender pathway where nurses can improve outcomes by offering assessments and sharing information with criminal justice partners. This scope of nursing practice has introduced challenges related to the skills and support required by nurses working at the interface of health and the criminal justice system. Parallel literature exploring the experiences of nurses working in forensic settings suggests the presence of compassion fatigue, burnout and vicarious trauma, which may pose a risk of harm to the nurses in these settings. Published research explores mainly service-level outcomes, including the monitoring of figures indicative of a reduction in offending behavior. There is minimal research exploring the experiences of liaison and diversion nurses, who are situated away from a supportive clinical environment and engaged in complex autonomous decision-making. Aim: This paper will share qualitative findings (in progress) from a PhD study that aims to explore the experiences of liaison and diversion nurses in one service in the UK. Methodology: This is a qualitative interview study conducted using Interpretative Phenomenological Analysis to gain an in-depth analysis of lived experiences. Methods: A purposive sampling technique was used to recruit n=8 mental health nurses registered with the UK professional body, the Nursing and Midwifery Council, from one UK liaison and diversion service. All participants were interviewed online via video call using a semi-structured interview topic guide. Data were recorded and transcribed verbatim, then analysed using the seven steps of the Interpretative Phenomenological Analysis data analysis method. Emerging findings: Analysis to date has identified pertinent themes:
• Difficulties of meaning-making for nurses because of the complexity of their boundary-spanning role.
• The emotional burden experienced in a highly emotive and fast-changing environment.
• Stress and difficulties with role identity impacting on individual nurses’ ability to be resilient.
• Challenges to wellbeing related to a sense of isolation when making complex decisions.
Conclusion: Emerging findings have highlighted the lived experiences of nurses working in liaison and diversion as challenging. The nature of the custody environment has an impact on role identity and decision-making. Nurses left feeling isolated and unsupported are less resilient and may go on to experience compassion fatigue. The findings from this study thus far point to a need to connect nurses working in these boundary-spanning roles with a supportive infrastructure in which the complexity of their role is acknowledged and they can be connected with a health agenda. In doing this, the nurses would be protected from harm, and the likelihood of sustained positive outcomes for service users would be optimised.
Keywords: liaison and diversion, nurse experiences, offender health, staff wellbeing
Procedia PDF Downloads 137
344 Anti-Graft Instruments and Their Role in Curbing Corruption: Integrity Pact and Its Impact on Indian Procurement
Authors: Jot Prakash Kaur
Abstract:
The paper aims to show that, with the introduction of anti-graft instruments and the willingness of governments to implement them, a significant change can be witnessed in the anti-corruption landscape of any country. Over the past decade, anti-graft instruments have been introduced by several international non-governmental organizations with the vision of curbing corruption. Transparency International’s ‘Integrity Pact’ has been one such initiative. The Integrity Pact has been described as a tool for preventing corruption in public contracting. It has found particular relevance in a developing country like India, where public procurement constitutes 25-30 percent of Gross Domestic Product. Corruption in public procurement has been a cause of concern even though India has in place a whole architecture of rules and regulations governing public procurement. The Integrity Pact was first adopted by a leading government-owned oil and gas company in 2006. By May 2015, over ninety organizations had adopted the Integrity Pact, the majority of them central government units. The methodology used to understand the impact of the Integrity Pact on public procurement is the analysis of information received from the instrument's key stakeholders. From government, information was sought under the Right to Information Act 2005 about the adoption of this instrument by various government organizations and departments. From contractors, company websites and annual reports were used to find out the steps taken towards implementation of the Integrity Pact. From civil society, Transparency International India’s resource materials, which include publications and reports on the Integrity Pact, were used to understand its impact. The findings of the study include organizations adopting Integrity Pacts in all kinds of contracts, such that 90% of their procurements fall under the Integrity Pact. Indian state governments have found merit in the Integrity Pact and have adopted it in their procurement contracts. The Integrity Pact has been instrumental in building the brand image of companies. External Monitors, an essential feature of the Integrity Pact, have emerged as arbitrators for the bidders and are the first line of procurement auditors for the organizations. India has cancelled two defense contracts found to conflict with the provisions of the Integrity Pact. Some of the clauses of the Integrity Pact have been included in the proposed public procurement legislation. The Integrity Pact has slowly but steadily grown to become an integral part of big-ticket procurement in India. The government’s commitment to implementing the Integrity Pact has changed the way in which public procurement is conducted in India. Public procurement was a segment riddled with corruption, but with the adoption of the Integrity Pact a number of clean-up measures have been carried out to make procurement transparent. The paper is divided into five sections. The first section elaborates on the Integrity Pact. The second discusses the instrument's stakeholders and the roles they play in its implementation. The third discusses the efforts taken by the government to implement the Integrity Pact in India. The fourth discusses the role of the External Monitor as arbitrator. The final section puts forth suggestions to strengthen the existing form of the Integrity Pact and increase its reach.
Keywords: corruption, integrity pact, procurement, vigilance
Procedia PDF Downloads 342
343 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves as a fluid and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, due to the fire's behaviour, into the FEA solver over a series of iterations. In our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a reasonable degree of accuracy. The test demonstrated the fire performance of the Tata Steel U.K. Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). This approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
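The staggered exchange at the heart of such a two-way coupling can be sketched schematically. The snippet below is not the FDS-2-Abaqus code itself: the gas-phase and structural responses are replaced by placeholder functions, and only the iteration pattern (fire/CFD step, thermal hand-off, FEA step, geometry feedback) mirrors the methodology described above.

```python
import numpy as np

def cfd_step(t_end, deflection):
    """Toy gas-temperature history; a gap opened by panel deflection admits more hot gas."""
    t = np.linspace(0.0, t_end, 50)
    return 20.0 + (800.0 + 2000.0 * deflection) * (1.0 - np.exp(-t / 600.0))

def fea_step(gas_temperature):
    """Toy structural response: deflection grows with the mean gas temperature."""
    mean_t = float(np.mean(gas_temperature))
    return 1.0e-4 * max(mean_t - 20.0, 0.0)   # metres, purely illustrative

# Staggered two-way coupling loop: each interval re-runs the fire model with the
# geometry returned by the structural model from the previous interval.
deflection = 0.0
for step in range(1, 6):                      # five coupling intervals of 120 s
    gas_t = cfd_step(120.0 * step, deflection)
    deflection = fea_step(gas_t)
    print(f"coupling step {step}: peak gas T = {gas_t[-1]:6.1f} C, "
          f"deflection = {deflection * 1e3:5.2f} mm")
```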
Procedia PDF Downloads 79
342 The Sustained Utility of Japan's Human Security Policy
Authors: Maria Thaemar Tana
Abstract:
The paper examines the policy and practice of Japan’s human security. Specifically, it asks: how does Japan’s shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda, and, as a corollary, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework. In addition, the paper aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of the recent developments in Japan’s security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand and actual foreign policy on the other. State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders’ perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Thus, applied to this study, Japan’s sustained use of human security as a foreign policy instrument (the dependent variable) is essentially a result of systemic pressures acting indirectly (the independent variables) and domestic processes acting directly (the intervening variables). Two cases of Japan’s human security practice in two regions are examined over two time periods: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan’s decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels will be examined to understand how the interaction among them results in changes and continuities in Japan’s human security policy.
Keywords: human security, foreign policy, neoclassical realism, peace-building
Procedia PDF Downloads 135
341 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator
Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani
Abstract:
During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of an explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that would be caused by such an explosion, risk decision-makers often use quantitative risk analysis (QRA). This analysis is a rigorous and advanced approach that requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as geometry effects. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears to be a solution to the limitations of the empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model. Fireball characteristics (diameter, height, heat flux and lifetime) taken from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various steps of the BLEVE phenomenon from ignition up to total burnout. The influence of release parameters, such as the injection rate and the radiative fraction, on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with the BAM experimental data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company located in the Hassi R’Mel gas field (the largest gas field in Algeria).
Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA
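For context on the empirical correlations mentioned above, the sketch below evaluates commonly cited CCPS/TNO-type fireball relations together with a simple point-source radiation model. The coefficients vary slightly between sources, and the propane mass, receiver distance, heat of combustion, and radiative fraction used here are assumed values, not figures from the Hassi R'Mel study.

```python
import math

def fireball_correlations(mass_kg, distance_m, radiative_fraction=0.3,
                          heat_of_combustion=46.0e6):
    """CCPS/TNO-type BLEVE fireball correlations (indicative coefficients only)."""
    d_max = 5.8 * mass_kg ** (1.0 / 3.0)                         # max diameter [m]
    duration = (0.45 * mass_kg ** (1.0 / 3.0) if mass_kg < 3.0e4
                else 2.6 * mass_kg ** (1.0 / 6.0))               # lifetime [s]
    height = 0.75 * d_max                                        # fireball centre height [m]
    # Point-source radiation at a ground-level receiver, atmospheric transmissivity ~ 1.
    slant_range = math.hypot(distance_m, height)
    flux = (radiative_fraction * mass_kg * heat_of_combustion
            / (4.0 * math.pi * slant_range ** 2 * duration))     # [W/m2]
    return d_max, duration, flux

d, t, q = fireball_correlations(mass_kg=20_000.0, distance_m=150.0)
print(f"D_max = {d:.0f} m, lifetime = {t:.1f} s, incident flux = {q / 1000.0:.1f} kW/m2")
```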
Procedia PDF Downloads 186
340 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture
Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou
Abstract:
Taking the whole life cycle of a product into consideration is now an important step in the eco-design of a product or technology. Life cycle assessment (LCA) is a standard tool to evaluate the environmental impacts of a system or a process. Despite the improvement in refrigerant regulation through protocols, the environmental damage caused by refrigeration systems remains significant and needs to be reduced. In this paper, the environmental impacts of refrigeration systems in a typical supermarket are compared using the LCA methodology under different conditions. The system provides cooling at two temperature levels, medium and low, over a service life of 15 years. The most commonly used architectures of supermarket cold production systems are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. The variation in the power needed during seasonal changes and during the supermarket's daily opening/closing periods is considered. R134a as the primary refrigerant and two types of secondary fluids are considered. The composition of each system and the leakage rate of the refrigerant through its life cycle are taken from the literature and industrial data. Twelve scenarios are examined, based on the variation of three parameters: (1) location: France (Paris), Spain (Toledo) and Sweden (Stockholm); (2) source of electricity: photovoltaic panels or the low-voltage electric network; and (3) architecture: direct or indirect refrigeration systems. The OpenLCA and SimaPro software packages and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electricity consumption to environmental damage compared to the impacts of refrigerant leakage. The secondary loop lowers the refrigerant charge in the primary loop, which results in a decrease in the climate change indicators compared to the centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries has been noticed, mostly due to differences in electricity production. In Spain, using photovoltaic panels efficiently reduces the environmental impacts and the related costs; this scenario is the best of the alternatives considered. Sweden is a country with lower environmental impacts. For both France and Sweden, the use of photovoltaic panels does not make a significant difference, owing to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigeration systems, and a brief overview is presented.
Keywords: eco-design, industrial engineering, LCA, refrigeration system
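To give a feel for how the direct (leakage) and indirect (electricity) contributions compare, a TEWI-style screening calculation is sketched below. It is not the CML/LCA workflow run in OpenLCA or SimaPro, and the charges, leak rates, consumptions, and grid emission factors are illustrative placeholders rather than the study's inventory data.

```python
GWP_R134A = 1430.0   # kg CO2-eq per kg of R134a (100-year horizon)

def climate_impact(charge_kg, annual_leak_rate, annual_kwh, grid_factor, years=15):
    """Direct (refrigerant leakage) and indirect (electricity) emissions over the service life."""
    direct = charge_kg * annual_leak_rate * years * GWP_R134A
    indirect = annual_kwh * grid_factor * years
    return direct, indirect

scenarios = {
    # name: (charge kg, leak rate per year, electricity kWh/year, grid kg CO2-eq/kWh)
    "direct expansion, grid": (300.0, 0.15, 250_000.0, 0.06),
    "indirect loop, grid":    (120.0, 0.15, 280_000.0, 0.06),
    "indirect loop, PV":      (120.0, 0.15, 280_000.0, 0.03),
}
for name, args in scenarios.items():
    direct, indirect = climate_impact(*args)
    print(f"{name:24s} direct {direct / 1e3:6.0f} t CO2-eq   indirect {indirect / 1e3:6.0f} t CO2-eq")
```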
Procedia PDF Downloads 191
339 Optimal Pressure Control and Burst Detection for Sustainable Water Management
Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana
Abstract:
Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing pressure reducing valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the benchmark Water Distribution System (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDS of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%. These reductions translate into significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, utilizing Cumulative Sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences. To enhance precision, burst event localization is achieved through meticulous analysis of time differentials in the arrival of negative pressure waveforms across distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm's success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and the concomitant financial implications. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control, pump speed, burst event detection, and localization. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.
Keywords: pressure reducing valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (Cumulative Sum), water pipeline monitoring
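A minimal version of the CUSUM step used to flag a sustained pressure drop can be sketched as follows. The synthetic pressure trace, the drift value, and the alarm threshold are illustrative assumptions, not the parameters tuned for the Kapra network or the PVC test bed.

```python
import numpy as np

# Synthetic pressure signal (metres head, one sample per second) with a burst-like
# step drop introduced at t = 400 s.
rng = np.random.default_rng(1)
pressure = 30.0 + rng.normal(0.0, 0.15, 600)
pressure[400:] -= 1.2

baseline = pressure[:300].mean()        # reference level from the pre-burst window
drift, threshold = 0.3, 2.0             # CUSUM tuning parameters (illustrative)

# One-sided lower CUSUM: accumulate negative deviations and raise an alarm once
# they exceed the threshold; the drift term absorbs normal measurement noise.
s_neg, alarm_at = 0.0, None
for i, p in enumerate(pressure):
    s_neg = min(0.0, s_neg + (p - baseline) + drift)
    if s_neg < -threshold:
        alarm_at = i
        break

print("burst alarm raised at sample:", alarm_at)
```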
Procedia PDF Downloads 88
338 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
Giving computers the capacity to see and interpret their surroundings is known as artificial vision. Artificial vision is a branch of artificial intelligence that allows information, especially information contained in digital images, to be acquired, processed, and analyzed. Artificial vision is currently used in manufacturing for quality control and production, since these processes can be carried out through counting, positioning, and object-recognition algorithms using a single camera (or more). Companies also use assembly lines formed by conveyor systems with actuators that move pieces from one location to another during production; these devices must be programmed in advance with a logic routine to perform well. Nowadays, production is the main target of every industry, together with quality and the rapid execution of the different stages and processes in the production chain of any product or service being offered. The core of this project is to program a computer to recognize geometric figures (circle, square, and triangle) through a camera, each figure with a different color, and to link it with a group of conveyor systems that sort the figures into cubicles, which also differ from one another by color. Since this project is based on artificial vision, the methodology must be rigorous; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator linked with the OpenCV libraries. Together, these tools are used to build the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer’s web camera or by a specialized camera. 1.3 RGB colors are recognized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes, the images must be segmented: the first step is converting the image from RGB to grayscale in order to work with the dark tones of the image; the image is then binarized, which means the figure appears in white on a black background; finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program communicates with the conveyor systems, which, through the actuators, sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics that can feed any subsequent process. With the program developed for this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
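A compact sketch of the pipeline described in steps 1.2-1.4 is given below, written in Python with the OpenCV bindings (the project itself uses Qt Creator with the C++ OpenCV libraries). The contour-area filter, the polygon-approximation tolerance, and the synthetic test frame are illustrative assumptions.

```python
import cv2
import numpy as np

def classify(frame_bgr):
    """Return (shape, color) for each large figure found in a BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    results = []
    for c in contours:
        if cv2.contourArea(c) < 500:                               # skip noise
            continue
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        shape = {3: "triangle", 4: "square"}.get(len(approx), "circle")
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)
        b, g, r = cv2.mean(frame_bgr, mask=mask)[:3]               # mean color inside the figure
        color = ("red", "green", "blue")[int(np.argmax((r, g, b)))]
        results.append((shape, color))
    return results

# Synthetic test frame: a red triangle on a white background.
frame = np.full((200, 200, 3), 255, dtype=np.uint8)
pts = np.array([[100, 30], [30, 170], [170, 170]], dtype=np.int32)
cv2.fillPoly(frame, [pts], (0, 0, 255))
print(classify(frame))   # expected: [('triangle', 'red')]
```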
Procedia PDF Downloads 380
337 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, during which words that were frequently mis-transcribed were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor – a form of natural language processing in which significant words in a document are selected – was run on each paragraph for all interviews. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformers (BART) summarization model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation – any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general); if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to that trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: BART model, keyword extractor, natural language processing, qualitative coding
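A minimal sketch of the keyword-plus-summarization pass is shown below. The abstract names BART but not a specific checkpoint or keyword extractor, so the spaCy proper-noun extraction and the facebook/bart-large-cnn checkpoint used here are assumptions, and both models have to be downloaded separately before the code will run.

```python
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")                     # assumes the spaCy model is installed
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def process(transcript: str):
    """Split into speaker paragraphs, drop short banter, then summarize paragraphs
    that mention a proper noun extracted from the interview."""
    paragraphs = [p.strip() for p in transcript.split("\n\n") if len(p.strip()) >= 300]
    proper_nouns = {tok.text for p in paragraphs for tok in nlp(p) if tok.pos_ == "PROPN"}
    summaries = []
    for index, paragraph in enumerate(paragraphs):
        if any(name in paragraph for name in proper_nouns):
            result = summarizer(paragraph, max_length=60, min_length=15,
                                do_sample=False, truncation=True)
            summaries.append((index, result[0]["summary_text"]))
    return proper_nouns, summaries

# Hypothetical usage (file name is illustrative):
# keywords, hits = process(open("interview_01.txt", encoding="utf-8").read())
```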
Procedia PDF Downloads 31
336 A Self-Heating Gas Sensor of SnO2-Based Nanoparticles Electrophoretic Deposited
Authors: Glauco M. M. M. Lustosa, João Paulo C. Costa, Sonia M. Zanetti, Mario Cilense, Leinig Antônio Perazolli, Maria Aparecida Zaghete
Abstract:
The contamination of the environment has been one of the biggest problems of our time, largely due to the development of many industries. SnO2 is an n-type semiconductor with a band gap of about 3.5 eV whose electrical conductivity depends on the type and amount of modifier agents added to the ceramic matrix during synthesis, allowing applications such as the sensing of gaseous pollutants in ambient air. The chemical synthesis by the polymeric precursor method consists of a complexation reaction between tin ions and citric acid at 90 °C for 2 hours, followed by the addition of ethylene glycol for polymerization at 130 °C for 2 hours. Polymeric resins of zinc, cobalt and niobium ions were also prepared. Stoichiometric amounts of the solutions were mixed to obtain the systems (Zn, Nb)-SnO2 and (Co, Nb)-SnO2. The immobilization of the metals reduces their segregation during calcination, resulting in a crystalline oxide with high chemical homogeneity. The resin was pre-calcined at 300 °C for 1 hour, milled in an attritor mill at 500 rpm for 1 hour, and then calcined at 600 °C for 2 hours. X-ray diffraction (XRD) indicated the formation of the SnO2 rutile phase (JCPDS card no. 41-1445). Characterization by high-resolution scanning electron microscopy showed a nanostructured spherical ceramic powder with diameters of 10-20 nm. 20 mg of SnO2-based powder was kept in 20 ml of isopropyl alcohol and then taken to an electrophoretic deposition (EPD) system. The EPD method allows the film thickness to be controlled through the voltage or current applied to the electrophoretic cell and the time used for deposition of the ceramic particles. This procedure produces films in a short time and at low cost, bringing prospects for a new generation of smaller devices with easy technology integration. In this research, films were obtained on an alumina substrate with interdigitated electrodes by applying 2 kV for 5 and 10 minutes in cells containing the alcoholic suspensions of (Zn, Nb)-SnO2 and (Co, Nb)-SnO2 powders, forming a sensing layer. The substrate has integrated micro-hotplates that provide instantaneous and precise temperature control when a voltage is applied. The films were sintered at 900 and 1000 °C in a 770 W microwave oven adapted by the research group itself with a temperature controller. This sintering is a fast process with a homogeneous heating rate, which promotes controlled grain growth and the diffusion of the modifier agents, inducing intrinsic defects that change the electrical characteristics of the SnO2-based powders. This study has successfully demonstrated a microfabricated system with an integrated micro-hotplate for the detection of CO and NO2 at different concentrations and temperatures, with self-heating SnO2-based nanoparticle films, suitable both for industrial process monitoring and for the detection of low concentrations in buildings and residences in order to safeguard human health. The results indicate the possibility of developing low-power gas sensor devices with fast analysis for integration into portable electronic equipment. Acknowledgments: the authors thank LMA-IQ for providing the FEG-SEM images, and the Brazilian research funding agencies CNPq and FAPESP (2014/11314-9 and CEPID/CDMF FAPESP 2013/07296-2) for the financial support of this project.
Keywords: chemical synthesis, electrophoretic deposition, self-heating, gas sensor
Procedia PDF Downloads 276
335 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany
Authors: Michael Mederle, Heinz Bernhardt
Abstract:
The management of machine fleets and autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g. harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, offer considerable optimization potential. Organization and pre-planning make these efficiency reserves accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery in particular requires precise specifications of infield logistics in order to navigate effectively and operate in a process-optimized way in the fields, individually or in machine complexes. In the past, a lot of theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route planning tools. To make computational optimization more useful for farmers, this study focuses on these influencing factors through expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will become possible. The survey study is intended to cover the entire range of German agriculture. Rural mixed farms with simple technical equipment are considered, as well as large agricultural cooperatives that farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly align their infield logistics with direction-giving obstacles such as power lines. As a consequence, they can avoid inefficient boom flips while applying plant protection with the sprayer. Livestock farmers, by contrast, focus on the application of organic manure, with its specific requirements concerning road conditions, terrain, or field access points. The cultivation of sugar beets places great demands on infield patterns because of particularities such as the row-crop system and high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether or not they are of the same type. Specific infield strategies are always based on the interaction of several different influences and decision criteria. Single working steps like tillage, seeding, plant protection, or harvest mostly cannot be considered individually; the entire production process has to be taken into account to determine the right infield logistics. One long-term objective of this examination is to integrate the identified influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increased process efficiency.
Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing
Procedia PDF Downloads 233