Search results for: power technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12862

1552 Argos System: Improvements and Future of the Constellation

Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard

Abstract:

Argos is the main satellite telemetry system used by the wildlife research community since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologist community has contributed substantially to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), will be extended in the coming years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP-SG-B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking that allows very low-power transmitters (50 to 100 mW) with very low data rates (124 bps), and enhancements of high data rates (1200-4800 bps) and downlink performance, all contributing to an enhanced system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, in the context of a miniaturization trend in the space industry intended to reduce costs and multiply satellites to serve ever more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos For Next Generations (Argos4NG), is on track, and the system will be operational in 2022.
Based on Argos 4 and benefiting from the feedback of the ANGELS project, this constellation will allow an average revisit time of fewer than 20 minutes between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will be an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, which allows recovery of Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.

Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services

Procedia PDF Downloads 154
1551 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD. Defect-selective etching (DSE) is conducted with melted KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast on the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF's type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV, respectively. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced by the off-cut and by the TSD on the surface are both suggested to be two C-Si bilayers in height.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. Firstly, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, and an IGSF is generated. Secondly, the step-flow grows over the IGSF successively and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.

Keywords: cathodoluminescence, defect-selective etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 296
1550 Effect of Nanoparticles on Wheat Seed Germination and Seedling Growth

Authors: Pankaj Singh Rawat, Rajeew Kumar, Pradeep Ram, Priyanka Pandey

Abstract:

Wheat is an important cereal crop for food security. Boosting wheat production and productivity is a major challenge across the nation. Good-quality seed is required for maintaining an optimum plant stand, which ultimately increases grain yield. Ensuring good germination is one of the key steps to a proper plant stand, and moisture assurance during seed germination may help to speed up germination. The tiny size of nanoparticles may help water enter the seed without disturbing its internal structure. Considering the above, a laboratory experiment was conducted during 2012-13 at G.B. Pant University of Agriculture and Technology, Pantnagar, India. A completely randomized design was used for statistical analysis. The experiment was conducted in two phases. In the first phase, the appropriate concentration of nanoparticles for seed treatment was screened. In the second phase, the seed-soaking duration with nanoparticles for better seed germination was standardized. Wheat variety UP2526 was taken as the test crop. Four nanoparticles (TiO2, ZnO, nickel and chitosan) were taken for the study. The germination studies were done in petri dishes following standard procedures, and standard package and practices were used to raise the seedlings. In the first phase of the experiment, seeds were treated with 50 and 300 ppm of nanoparticles, and a control was maintained for comparison. In the second phase, seeds were soaked for 4, 6 and 8 hours with 50 ppm nanoparticles of TiO2, ZnO, nickel and chitosan, along with a control treatment, to identify the soaking time for better seed germination. The experiment revealed that the application of nanoparticles helps enhance seed germination.
The study revealed that seed treatment with nanoparticles at 50 ppm concentration increases root length, shoot length, seedling length, shoot dry weight, seedling dry weight, seedling vigour index I and seedling vigour index II as compared to seed soaking at 300 ppm concentration. The experiment also showed that seed soaking for 4 hours was better than for 6 or 8 hours. Seed soaking with nanoparticles, especially TiO2, ZnO, and chitosan, proved to enhance germination and the seedling growth indices of wheat.
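The seedling vigour indices mentioned above are conventionally computed with the Abdul-Baki and Anderson formulas: vigour index I multiplies germination percentage by seedling length, and vigour index II multiplies it by seedling dry weight. A minimal sketch follows; the numeric inputs are hypothetical illustrations, not values from this study:

```python
def vigour_indices(germination_pct, seedling_length_cm, seedling_dry_wt_g):
    """Abdul-Baki & Anderson seedling vigour indices.

    Index I  = germination % x mean seedling length (cm)
    Index II = germination % x mean seedling dry weight (g)
    """
    svi_1 = germination_pct * seedling_length_cm
    svi_2 = germination_pct * seedling_dry_wt_g
    return svi_1, svi_2

# Hypothetical measurements for a 50 ppm-treated seed lot (illustrative only)
svi_1, svi_2 = vigour_indices(92.0, 14.5, 0.038)
```

Comparing such indices between the 50 ppm and 300 ppm lots is what supports the paper's ranking of treatments.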

Keywords: nanoparticles, seed germination, seed soaking, wheat

Procedia PDF Downloads 207
1549 To Identify the Importance of Telemedicine in Diabetes and Its Impact on HbA1c

Authors: Sania Bashir

Abstract:

A promising approach to healthcare delivery, telemedicine makes use of communication technology to reach remote regions of the world, allowing beneficial interactions between diabetic patients and healthcare professionals and the provision of affordable, easily accessible medical care. The emergence of contemporary care models, fueled by the pervasiveness of mobile devices, provides better information and offers low cost with the best possible outcomes; this is known as digital health, and it involves the integration of collected data using software and apps. The goal of this study is to assess how well telemedicine works for diabetic patients and how it impacts their HbA1c levels. A questionnaire-based survey of 300 diabetics included 150 patients in each of two groups: usual care and telemedicine. A descriptive, observational study was conducted from September 2021 to May 2022. HbA1c was gathered for both groups every three months. A remote monitoring tool was used to assess the efficacy of telemedicine and continuing therapy in place of the customary three-monthly in-person consultations. The patients were 42.3 ± 18.3 years old on average. Women (172, 57.3% of the total) outnumbered men (128). 200 patients (66.7%) had type 2 diabetes, compared to 100 (33.3%) with type 1. Although the average baseline BMI was within normal ranges at 23.4 kg/m², the mean baseline HbA1c (9.45 ± 1.20) indicates that glycemic treatment was not well controlled at the time of registration. While patients who used telemedicine experienced a mean percentage change in HbA1c of 10.5, those who visited the clinic experienced a mean percentage change of 3.9. Changes in HbA1c depend on several factors, including improvements in BMI (61%) after 9 months of the study and compliance with healthy lifestyle recommendations for diet and activity.
Greater compliance was achieved by the telemedicine group. It is an undeniable reality that patient-physician communication is crucial for enhancing health outcomes and avoiding long-term complications. Telemedicine has shown its value in the management of diabetes and holds promise as a novel technique for improved clinician-patient communication in the twenty-first century.
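The mean percentage changes reported above follow the usual baseline-relative formula for HbA1c reduction. A short sketch; the baseline of 9.45% mirrors the reported cohort mean, while the follow-up value is a hypothetical single patient chosen only to illustrate a change near the reported 10.5%:

```python
def hba1c_pct_change(baseline, follow_up):
    """Baseline-relative percentage reduction in HbA1c."""
    return (baseline - follow_up) / baseline * 100.0

# Hypothetical patient: cohort-mean baseline 9.45%, illustrative follow-up 8.46%
change = hba1c_pct_change(9.45, 8.46)  # roughly 10.5 (%)
```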

Keywords: diabetes, digital health, mobile app, telemedicine

Procedia PDF Downloads 71
1548 Understanding the Classification of Rain Microstructure and Estimation of the Z-R Relationship Using a Micro Rain Radar in a Tropical Region

Authors: Tomiwa, Akinyemi Clement

Abstract:

Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
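The Z-R relationship discussed above is conventionally the power law Z = aR^b, and refining it amounts to fitting the coefficients a and b, typically by least squares on log-transformed reflectivity and rain-rate pairs. The sketch below assumes that power-law form; the synthetic data are generated from the textbook Marshall-Palmer coefficients (a = 200, b = 1.6), not from this study's MRR measurements:

```python
import math

def fit_zr(rain_rates, reflectivities):
    """Least-squares fit of log10(Z) = log10(a) + b * log10(R),
    returning the power-law coefficients (a, b) of Z = a * R**b."""
    xs = [math.log10(r) for r in rain_rates]
    ys = [math.log10(z) for z in reflectivities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = 10 ** (my - b * mx)
    return a, b

# Synthetic check: pairs generated from the Marshall-Palmer law Z = 200 * R**1.6
rates = [0.5, 1, 2, 5, 10, 20, 50]          # rain rates, mm/h
refl = [200 * r ** 1.6 for r in rates]      # linear reflectivity, mm^6/m^3
a, b = fit_zr(rates, refl)                  # recovers a ~ 200, b ~ 1.6
```

In practice, the fit would be run separately on the stratiform and convective subsets identified by the rain-rate classification, yielding one (a, b) pair per rain type.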

Keywords: remote sensing, precipitation, drop size distribution, micro rain radar

Procedia PDF Downloads 11
1547 Exploring the Perspective of Service Quality in mHealth Services during the COVID-19 Pandemic

Authors: Wan-I Lee, Nelio Mendoza Figueredo

Abstract:

COVID-19 has had a significant impact on all sectors of society globally. Health information technology (HIT) has become an effective health strategy in this age of distancing, and Mobile Health (mHealth) plays a critical role in managing patient and provider workflows during the COVID-19 pandemic. Therefore, users' perception of mHealth service quality plays a significant role in shaping their confidence and subsequent intention to use mHealth. This study's objective was to explore, through a qualitative method, how health practitioners and patients are satisfied or dissatisfied with mHealth services, and to analyze users' intention in the context of Taiwan during the COVID-19 pandemic. The research explores the experienced usability of mHealth services during the pandemic, using in-depth, semi-structured interviews that investigate participants' perceptions and experiences and the meanings they attribute to them. The five cases consisted of health practitioners, clinic staff, and patients with experience using mHealth services. Participants were encouraged to discuss issues related to the research question through open-ended questions, usually in one-to-one interviews. The findings show both positive and negative attributes of mHealth service quality: patients' and health practitioners' issues bear on several dimensions of perceived service quality, namely system quality, information quality, and interaction quality. A concept map of perceptions regarding users' intention to use mHealth services in emergencies is depicted. The findings revealed that users pay more attention to "Medical care", "ease of use" and "utilitarian benefits" and attach less importance to "Admissions and Convenience" and "Social influence".
To improve mHealth services, mHealth providers and health practitioners should better manage users' experiences. This research contributes to the understanding of service quality issues in mHealth services during the COVID-19 pandemic.

Keywords: COVID-19, mobile health, service quality, use intention

Procedia PDF Downloads 131
1546 Genetics, Law and Society: Regulating New Genetic Technologies

Authors: Aisling De Paor

Abstract:

Scientific and technological developments are driving genetics and genetic technologies into the public sphere. Scientists are making genetic discoveries as to the makeup of the human body and the cause and effect of disease, diversity and disability amongst individuals. Technological innovation in the field of genetics is also advancing, with the development of genetic testing and other emerging genetic technologies, including gene editing (which offers the potential for genetic modification). In addition to the benefits for medicine, health care and humanity, these genetic advances raise a range of ethical, legal and societal concerns. From an ethical perspective, such advances may, for example, change the concept of humans and what it means to be human. Science may take over in conceptualising human beings, which may push the boundaries of existing human rights. New genetic technologies, particularly gene editing techniques, create the potential to stigmatise disability by highlighting disability or genetic difference as something that should be eliminated or anticipated. From a disability perspective, use (and misuse) of genetic technologies raises concerns about discrimination and violations of the dignity and integrity of the individual. With an acknowledgement of the likely future orientation of genetic science, and in consideration of the intersection of genetics and disability, this paper highlights the main concerns raised as genetic science and technology advances (particularly gene editing developments) and the consequences for disability and human rights. Through traditional doctrinal legal methodologies, it investigates the use (and potential misuse) of gene editing as creating the potential for a unique form of discrimination and stigmatization to develop, as well as a potential gateway to a new, subtle form of eugenics.
This article highlights the need to maintain caution regarding the use, application and consequences of genetic technologies. With a focus on the law and policy position in Europe, it examines the need to control and regulate these new technologies, particularly gene editing. In addition to considering the need for regulation, it highlights non-normative approaches to address this area, including awareness-raising and education, public discussion and engagement with key stakeholders in the field, and the development of a multifaceted genetics advisory network.

Keywords: disability, gene-editing, genetics, law, regulation

Procedia PDF Downloads 343
1545 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require location information to operate or to deliver better services to users. Although there are several ways of acquiring location data for mobile devices, the WLAN fingerprinting approach is considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio-frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is modelled so that subsequent locations of mobile devices can be predicted. Describing explicit mathematical relationships between the RSSI measurements and the localization parameters is one way to model the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that learns the mapping from RSSI measurements to the localization parameters and is capable of upgrading its performance as more experiential knowledge is acquired.
The most appealing aspect of using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system learns on its own the underlying relationship between the supplied data (RSSI levels) and the localization parameters. The localization parameters to be predicted pose two different tasks: longitude and latitude are real values (a regression problem), while floor and building are categorical (a classification problem). This work presents artificial neural network-based intelligent systems that model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database, and then tested with another supplied database to obtain their performance in terms of Mean Absolute Error (MAE) for the regression task and error rates for the classification task.
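The regression/classification split described above can be made concrete with a miniature fingerprint matcher. The sketch below uses a k-nearest-neighbour baseline rather than the neural networks of this work, purely to illustrate how one RSSI vector yields both real-valued outputs (averaged longitude/latitude) and categorical outputs (majority-vote floor/building); all fingerprints, coordinates and access-point readings are hypothetical:

```python
import math
from collections import Counter

def knn_localize(fingerprints, query_rssi, k=3):
    """Predict (longitude, latitude, floor, building) from an RSSI vector.

    fingerprints: list of (rssi_vector, (lon, lat, floor, building)).
    Regression outputs (lon, lat) average the k nearest fingerprints
    in RSSI space; categorical outputs use a majority vote.
    """
    nearest = sorted(fingerprints,
                     key=lambda fp: math.dist(fp[0], query_rssi))[:k]
    lon = sum(loc[0] for _, loc in nearest) / k
    lat = sum(loc[1] for _, loc in nearest) / k
    floor = Counter(loc[2] for _, loc in nearest).most_common(1)[0][0]
    building = Counter(loc[3] for _, loc in nearest).most_common(1)[0][0]
    return lon, lat, floor, building

# Hypothetical database: 3 access points, readings in dBm
db = [
    ((-40, -70, -90), (0.0, 0.0, 1, "A")),
    ((-42, -68, -88), (1.0, 1.0, 1, "A")),
    ((-41, -69, -89), (0.5, 0.5, 1, "A")),
    ((-90, -40, -50), (10.0, 10.0, 2, "B")),
]
lon, lat, floor, building = knn_localize(db, (-41, -69, -89))
```

A trained neural network replaces the explicit distance search with a learned mapping, but consumes and produces the same kinds of inputs and outputs.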

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 327
1544 Violence against Women: A Study on the Aggressors' Profile

Authors: Giovana Privatte Maciera, Jair Izaías Kappann

Abstract:

Introduction: Violence against women is a complex phenomenon that accompanies a woman throughout her life and results from a social, cultural, political and religious construction based on differences among genders. Those differences are felt mainly because of the still-present patriarchal system, which naturalizes and legitimizes the asymmetry of power. As a consequence of women's lasting historical and collective effort toward legislation against the impunity of violence against women on the national scene, a law known as Maria da Penha was enacted in 2006. The law was created as a protective measure for women who were victims of violence and, consequently, for the punishment of the aggressor. Methodology: Analysis of police inquiries filed at the Police Station of Defense of the Woman of Assis city, under formal authorization of the court, for the period 2013 to 2015. Content analysis and the theoretical framework of psychoanalysis are used to evaluate the results. Results and Discussion: The final analysis of the inquiries demonstrated that violence against women is reproduced by society, and the aggressor is in most cases a member of the victim's own family, mainly the current or former spouse. The most common kinds of aggression were threats of bodily harm and physical violence, the latter normally accompanied by psychological violence, which is the most painful for the victims. Most of the aggressors were white, older than the victim, employed, and educated only to primary school level. But, contrary to expectations, only a minority of the aggressors were users of alcohol and/or drugs or had children in common with the victim. There is a contrast between the number of victims who admitted having suffered some type of violence by the same aggressor earlier and the number of victims who had registered an occurrence before.
The aggressors often use a discourse of denial in their testimony or try to justify their act as if the victim were to blame. Several interacting factors are believed to influence the aggressor to commit the abuse, including psychological, personal and sociocultural factors. One hypothesis is that the aggressor has a history of violence in his family of origin. After the aggressor is judged, condemned or not, there is usually no rehabilitation plan or supervision that could enable his change. Conclusions: This shows the importance of studying the aggressor's characteristics and the reasons that led him to commit such violence, making possible the implementation of appropriate treatment to prevent and reduce aggression, as well as the creation of programs and actions that enable communication and understanding concerning the theme. Recurrence remains high, since the punitive system is not enough and the law is still ineffective and inefficient in certain aspects and in its own functioning. A compulsion to repeat is perceived in victims as well as in aggressors, because they almost always end up involved in disturbed and violent relationships characterized by a subordination-dominance dynamic.

Keywords: aggressors' profile, gender equality, Maria da Penha law, violence against women

Procedia PDF Downloads 318
1543 Ni-W-P Alloy Coating as an Alternate to Electroplated Hard Cr Coating

Authors: S. K. Ghosh, C. Srivastava, P. K. Limaye, V. Kain

Abstract:

Electroplated hard chromium is widely known in the coatings and surface finishing, automobile and aerospace industries for its excellent hardness, wear resistance and corrosion properties. However, its precursor, Cr+6, is highly carcinogenic, and a consensus has been adopted internationally to replace this coating technology with an alternative. The search for alternatives to electroplated hard chrome is continuing worldwide. Various alloys and nanocomposites, such as Co-W alloys and Ni-graphene and Ni-diamond nanocomposites, have already shown promising results in this regard. In this study, electroless Ni-P alloys with excellent corrosion resistance were taken as the base matrix, and the incorporation of tungsten as a third alloying element was considered to improve the hardness and wear resistance of the resultant alloy coating. The present work focuses on the preparation of Ni-W-P coatings by electrodeposition with different contents of phosphorus and its effect on the electrochemical, mechanical and tribological performances. The results were also compared with Ni-W alloys. Composition analysis by EDS showed deposition of Ni-32.85 wt% W-3.84 wt% P (designated Ni-W-LP) and Ni-18.55 wt% W-8.73 wt% P (designated Ni-W-HP) alloy coatings from electrolytes containing 0.006 and 0.01 M sodium hypophosphite, respectively. Inhibition of tungsten deposition in the presence of phosphorus was noted. SEM investigation showed cauliflower-like growth along with a few microcracks. The as-deposited Ni-W-P alloy coating was amorphous, as confirmed by XRD investigation, and step-wise crystallization was noticed upon annealing at higher temperatures. For all the coatings, the nanohardness was found to increase after heat treatment, and typical nanohardness values obtained for samples annealed at 400°C were 18.65±0.20 GPa, 20.03±0.25 GPa, and 19.17±0.25 GPa for the Ni-W, Ni-W-LP and Ni-W-HP alloy coatings, respectively.
The nanohardness data are therefore very promising. Wear and coefficient-of-friction data were recorded by applying different normal loads in reciprocating motion using a ball-on-plate geometry. After the experiments, the wear mechanism was established by detailed investigation of the wear-scar morphology. Potentiodynamic measurements showed that the coating with the higher phosphorus content was the most corrosion-resistant in 3.5 wt% NaCl solution.

Keywords: corrosion, electrodeposition, nanohardness, Ni-W-P alloy coating

Procedia PDF Downloads 339
1542 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not sufficiently separate noisy and normal cells. We introduce an algorithm that filters and clusters single cell data simultaneously without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak affiliation to any cluster. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
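The shared-nearest-neighbour construction described above can be sketched in miniature. This toy version builds an SNN graph from mutual k-nearest-neighbour lists and flags vertices with weak total edge weight as candidate noise; it deliberately omits the paper's modularity optimization and cluster-level evaluation metrics, and the "cells" are synthetic 2-D points rather than expression profiles:

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] (self excluded)."""
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: math.dist(points[i], points[j]))
    return set(order[:k])

def snn_graph(points, k):
    """Shared-nearest-neighbour graph: an edge joins mutual k-NNs,
    weighted by the overlap of their neighbour lists."""
    neigh = [knn(points, i, k) for i in range(len(points))]
    edges = {}
    for i in range(len(points)):
        for j in neigh[i]:
            if i < j and i in neigh[j]:          # keep mutual neighbours only
                edges[(i, j)] = len(neigh[i] & neigh[j])
    return edges

def weak_vertices(points, k, min_strength):
    """Vertices whose summed SNN edge weight falls below min_strength:
    instances similar to clusters but matching none of them well."""
    edges = snn_graph(points, k)
    strength = [0] * len(points)
    for (i, j), w in edges.items():
        strength[i] += w
        strength[j] += w
    return [i for i, s in enumerate(strength) if s < min_strength]

# Two synthetic "cell type" clusters plus one noise point
cells = [(0, 0), (0, 1), (1, 0), (1, 1),
         (10, 10), (10, 11), (11, 10), (11, 11),
         (50, 50)]
noise = weak_vertices(cells, k=3, min_strength=1)  # -> [8]
```

In the full algorithm, community detection by modularity optimization runs over this weighted graph, and the weak-vertex removal is iterated with the clustering rather than applied once.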

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 89
1541 Evaluation of Paper Effluent with Two Bacterial Strain and Their Consortia

Authors: Priya Tomar, Pallavi Mittal

Abstract:

As industrialization is inevitable and progresses with rapid acceleration, the need for innovative ways to get rid of waste has increased. Recent advancements in bioresource technology offer novel ideas for recycling factory waste that has been polluting agro-industry, soil and water bodies. Paper industries in India are considerable in number, and molasses and impure alcohol are still being used as raw materials for the manufacturing of paper. Paper mills based on nonconventional agro-residues are being encouraged due to the increased demand for paper and an acute shortage of forest-based raw materials. The colouring matter present in the wastewater from pulp and paper mills is organic in nature, comprising wood extractives, tannin, resins, synthetic dyes, lignin and the degradation products formed by the action of chlorine on lignin, which imparts an offensive colour to the water. These mills use different chemical processes for paper manufacturing, through which lignified chemicals are released into the environment; therefore, the chemical oxygen demand (COD) of the emanating stream is quite high. This paper presents some new techniques developed to improve the efficiency of bioremediation in the paper industry, together with a short introduction to the industry and a discussion of presently available bioremediation methods and strategies. To address the above problem, two bacterial strains (Pseudomonas aeruginosa and Bacillus subtilis) and their consortium were applied to pulp and paper mill effluent, in treatments designated T-1 through T-6, for the decolourisation of the effluent.
The results indicated that the maximum colour reduction (60.5%) was achieved by Pseudomonas aeruginosa, the maximum COD reduction (88.8%) by Bacillus subtilis, the maximum pH change (4.23) by Pseudomonas aeruginosa, and the maximum TSS reduction (2.09%) and TDS reduction (0.95%) by Bacillus subtilis. When the wastewater was supplemented with carbon (glucose) and nitrogen (yeast extract) sources, the data revealed that the efficiency of Bacillus subtilis with glucose was higher than that of Pseudomonas aeruginosa.

Keywords: bioremediation, paper and pulp mill effluent, treated effluent, lignin

Procedia PDF Downloads 235
1540 Systematic and Meta-Analysis of Navigation in Oral and Maxillofacial Trauma and Impact of Machine Learning and AI in Management

Authors: Shohreh Ghasemi

Abstract:

Introduction: Managing oral and maxillofacial trauma is a multifaceted challenge, as it can have life-threatening consequences and significant functional and aesthetic impact. Navigation techniques have been introduced to improve surgical precision to meet this challenge. A machine learning algorithm was also developed to support clinical decision-making regarding treating oral and maxillofacial trauma. Given these advances, this systematic meta-analysis aims to assess the efficacy of navigational techniques in treating oral and maxillofacial trauma and to explore the impact of machine learning on their management. Methods: A detailed and comprehensive analysis of studies published between January 2010 and September 2021 was conducted through a systematic meta-analysis. This included performing a thorough search of the Web of Science, Embase, and PubMed databases to identify studies evaluating the efficacy of navigational techniques and the impact of machine learning in managing oral and maxillofacial trauma. Studies that did not meet the established entry criteria were excluded. In addition, the overall quality of the included studies was evaluated using the Cochrane risk of bias tool and the Newcastle-Ottawa scale. Results: A total of 12 studies, including 869 patients with oral and maxillofacial trauma, met the inclusion criteria. An analysis of the studies revealed that navigation techniques effectively improve surgical accuracy and minimize the risk of complications. Additionally, machine learning algorithms have proven effective in predicting treatment outcomes and identifying patients at high risk of complications. Conclusion: The introduction of navigational technology has great potential to improve surgical precision in oral and maxillofacial trauma treatment. Furthermore, developing machine learning algorithms offers opportunities to improve clinical decision-making and patient outcomes.
Still, further studies are necessary to corroborate these results and establish the optimal use of these technologies in managing oral and maxillofacial trauma.

Keywords: trauma, machine learning, navigation, maxillofacial, management

Procedia PDF Downloads 45
1539 Spectral Linewidth Measurement of Linear Frequency Modulated Continuous Wave Laser with Short Delay within the Coherence Length

Authors: Jongpil La, Jieun Choi

Abstract:

Optical frequency modulation technology for FMCW LiDAR based on an Optical Phase-Locked Loop (OPLL) configuration is addressed in this paper. The spectral linewidth measurement method for the linearly frequency-modulated laser is also described. A single-frequency laser with narrow spectral linewidth is generated using an external cavity diode laser, and the excitation frequency of the laser is adjusted by controlling its injection current. If the injection current of the laser is increased, the lasing frequency decreases because of the slight increase in the refractive index of the laser gain chip. The dynamic optical frequency change rate is measured using a Mach-Zehnder interferometer and compared with a proper reference signal. The phase difference between the reference signal and the signal measured using the Mach-Zehnder interferometer is obtained by mixing those two signals. The phase error is used to detect the frequency deviation from the target value, which is then fed back to the driving current of the laser to compensate for it. Frequency sweep error relative to the ideal linear frequency waveform will broaden the spectral linewidth of the target spectrum and degrade the maximum range performance of FMCW LiDAR. Therefore, the spectral linewidth measurement of the frequency-modulated laser is very important to evaluate the performance of the LiDAR system. However, it is impossible to apply the conventional self-homodyne or self-heterodyne method with a long delay line to evaluate the spectral linewidth of the frequency-modulated laser, because the beat frequency generated by the long delay line is too high to measure with a high-bandwidth frequency-modulated laser. In this article, the spectral linewidth of the frequency-modulated laser is measured using the newly proposed self-heterodyne method with a short delay line. The theoretical derivation for the proposed linewidth measurement method is provided in this article.
The laser's spectral modulation bandwidth and linewidth are measured as 2.91 GHz and 287 kHz, respectively.
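The feasibility argument above, that a conventional self-heterodyne setup fails because the linear chirp turns the delay-line delay into a high beat note, can be sketched numerically. Only the 2.91 GHz modulation bandwidth and 287 kHz linewidth are taken from the reported results; the sweep time and fibre length below are illustrative assumptions, not values from the paper.

```python
import math

C = 3.0e8        # speed of light in vacuum, m/s
N_FIBER = 1.468  # refractive index of silica fibre (assumed)

def beat_frequency(bandwidth_hz, sweep_time_s, fibre_length_m):
    """Beat note when a linear chirp interferes with a delayed copy
    of itself: f_beat = (B / T) * tau, with tau the fibre delay."""
    tau = N_FIBER * fibre_length_m / C
    return (bandwidth_hz / sweep_time_s) * tau

def coherence_length_m(linewidth_hz):
    """Rough vacuum coherence length L = c / (pi * delta_nu)."""
    return C / (math.pi * linewidth_hz)

# Reported: 2.91 GHz bandwidth, 287 kHz linewidth.
# Assumed: 100 us sweep, 1 km delay fibre.
print(coherence_length_m(287e3))            # ~333 m
print(beat_frequency(2.91e9, 100e-6, 1e3))  # ~142 MHz
```

Even with these modest assumptions, a delay long enough to exceed the coherence length already produces a beat in the hundreds of megahertz; with faster sweeps or the much longer delays a conventional measurement requires, the beat moves beyond practical detector bandwidth, which motivates the short-delay method.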

Keywords: FMCW, LiDAR, spectral linewidth, self-heterodyne

Procedia PDF Downloads 18
1538 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

The present article aims to describe the regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Therefore, inefficient contracting of this energy amount distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for the contracting of transmission system usage and the verification of contracting efficiency. Aiming for more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first one, IIE, is applied when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second one, IIOC, is applied when the distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC intends to prevent agents from contracting less energy than necessary or more than is needed. Given that an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015.
For this, indicators such as the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The results demonstrated, through the ECI analysis, a decrease in contracting efficiency, a behaviour that had been occurring even before the 2015 normative resolution. On the other side, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI notably decreased, which optimizes the usage of the transmission installations. Hence, with the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to the agents that their contracted values are not adequate to maintain their service provision for their users. The IIOC also has its relevance, in that it shows the distributors that their contracted values are overestimated.

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 102
1537 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - the rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food, which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be a source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities, making their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Due to climate change, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine learning algorithms/models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the agricultural food industry. The aim is to demonstrate vividly how machine learning is applied to agricultural sensor data. Machine learning is an ongoing technology benefitting farmers by improving gains in agriculture and minimizing losses. This paper discusses how irrigation and farming management systems evolve efficiently in real time.
Artificial Intelligence (AI)-enabled programs are emerging that offer rich insight and support farmers through extensive examination of data.
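As a minimal illustration of the regression family named above, the sketch below fits a linear model of crop yield from rainfall and temperature by ordinary least squares. The data points are invented for demonstration and do not come from the paper.

```python
import numpy as np

# Toy dataset (invented): rainfall (mm), mean temperature (C) -> yield (t/ha).
X = np.array([[600, 24], [750, 26], [500, 22], [820, 27], [680, 25]], dtype=float)
y = np.array([3.1, 3.8, 2.6, 4.1, 3.5])

# Add an intercept column and fit by ordinary least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_yield(rain_mm, temp_c):
    """Predicted yield (t/ha) from the fitted linear model."""
    return coef[0] * rain_mm + coef[1] * temp_c + coef[2]
```

In practice the models surveyed in the paper (SVMs, neural networks, decision trees) replace this linear fit, but the pipeline of features, training, and prediction is the same.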

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 87
1536 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics

Authors: Mark Wiebe

Abstract:

One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” This paper argues that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism, which weakens their force against Thomas’ positions. In order to demonstrate these claims, it will be helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface has to do with how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, and between unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham.
Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. Contrastingly, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are, therefore, principles of being rather than non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity, God has knowledge, power, and will, and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought.

Keywords: theology, philosophy of religion, metaphysics, philosophy

Procedia PDF Downloads 58
1535 Regional Problems of Electronic Governance in Autonomous Republic of Adjara

Authors: Manvelidze Irakli, Iashvili Genadi

Abstract:

Research has shown that public institutions in the Autonomous Republic of Ajara try their best to make their official electronic resources (web-pages, social websites) more informative and to improve them. Some public institutions offer interesting electronic services and initiatives to the public, although these are seldom used in the communication process. Statistical analysis of the use of the web-pages and social websites of public institutions, for example their Facebook pages, shows a lack of activity. The reason could be that public institutions give people little possibility of interaction on official web-pages. A second reason could be that these web-pages are little known to the public, and a third could be that the heads of these institutions lack awareness of the necessity of strengthening citizens’ involvement. In order to increase people’s involvement in this process, it is necessary to have at least 23 e-services on one web-page. The research has shown that 11 of the 16 public institutions have only 5 services, which are contact, social networks, and hotline. Besides introducing innovative services, government institutions should evaluate them and make them popular and easily accessible to the public. It would be easy to solve this problem if public institutions had a concrete strategic plan of public relations covering matters connected with maximum usage of electronic services in interaction with citizens. At the moment, only one governmental body has a functioning public relations action plan. As a result of the research, organizational, social, methodological, and technical problems have been revealed. It should be considered that there are many feedback possibilities, such as forums, RSS, blogs, wikis, Twitter, social networks, etc.; the usage of only one to three of such instruments indicates that there is no strategy of regional electronic governance.
It is necessary to develop more feedback mechanisms, which will increase electronic interaction and discussion, and to introduce an online petition service. It is important to reduce the so-called “digital inequality” and increase internet access for the public. State actions should address such problems. In the end, if these shortcomings are remedied, the role of electronic interaction in democratic processes will increase.

Keywords: e-Government, electronic services, information technology, regional government

Procedia PDF Downloads 293
1534 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in a significant loss of life, material damage, and distress. It is necessary to explore a solution towards preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and the Hidden Markov Model in landslide prediction and the possibility of applying this modern technology to predict landslides in a prominent geographical area in Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. The models acquire the relationship between the landslide factors and the hazard index during the training session. These models, with the landslide-related factors as inputs, are trained to predict three classes, namely ‘landslide occurs’, ‘landslide does not occur’ and ‘landslide likely to occur’. Once trained, the models are able to predict the most likely class for the prevailing data.
Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network can be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
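The three-class prediction described above can be sketched as one forward pass of a minimal single-layer network over a few of the listed factors. The weights below are invented placeholders standing in for what training would learn, and only three factors (rainfall, slope, soil effective thickness, each normalised to [0, 1]) are used for brevity.

```python
import math

CLASSES = ["landslide occurs", "landslide does not occur", "landslide likely to occur"]

# Illustrative weights, one row per output class; a trained ANN would learn these.
W = [[0.9, 0.7, -0.3],
     [-0.8, -0.6, 0.4],
     [0.3, 0.2, 0.0]]
B = [-0.2, 0.5, 0.1]

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(rainfall, slope, thickness):
    """Return the most likely class for normalised factor values."""
    z = [sum(w * x for w, x in zip(row, (rainfall, slope, thickness))) + b
         for row, b in zip(W, B)]
    p = softmax(z)
    return CLASSES[p.index(max(p))]
```

With these placeholder weights, high rainfall on a steep slope leans toward "landslide occurs", while low values lean toward "landslide does not occur"; the study's actual models learn such boundaries from the landslide database.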

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 365
1533 Enhancing Learning for Research Higher Degree Students

Authors: Jenny Hall, Alison Jaquet

Abstract:

Universities’ push toward the production of high quality research is not limited to academic staff and experienced researchers. In this environment of research rich agendas, Higher Degree Research (HDR) students are increasingly expected to engage in the publishing of good quality papers in high impact journals. IFN001: Advanced Information Research Skills (AIRS) is a credit bearing mandatory coursework requirement for Queensland University of Technology (QUT) doctorates. Since its inception in 1989, this unique blended learning program has provided the foundations for new researchers to produce original and innovative research. AIRS was redeveloped in 2012 and has now been evaluated with reference to the university’s strategic research priorities. Our research is the first comprehensive evaluation of the program from the learner perspective. We measured whether the program develops essential transferable skills and graduate capabilities to ensure best practice in the areas of publishing and data management. In particular, we explored whether AIRS prepares students to be agile researchers with the skills to adapt to different research contexts both within and outside academia. The target group for our study consisted of HDR students and supervisors at QUT. Both quantitative and qualitative research methods were used for data collection. Data were gathered by survey and by focus groups, with qualitative responses analyzed using NVivo. The results of the survey show that 82% of students surveyed believe that AIRS assisted their research process and helped them learn skills they need as a researcher. The 18% of respondents who expressed reservations about the benefits of AIRS were also examined to determine the key areas of concern. These included trends related to the timing of the program early in the candidature and a belief among some students that their previous research experience was sufficient for postgraduate study.
New insights have been gained into how to better support HDR learners in partnership with supervisors and how to enhance learning experiences of specific cohorts, including international students and mature learners.

Keywords: data management, enhancing learning experience, publishing, research higher degree students, doctoral students

Procedia PDF Downloads 264
1532 Field Study of Chlorinated Aliphatic Hydrocarbons Degradation in Contaminated Groundwater via Micron Zero-Valent Iron Coupled with Biostimulation

Authors: Naijin Wu, Peizhong Li, Haijian Wang, Wenxia Wei, Yun Song

Abstract:

Chlorinated aliphatic hydrocarbons (CAHs) pollution poses a severe threat to human health and is persistent in groundwater. Although chemical reduction or bioremediation is effective, it is still hard to achieve their complete and rapid dechlorination. Recently, the combination of zero-valent iron and biostimulation has been considered one of the most promising strategies, but field studies of this technology are scarce. In a typical site contaminated by various types of CAHs, basic physicochemical parameters of groundwater, CAH and product concentrations, and microbial abundance and diversity were monitored after a remediation slurry containing both micron zero-valent iron (mZVI) and biostimulation components was directly injected into the aquifer. Results showed that the groundwater could attain and maintain a low oxidation-reduction potential (ORP), a neutral pH, and anoxic conditions after fluctuations of varying degrees, which benefited the reductive dechlorination of CAHs. The injection also caused an obvious increase in the total organic carbon (TOC) concentration and in sulfate reduction. At 253 days post-injection, the mean concentration of total chlorinated ethylenes (CEE) from two monitoring wells had decreased from 304 μg/L to 8 μg/L, and total chlorinated ethanes (CEA) had decreased from 548 μg/L to 108 μg/L. The occurrence of chloroethane (CA) suggested that hydrogenolysis was one of the main degradation pathways for CEA and also hinted that biological dechlorination had been activated. A significant increase in ethylene at day 67 post-injection indicated that dechlorination was complete. Additionally, the total bacterial counts increased by 2-3 orders of magnitude after 253 days post-injection, while the microbial species richness decreased and gradually shifted toward anaerobic/fermentative bacteria. The relative abundance of potential degradation bacteria increased in correspondence with the degradation of CAHs.
This work demonstrates that mZVI and biostimulation can be combined to achieve the efficient removal of various CAHs from contaminated groundwater sources.
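The removal efficiencies implied by the reported concentrations can be checked with a one-line calculation; the before/after values are the abstract's own figures.

```python
def removal_pct(before_ugL, after_ugL):
    """Percent removal of a contaminant from its initial concentration."""
    return (before_ugL - after_ugL) / before_ugL * 100

cee = removal_pct(304, 8)    # total chlorinated ethylenes: 304 -> 8 ug/L
cea = removal_pct(548, 108)  # total chlorinated ethanes:   548 -> 108 ug/L
print(f"CEE removal: {cee:.1f}%")  # 97.4%
print(f"CEA removal: {cea:.1f}%")  # 80.3%
```

That is, the combined mZVI/biostimulation treatment removed roughly 97% of the chlorinated ethylenes but only about 80% of the more recalcitrant chlorinated ethanes over 253 days.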

Keywords: chlorinated aliphatic hydrocarbons, groundwater, field study, zero-valent iron, biostimulation

Procedia PDF Downloads 145
1531 Wicking Bed Cultivation System as a Strategic Proposal for the Cultivation of Milpa and Mexican Medicinal Plants in Urban Spaces

Authors: David Lynch Steinicke, Citlali Aguilera Lira, Andrea León García

Abstract:

The proposal posed in this work comes from an action-research approach. In Mexico, a dialogue of knowledge may function as a link between traditional, local, pragmatic knowledge and technological, scientific knowledge. The advantage of generating this nexus lies in its positive impact on the environment, society, and the economy. This work attempts to combine, on the one hand, traditional Mexican knowledge such as the usage of medicinal herbs and the milpa agroecosystem, and, on the other hand, a newly created agricultural ecotechnology whose main function is to take advantage of urban space and to save water. This ecotechnology is the wicking bed. In a globalized world, it is relevant to have a proposal whose most important aspect is to revalorize the culture through the acquisition of traditional knowledge, while at the same time adapting it to new social and urbanized structures without threatening the environment. The methodology used in this work comes from an action-research approach combined with a practical dimension, in which an experimental model made of three wicking beds was implemented. In this model, medicinal herbs and milpa components were cultivated. Water efficiency and social acceptance were compared with a traditional ground crop, and the whole practice was carried out in an urban social context. The implemented agricultural ecotechnology has had great social acceptance, as its irrigation involves minimal effort and it is economically feasible for low-income people. The wicking bed system raised in this project can be implemented in schools, urban and peri-urban environments, home gardens, and public areas. The proposal managed to carry out an innovative and sustainable knowledge-based traditional Mexican agricultural technology, allowing the milpa agroecosystem to be regained in urban environments to strengthen food security in favour of nutritional and protein benefits for the Mexican diet.

Keywords: milpa, traditional medicine, urban agriculture, wicking bed

Procedia PDF Downloads 369
1530 Learning, Teaching and Assessing Students’ ESP Skills via Exe and Hot Potatoes Software Programs

Authors: Naira Poghosyan

Abstract:

In the knowledge society, the content of studies, the methods used, and the requirements for an educator’s professionalism regularly undergo certain changes. It follows that in the knowledge society the aim of education is not only to educate professionals for a certain field but also to help students to be aware of cultural values, form mutual human relationships, collaborate, be open, adapt to new situations, creatively express their ideas, and accept responsibility and challenge. From this viewpoint, the development of communicative language competence requires a thoroughly coordinated approach to ensure proper comprehension and memorization of subject-specific words starting from high school level. On the other hand, ESP (English for Specific Purposes) teachers and practitioners are increasingly faced with the task of developing and exploiting new ways of assessing their learners’ literacy while learning and teaching ESP. The presentation will highlight the latest achievements in this field. The author will present some practical methodological issues and principles associated with learning, teaching and assessing ESP skills of the learners, using the two software programs EXE 2.0 and Hot Potatoes 6. On the one hand, the author will display the advantages of the two programs as self-learning and self-assessment interactive tools in the course of academic study and professional development of CLIL learners; on the other hand, she will comprehensively shed light upon some methodological aspects of working out appropriate ways of selection, introduction, and consolidation of subject-specific materials via EXE 2.0 and Hot Potatoes 6. Then the author will go further to distinguish ESP courses by the general nature of the learners’ specialty, identifying three large categories of EST (English for Science and Technology), EBE (English for Business and Economics) and ESS (English for the Social Sciences).
The cornerstone of the presentation will be the introduction of the subject titled “The methodology of teaching ESP in non-linguistic institutions”, where a unique case of teaching ESP on Architecture and Construction via EXE 2.0 and Hot Potatoes 6 will be introduced, exemplifying how the introduction, consolidation and assessment can be used as a basis for feedback to the ESP learners in a particular professional field.

Keywords: ESP competences, ESP skill assessment/ self-assessment tool, eXe 2.0 / HotPotatoes software program, ESP teaching strategies and techniques

Procedia PDF Downloads 364
1529 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG and CCD-based motion sensors for a more objective measure of Attention-deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor depicts the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We have used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5-year-old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power for delta and beta EEG waves from the subjects is found to be correlated with relaxation and attention/cognitive load conditions. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., separation of the various colored balls from one table to another. We have used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We have also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we have found a significant correlation between the objective assessment of the ADHD subjects and the clinician’s conventional evaluation.
Conclusion: MAHD, the integrated device, is supposed to be an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
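The delta- and beta-band absolute power the device relies on can be sketched as a simple periodogram band sum. The sampling rate, band edges, and the synthetic test trace below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed for illustration)

def band_power(signal, fs, lo, hi):
    """Absolute power in the [lo, hi] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

# Synthetic one-second trace: a 20 Hz (beta) tone plus a weaker 3 Hz (delta) tone.
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

beta = band_power(eeg, FS, 13, 30)   # conventional beta band
delta = band_power(eeg, FS, 0.5, 4)  # conventional delta band
```

On a real frontal-lobe recording, the ratio of these two band powers would track the attention (beta) versus relaxation (delta) conditions the abstract describes.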

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 82
1528 Nuclear Near Misses and Their Learning for Healthcare

Authors: Nick Woodier, Iain Moppett

Abstract:

Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these will result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare, therefore, seeks to make improvements in patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are ‘near misses,’ those events that almost happened but for an intervention. Healthcare does not have any guidance as to how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how nuclear manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were focused on. Results: Eight nuclear interviews contributed to the GT across nuclear power, decommissioning, weapons, and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that nuclear has a particular focus on precursors and low-level events, with regulation supporting their management.
Exploration of definitions highlighted the importance of having several interventions in a sequence of events, interventions that should not rely solely on humans, as humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified, but for learning, the role of operating experience learning groups was identified as an exemplar. The safety culture across nuclear, however, was found to vary, which undermined reporting of near misses and other safety events. Some parts of the industry reported that their focus on near misses is new and that, despite potential risks existing, progress to mitigate hazards is slow. Conclusions: Healthcare often sees ‘nuclear,’ as well as other ultra-safe industries such as ‘aviation,’ as homogenous. However, the findings here suggest significant differences in safety culture and maturity across various parts of the nuclear sector. Healthcare can take learning from some aspects of the management of near misses in nuclear, such as how they are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, healthcare itself may be more mature in some areas of safety.

Keywords: culture, definitions, near miss, nuclear safety, patient safety

Procedia PDF Downloads 90
1527 Assessment of Obesity Parameters in Terms of Metabolic Age above and below Chronological Age in Adults

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Chronologic age (CA) of individuals is closely related to obesity and generally affects the magnitude of obesity parameters. On the other hand, the close association between basal metabolic rate (BMR) and metabolic age (MA) is also a matter of concern. It is suggested that an MA higher than CA indicates a need to improve the metabolic rate. In this study, the aim was to assess some commonly used obesity parameters, such as obesity degree, visceral adiposity, BMR, and the BMR-to-weight ratio, in several groups with varying differences between MA and CA values. The study comprises adults whose ages vary between 18 and 79 years. Four groups were constituted. Groups 1, 2, 3 and 4 were composed of 55, 33, 76 and 47 adults, respectively. Individuals exhibiting -1, 0 or +1 for their MA-CA values were included in Group 1, which was considered the control group. Those whose MA-CA values varied between -5 and -10 were placed in Group 2. Those whose MAs were above their CAs were divided into two groups [Group 3 (MA-CA: from +5 to +10) and Group 4 (MA-CA: from +11 to +12)]. Body mass index (BMI) values were calculated. A TANITA body composition monitor using bioelectrical impedance analysis technology was used to obtain values for obesity degree, visceral adiposity, BMR and the BMR-to-weight ratio. The compiled data were evaluated statistically using the SPSS statistical package. Mean ± SD values were determined. Correlation analyses were performed. The degree of statistical significance was accepted as p < 0.05. The increase in BMR was positively correlated with obesity degree. MAs and CAs of the groups were 39.9 ± 16.8 vs 39.9 ± 16.7 years for Group 1, 45.0 ± 15.3 vs 51.4 ± 15.7 years for Group 2, 47.2 ± 12.7 vs 40.0 ± 12.7 years for Group 3, and 53.6 ± 14.8 vs 42 ± 14.8 years for Group 4. BMI values of the groups were 24.3 ± 3.6 kg/m2, 23.2 ± 1.7 kg/m2, 30.3 ± 3.8 kg/m2, and 40.1 ± 5.1 kg/m2 for Groups 1, 2, 3 and 4, respectively.
Values obtained for BMR were 1599 ± 328 kcal in Group 1, 1463 ± 198 kcal in Group 2, 1652 ± 350 kcal in Group 3, and 1890 ± 360 kcal in Group 4. A correlation was observed between BMR and MA-CA values in Group 1. No correlation was detected in the other groups. On the other hand, statistically significant correlations between MA-CA values and obesity degree, BMI, as well as BMR/weight were found in Groups 3 and 4. It was concluded that, when these findings are considered in terms of MA-CA values, the BMR-to-weight ratio is a much more useful indicator of a severe increase in obesity development than BMR. Also, the lack of association between MA and BMR, as well as the BMR-to-weight ratio, emphasizes the importance of considering MA-CA values rather than MA alone.
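The grouping by MA-CA difference and the derived ratios described above can be sketched in a few lines. This is an illustrative sketch, not the study's analysis code; the function names and example values are assumptions.

```python
# Illustrative computation of BMI, BMR-to-weight ratio, and the
# MA-CA group bands described in the abstract (not the authors' code).

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmr_to_weight(bmr_kcal, weight_kg):
    """BMR-to-weight ratio in kcal/kg."""
    return bmr_kcal / weight_kg

def assign_group(metabolic_age, chronologic_age):
    """Group assignment following the MA-CA bands used in the study."""
    diff = metabolic_age - chronologic_age
    if -1 <= diff <= 1:
        return 1              # control group
    if -10 <= diff <= -5:
        return 2
    if 5 <= diff <= 10:
        return 3
    if 11 <= diff <= 12:
        return 4
    return None               # outside the studied bands

print(round(bmi(80, 1.75), 1))    # hypothetical 80 kg, 1.75 m adult
print(assign_group(47, 40))       # MA 7 years above CA -> Group 3
```

The band boundaries mirror the MA-CA ranges given in the abstract; subjects falling outside all bands would simply be excluded.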

Keywords: basal metabolic rate, basal metabolic rate-to-weight-ratio, chronologic age, metabolic age, obesity degree

Procedia PDF Downloads 82
1526 An Investigation into the Potential of Industrial Low Grade Heat in Membrane Distillation for Freshwater Production

Authors: Yehia Manawi, Ahmad Kayvanifard

Abstract:

Membrane distillation is an emerging technology which has been used to produce freshwater and purify different types of aqueous mixtures. Qatar is an arid country where almost 100% of its freshwater demand is supplied through the energy-intensive thermal desalination process. The country's need for water has reached an all-time high, which necessitates finding an alternative way to augment freshwater supply without any drastic effect on the environment. The objective of this paper was to investigate the potential of using industrial low-grade waste heat to produce freshwater using membrane distillation. The main part of this work was a heat audit conducted on selected Qatari chemical industries to estimate both the amount of waste heat that can potentially be recovered and the amount of freshwater that could be produced if such waste heat were recovered. The heat audit showed that around 605 megawatts of waste heat can be recovered from the studied Qatari chemical industries, which would result in a total daily production of 5,078.7 cubic meters of freshwater. This water can be used in a wide variety of applications, such as human consumption or industry. The amount of produced freshwater may look small when compared to that produced through thermal desalination plants; however, one must bear in mind that this water comes from waste and can be used to supply water for small cities or remote areas which are not connected to the water grid.
The idea of producing freshwater from two widely available wastes (thermal rejected brine and waste heat) seems promising, as fewer environmental and economic impacts will be associated with freshwater production, which may in the near future augment the conventional way of producing freshwater, currently thermal desalination. This work has shown that low-grade waste heat in the chemical industries of Qatar, and perhaps the rest of the world, can contribute to additional production of freshwater using membrane distillation without significantly adding to the environmental impact.
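As a rough illustration of how recovered waste heat translates into distillate volume, the following sketch uses a textbook latent heat of vaporization and a hypothetical overall thermal efficiency for the membrane distillation process. It is not the paper's heat-audit model, and the efficiency value is purely an assumption for demonstration.

```python
# Back-of-envelope estimate: freshwater yield from continuously
# recovered waste heat. Assumes latent heat of vaporization of
# ~2326 kJ/kg (typical near MD operating temperatures) and a
# hypothetical overall thermal efficiency.

LATENT_HEAT_KJ_PER_KG = 2326.0

def freshwater_m3_per_day(waste_heat_mw, thermal_efficiency):
    """Daily distillate volume (m^3) from a continuous waste-heat stream."""
    heat_kj_per_day = waste_heat_mw * 1e3 * 86_400   # MW -> kJ/day
    mass_kg = thermal_efficiency * heat_kj_per_day / LATENT_HEAT_KJ_PER_KG
    return mass_kg / 1000.0                          # ~1000 kg per m^3

# With 605 MW recovered and an assumed ~23% overall efficiency,
# the yield lands in the same order of magnitude as the audit's figure:
print(round(freshwater_m3_per_day(605, 0.23)))
```

The real audit accounts for stream temperatures and process-specific recovery limits, so this single-efficiency model should be read only as an order-of-magnitude check.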

Keywords: membrane distillation, desalination, heat recovery, environment

Procedia PDF Downloads 306
1525 Impure Water, a Future Disaster: A Case Study of Lahore Ground Water Quality with GIS Techniques

Authors: Rana Waqar Aslam, Urooj Saeed, Hammad Mehmood, Hameed Ullah, Imtiaz Younas

Abstract:

This research has been conducted to assess the water quality in and around the Lahore Metropolitan area on the basis of three different land uses, i.e., residential, commercial, and industrial. For this, 29 sample sites were selected using a simple random sampling technique. Samples were collected at the source (WASA tube wells). The criterion for selecting sample sites was to capture the maximum concentration of population in each selected land use. The results showed that in residential land use, the proportions of nitrate and turbidity are at their highest levels in the areas of Allama Iqbal Town and Samanabad Town. Commercial land use in Gulberg and Data Gunj Bakhsh Town has the highest proportions of chlorides, calcium, TDS, pH, Mg, total hardness, arsenic, and alkalinity. In industrial land use in Ravi and Wahga Town, the proportions of arsenic, Mg, nitrate, pH, and turbidity are at their highest levels. The high concentration of these parameters in these areas is basically due to old and fractured pipelines that allow bacterial as well as physiochemical contaminants to contaminate the potable water at the source. Furthermore, it is seen in most areas that wastewater from domestic, industrial, and municipal sources is easily discharged into open spaces and water bodies, such as canals, rivers, and lakes, where it seeps into and becomes part of the groundwater. In addition, huge waste dumps located in Lahore are becoming a cause of groundwater contamination: when rain falls, the water seeps into the ground and degrades the groundwater quality. On the basis of the results derived with the help of the geospatial technology ArcGIS 9.3 and inverse distance weighted (IDW) interpolation, it is recommended that water filtration plants be installed with specific parameter control. A separate team has to be formed for proper inspection and water quality checks at the source. Old water pipelines must be replaced with new ones, and safe water depth must be ensured at the source end.
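The IDW interpolation used in the study (applied via ArcGIS) can be illustrated with a minimal plain-Python sketch. The sample sites and nitrate values below are made up for illustration, and the power parameter p=2 is the common default, not necessarily the study's setting.

```python
# Minimal inverse distance weighting (IDW): estimate a value at an
# unsampled location as a distance-weighted average of nearby samples.

def idw(x, y, samples, p=2):
    """Estimate a value at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v                    # exactly on a sample site
        w = 1.0 / d2 ** (p / 2)         # weight = 1 / distance^p
        num += w * v
        den += w
    return num / den

# Hypothetical nitrate readings (mg/L) at three sample sites:
sites = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
print(round(idw(0.5, 0.5, sites), 2))   # equidistant -> plain average, 20.0
```

GIS packages add refinements such as search radii and a maximum number of neighbors, but the weighting scheme is the same.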

Keywords: GIS, remote sensing, pH, nitrate, disaster, IDW

Procedia PDF Downloads 207
1524 Graphic Narratives: Representations of Refugeehood in the Form of Illustration

Authors: Pauline Blanchet

Abstract:

In a world where images are a prominent part of our daily lives and a way of absorbing information, the analysis of the representation of migration narratives is vital. This thesis raises questions concerning the power of illustrations, drawings, and visual culture to represent migration narratives in the age of Instagram. The rise of graphic novels and comics has come about in the last fifteen years, specifically with contemporary authors engaging with complex social issues such as migration and refugeehood. As a result, refugee subjects often feature in these narratives, whether as autobiographical stories or with the subject included in the creative process. Growth in discourse around migration has been present in other art forms; in 2018, there were dedicated exhibitions around migration, such as Tania Bruguera at the TATE (2018-2019) and ‘Journeys Drawn’ at the House of Illustration (2018-2019), as well as dedicated film festivals (2018; the Migration Film Festival), which show the recent consideration of the arts as a medium of expression for themes of refugeehood and migration. Graphic visuals are fast becoming a key instrument for representing migration, and the central thesis of this paper is to show the strengths and limitations of this form as well as the methodology used by the actors in the production process. Recent works released in the last ten years have not been analysed in the same context as earlier graphic novels such as Palestine and Persepolis. While much research has been done on mass media portrayals of refugees in photography and journalism, there is a lack of literature on representation through illustration. There is little research on the accessibility of graphic novels, such as where they can be found and what the intentions are in writing them.
It is interesting to see why these authors, NGOs, and curators have decided to highlight migrant narratives at a time when the mainstream media has done extensive coverage of the ‘refugee crisis’. Using primary data from one-on-one interviews with artists, curators, and NGOs, this paper investigates the effectiveness of graphic novels for depicting refugee stories as a viable alternative to other mass media forms. The paper is divided into two distinct sections. The first is concerned with the form of the comic itself and how it either limits or strengthens the representation of migrant narratives. This involves analysing the layered and complex forms that comics allow, such as multimedia pieces, the use of photography, and forms of symbolism. It also shows how illustration allows for the anonymity of refugees, the empathetic aspect of the form, and how the history of the graphic novel form has made space for positive representations of women in the last decade. The second section analyses the creative and methodological process undertaken by the actors and their involvement in the production of the works.

Keywords: graphic novel, refugee, communication, media, migration

Procedia PDF Downloads 101
1523 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of the enterprise is changing rapidly, driven mainly by global competition, cost reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is difficult because it must efficiently utilize resource capacity under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate a realistic production schedule and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, such as process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of a production scheduling software system using process mining techniques, since every software system generates event logs for later use, such as security investigation, auditing, and debugging. An application of the process mining approach is proposed in this study for validating the goodness of production schedules generated by scheduling software systems.
By using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally, the goodness of the production schedule is evaluated. By using the proposed process mining approach to evaluate the performance of a generated schedule, the quality of the production schedules of manufacturing enterprises can be improved.
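One of the evaluation criteria named above, workstation utilization, can be sketched directly from an event log. This is a minimal illustration under assumed log fields (workstation, start time, end time), not the author's system; real event logs would also carry case IDs and activity names.

```python
# Mining workstation utilization from a simple event log:
# fraction of the scheduling horizon each workstation spends busy.

from collections import defaultdict

def utilization(events, horizon):
    """Map each workstation to its busy fraction over the horizon."""
    busy = defaultdict(float)
    for station, start, end in events:
        busy[station] += end - start
    return {s: t / horizon for s, t in busy.items()}

# Hypothetical log entries with times in abstract units:
log = [("WS1", 0, 4), ("WS1", 5, 8), ("WS2", 0, 2)]
print(utilization(log, horizon=10))
# WS1 is busy 7 of 10 time units, WS2 only 2 of 10 -> a load imbalance
# of the kind the proposed approach is meant to surface.
```

Bottleneck detection and route-pattern analysis build on the same log, typically by examining queueing times between events and frequent activity sequences.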

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 264