Search results for: regular network d-dimensional
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5764

1084 In- and Out-of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of unrestricted self-exciting threshold autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated through mean absolute error measures based on 253-month rolling-window forecasts and extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative nonlinear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner unit root behaviour in five of the sixteen countries analyzed.
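
To illustrate the kind of regime-switching fit and rolling-window evaluation described above, the sketch below fits a two-regime SETAR(1) by thresholding on the lagged series and compares its rolling one-step mean absolute error against an AR(1) benchmark. The threshold choice, window length, and simulated series are illustrative assumptions, not the paper's data or exact specification.

```python
import numpy as np

def fit_ar1(y):
    """OLS fit of y_t = a + b*y_{t-1}; returns (a, b)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef

def setar_forecast(y, threshold):
    """One-step forecast from a 2-regime SETAR(1) split on y_{t-1} <= threshold."""
    lag, target = y[:-1], y[1:]
    regimes = {"low": lag <= threshold, "high": lag > threshold}
    coefs = {}
    for name, mask in regimes.items():
        if mask.sum() < 3:                    # too few points: fall back to a global AR(1)
            coefs[name] = fit_ar1(y)
        else:
            X = np.column_stack([np.ones(mask.sum()), lag[mask]])
            coefs[name], *_ = np.linalg.lstsq(X, target[mask], rcond=None)
    a, b = coefs["low"] if y[-1] <= threshold else coefs["high"]
    return a + b * y[-1]

def rolling_mae(y, window, threshold):
    """Rolling one-step MAE for the SETAR(1) model vs. an AR(1) benchmark."""
    err_setar, err_ar = [], []
    for t in range(window, len(y) - 1):
        hist = y[t - window:t + 1]
        err_setar.append(abs(setar_forecast(hist, threshold) - y[t + 1]))
        a, b = fit_ar1(hist)
        err_ar.append(abs(a + b * hist[-1] - y[t + 1]))
    return np.mean(err_setar), np.mean(err_ar)

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=400)) * 0.01   # stand-in for a CPI price differential
print(rolling_mae(series, window=253, threshold=np.median(series)))
```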

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 275
1083 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes and atmospheric effects that affect people's lives and property. Fire severity is generally quantified with the Normalized Burn Ratio (NBR) index, computed manually from pre-fire and post-fire images; the bitemporal difference of the preprocessed satellite images then gives the dNBR. The burnt area is classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using the classification levels proposed by USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI is chosen to support regular burnt area severity mapping with a medium-spatial-resolution sensor. The tool uses machine learning classification techniques to identify burnt areas using the NBR and to classify their severity over a user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which includes a Graphical User Interface (GUI) to make it user-friendly. The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate the performance of this tool. The Blue Mountains National Park forest affected by the Australian fire season between 2019 and 2020 is used to describe the workflow of the WWSAT. More than 7809 km2 of burnt area was detected at this site using Sentinel-2 data, with an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as burnt, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These burnt areas can also be verified through visual inspection of the cloud-free images generated by WWSAT. This tool is cost-effective in mapping burnt area, since the satellite images are free and the cost of field surveys is avoided.
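
As a concrete illustration of the NBR/dNBR logic behind the tool, the sketch below computes NBR from near-infrared and shortwave-infrared reflectance, differences the pre- and post-fire composites, and applies the unburnt/burnt cut-off of 0.1 stated above. The Sentinel-2 band choice (B8, B12) and the USGS-style severity breakpoints are illustrative assumptions.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance arrays (e.g. Sentinel-2 B8 and B12)."""
    return (nir - swir) / (nir + swir + 1e-9)

def burn_severity(pre_nir, pre_swir, post_nir, post_swir):
    """dNBR = pre-fire NBR - post-fire NBR, classified with assumed USGS-style thresholds."""
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    breakpoints = [0.1, 0.27, 0.44, 0.66]                     # assumed severity breakpoints
    labels = np.array(["unburnt", "low", "moderate-low", "moderate-high", "high"])
    return dnbr, labels[np.digitize(dnbr, breakpoints)]

# toy 2x2 reflectance patches standing in for cloud-free pre- and post-fire composites
pre_nir, pre_swir = np.array([[0.45, 0.40], [0.42, 0.44]]), np.array([[0.15, 0.16], [0.14, 0.15]])
post_nir, post_swir = np.array([[0.20, 0.38], [0.18, 0.25]]), np.array([[0.30, 0.17], [0.32, 0.22]])
dnbr, severity = burn_severity(pre_nir, pre_swir, post_nir, post_swir)
print(dnbr.round(2))
print(severity)
```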

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 228
1082 Neural Graph Matching for Modification Similarity Applied to Electronic Document Comparison

Authors: Po-Fang Hsu, Chiching Wei

Abstract:

In this paper, we present a novel neural graph matching approach applied to document comparison. Document comparison is a common task in the legal and financial industries. In some cases, the most important differences may be the addition or omission of words, sentences, clauses, or paragraphs. However, it is a challenging task when the whole editing process has not been recorded or traced. Under many temporal uncertainties, we explore the potential of our approach to approximate an accurate comparison and determine which element blocks are related to others by editing. First, we apply a document layout analysis that combines traditional and modern techniques to segment layouts appropriately into blocks of various types. We then transform the issue into a problem of layout graph matching with textual awareness. Graph matching is a long-studied problem with a broad range of applications. However, unlike previous works focusing on visual images or structural layout, we also bring textual features into our model to adapt it to this domain. Specifically, based on the electronic document, we introduce an encoder to handle the visual presentation decoded from PDF. Additionally, because modifications can make document layout analysis inconsistent between document versions, and because blocks can be merged and split, Sinkhorn divergence is adopted in our neural graph approach, which addresses both issues through many-to-many block matching. We demonstrate this on two categories of layouts, legal agreements and scientific articles, collected from our real-case datasets.
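
The Sinkhorn step referred to above can be sketched as iterative row and column normalisation of a block-to-block affinity matrix, which yields a soft, many-to-many matching. The affinity values, entropy weight, and iteration count below are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def sinkhorn(affinity, epsilon=0.1, n_iter=50):
    """Entropy-regularised soft matching: alternately normalise rows and columns in log space."""
    log_k = affinity / epsilon
    for _ in range(n_iter):
        log_k -= np.log(np.exp(log_k).sum(axis=1, keepdims=True))  # row normalisation
        log_k -= np.log(np.exp(log_k).sum(axis=0, keepdims=True))  # column normalisation
    return np.exp(log_k)

# affinity between 3 blocks of document A and 4 blocks of document B
# (e.g. cosine similarity of fused visual + textual block embeddings)
aff = np.array([[0.9, 0.1, 0.2, 0.1],
                [0.2, 0.8, 0.7, 0.1],   # a block of A that was split into two blocks of B
                [0.1, 0.1, 0.2, 0.9]])
match = sinkhorn(aff)
print(match.round(2))   # soft assignments: mass spread across a row flags merged/split blocks
```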

Keywords: document comparison, graph matching, graph neural network, modification similarity, multi-modal

Procedia PDF Downloads 171
1081 Moho Undulations beneath the South of Egypt, Using Seismic Waves Generated by Teleseismic Earthquakes

Authors: Ahmed Hosny, Haroon Elshaikh, Gaber Hassib, Yassin Ali

Abstract:

The undulations of the Moho discontinuity beneath the southern part of Egypt have been defined using seismic waves generated by teleseismic earthquakes. These earthquakes were recorded by the Aswan seismic network, which consists of 10 seismic stations established around Lake Nasser. An additional seismic station is located about 150 km east of Lake Nasser. Receiver function and H-k stacking methods were used to obtain the depth of the Moho discontinuity and the Vp/Vs ratio beneath each seismic station. Our results revealed that the depths of the Moho discontinuity beneath the stations located around Lake Nasser range from 36 to 39 km, with an average value of 37.5 km. These results are consistent with previous work in the same area. The obtained Vp/Vs ratios for the crust of this area ranged from 1.73 to 1.86, with an average value of 1.79. Beneath the station located towards the east, however, the Moho discontinuity was detected at a shallower depth of 27 km, and the Vp/Vs ratio was 1.82. The difference in Moho depths beneath the stations located around Lake Nasser and the station located to the east reveals the position of the boundary between the Saharan Metacraton to the west and the Nubian-Arabian Shield to the east. This structural boundary delineates the position of the old collision of the oceanic crust of the Nubian-Arabian Shield to the east with the continental crust of the Saharan Metacraton to the west.
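
For readers unfamiliar with the H-k stacking step, the sketch below implements the standard grid search over crustal thickness H and Vp/Vs ratio k, summing receiver-function amplitudes at the predicted Ps, PpPs and PpSs+PsPs arrival times (after Zhu and Kanamori). The assumed Vp, ray parameter, phase weights, and synthetic receiver function are illustrative, not the Aswan network data.

```python
import numpy as np

def hk_stack(rf, dt, p, vp=6.3, h_grid=None, k_grid=None, w=(0.6, 0.3, 0.1)):
    """Grid search over Moho depth H (km) and Vp/Vs ratio k for one receiver function.

    rf : receiver-function amplitudes, dt : sample interval (s), p : ray parameter (s/km),
    w  : weights for the Ps, PpPs and PpSs+PsPs phases.
    """
    h_grid = np.arange(20.0, 50.0, 0.5) if h_grid is None else h_grid
    k_grid = np.arange(1.60, 1.95, 0.01) if k_grid is None else k_grid
    t = np.arange(len(rf)) * dt
    stack = np.zeros((len(h_grid), len(k_grid)))
    for i, h in enumerate(h_grid):
        for j, k in enumerate(k_grid):
            eta_s = np.sqrt(1.0 / (vp / k) ** 2 - p ** 2)
            eta_p = np.sqrt(1.0 / vp ** 2 - p ** 2)
            t_ps, t_ppps, t_ppss = h * (eta_s - eta_p), h * (eta_s + eta_p), 2.0 * h * eta_s
            amps = np.interp([t_ps, t_ppps, t_ppss], t, rf)
            stack[i, j] = w[0] * amps[0] + w[1] * amps[1] - w[2] * amps[2]  # last phase has negative polarity
    i_best, j_best = np.unravel_index(np.argmax(stack), stack.shape)
    return h_grid[i_best], k_grid[j_best]

dt, p = 0.05, 0.06
rf = np.zeros(600)                                             # 30 s synthetic trace
for t_arr, amp in ((4.9, 1.0), (15.9, 0.5), (20.8, -0.4)):     # toy pulses near the expected arrivals
    rf[round(t_arr / dt)] = amp
print(hk_stack(rf, dt, p))                                     # should recover roughly H ~ 37.5 km, k ~ 1.79
```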

Keywords: Moho undulations, south of Egypt, seismic waves, earthquakes

Procedia PDF Downloads 504
1080 Electromagnetic Interference Shielding of Graphene Oxide–Carbon Nanotube Hybrid ABS Composites

Authors: Jeevan Jyoti, Bhanu Pratap Singh, S. R. Dhakate

Abstract:

In the present study, multiwalled carbon nanotubes (MWCNTs) and reduced graphene oxide (RGO) were synthesized by chemical vapor deposition and the improved Hummers' method, respectively, and their composites with acrylonitrile butadiene styrene (ABS) were prepared by a co-rotating twin-screw extrusion technique. The electromagnetic interference (EMI) shielding effectiveness of the graphene oxide–carbon nanotube (GCNT) hybrid composites was investigated, and the results were compared with the EMI shielding of carbon nanotube (CNT) and reduced graphene oxide (RGO) composites in the frequency range of 12.4-18 GHz (Ku-band). The experimental results indicate that the EMI shielding effectiveness of these composites reaches –21 dB at 10 wt.% GCNT loading. The mechanism of the improvement in EMI shielding effectiveness is discussed by resolving the contributions of absorption and reflection loss. The main reason for such highly improved shielding effectiveness is the significant improvement in the electrical conductivity of the composites. The electrical conductivity of the GCNT/ABS composites increased from 10⁻¹³ S/cm to 10⁻⁷ S/cm, an improvement of six orders of magnitude. Scanning electron microscopy (SEM) and high-resolution transmission electron microscopy (HRTEM) studies showed that the GCNTs were uniformly dispersed in the ABS polymer matrix. The GCNTs form a network throughout the polymer matrix and promote reinforcement.

Keywords: ABS, EMI shielding, multiwalled carbon nanotubes, reduced graphene oxide, graphene oxide–carbon nanotube (GCNTs), twin screw extruder, electrical conductivity

Procedia PDF Downloads 351
1079 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using the SARA Algorithm

Authors: Muhammad Bilal, Zhongfeng Qiu

Abstract:

Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the simplified aerosol retrieval algorithm (SARA) over the urban area of Beijing for the year 2016. SARA requires top-of-the-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15 AOD)). Results showed that the high-spatiotemporal-resolution GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations were within the EE. The results suggest that SARA is robust and has the ability to retrieve high-resolution spatiotemporal AOD observations over urban areas using a geostationary satellite.
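
To make the validation metrics concrete, the sketch below computes the correlation coefficient, RMSE, a relative percent mean error, and the fraction of retrievals falling within the expected-error envelope EE = ±(0.05 + 0.15 AOD) for matched satellite/AERONET pairs; the sample values and the exact RPME formula used here are illustrative assumptions.

```python
import numpy as np

def validate_aod(retrieved, aeronet):
    """R, RMSE, relative percent mean error, and fraction within EE = +/-(0.05 + 0.15*AOD)."""
    retrieved, aeronet = np.asarray(retrieved, float), np.asarray(aeronet, float)
    rmse = np.sqrt(np.mean((retrieved - aeronet) ** 2))
    rpme = 100.0 * np.mean(np.abs(retrieved - aeronet) / aeronet)
    ee = 0.05 + 0.15 * aeronet
    within_ee = np.mean(np.abs(retrieved - aeronet) <= ee)
    r = np.corrcoef(retrieved, aeronet)[0, 1]
    return {"R": r, "RMSE": rmse, "RPME_%": rpme, "within_EE": within_ee}

# toy collocated hourly matchups (GOCI retrieval vs. AERONET level 2.0)
print(validate_aod([0.32, 0.55, 0.18, 0.90], [0.30, 0.60, 0.20, 0.80]))
```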

Keywords: AERONET, AOD, SARA, GOCI, Beijing

Procedia PDF Downloads 163
1078 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electric technologies in vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs' fuel consumption and air pollutants (CO, PM, and NOx) and develops linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. Pollutant quantities and fuel consumption were obtained from the macro-simulation for the interval 2010 to 2021. Eventually, four linear regression models were proposed to predict the amounts of CO, NOx, and PM pollutants and fuel consumption in the future. The results highlight that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro-simulation model outputs for the year 2030. The error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in MD.
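
A minimal sketch of the kind of linear regression used for the pollutant forecasts is shown below, fitting emissions against socio-demographic predictors with ordinary least squares; the variable names and toy numbers are assumptions, not the Maryland survey or VISUM outputs.

```python
import numpy as np

# columns: mean household income (k$), mean driver age (years), CAV market share (%)
X = np.array([[62, 38, 5], [70, 41, 9], [55, 35, 3], [80, 44, 14], [66, 39, 7]], dtype=float)
co = np.array([410.0, 385.0, 440.0, 350.0, 400.0])     # toy annual CO quantities

# ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, co, rcond=None)
print("intercept and coefficients:", coef.round(3))
print("fitted CO:", (A @ coef).round(1))               # extend A with projected 2030 inputs to forecast
```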

Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models

Procedia PDF Downloads 437
1077 Comparative Fragility Analysis of Shallow Tunnels Subjected to Seismic and Blast Loads

Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed

Abstract:

Underground structures are crucial components that require detailed analysis and design. Tunnels, for instance, are widely constructed as transportation infrastructure and utility networks, especially in urban environments. Given their prime importance to the economy and to public safety, any instability in these tunnels will be highly detrimental to their performance. Recent experience suggests that tunnels become vulnerable during earthquake and blast scenarios. However, only a very limited number of studies has been carried out to understand the dynamic response and performance of underground tunnels under such unpredictable extreme hazards. In view of the importance of enhancing the resilience of these structures, the overall aim of the study is to evaluate the probabilistic future performance of shallow tunnels subjected to seismic and blast loads by developing a detailed fragility analysis. Critical nonlinear time-history numerical analyses using the finite element software Midas GTS NX are presented alongside current methods of analysis, taking into consideration structural typology, ground motion and explosive characteristics, the effect of soil conditions, and other associated uncertainties affecting tunnel integrity, which may ultimately lead to catastrophic failure of the structures. The proposed fragility curves for both extreme loadings are discussed and compared, providing significant information on the performance of the tunnel under extreme hazards that may be beneficial for future risk assessment and loss estimation.
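
Fragility curves of the kind discussed above are commonly expressed as a lognormal cumulative distribution of the probability that a damage state is exceeded at a given intensity measure. The sketch below evaluates such a curve; the median intensity and dispersion values are chosen purely for illustration and are not taken from the Midas GTS NX analyses.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    """P(damage state exceeded | intensity measure) as a lognormal CDF.

    im     : intensity measure values (e.g. PGA in g, or a scaled blast overpressure)
    median : intensity at 50% probability of exceedance
    beta   : lognormal dispersion (record-to-record plus modelling uncertainty)
    """
    return norm.cdf(np.log(np.asarray(im) / median) / beta)

pga = np.linspace(0.05, 1.5, 8)
print(fragility(pga, median=0.45, beta=0.5).round(3))   # illustrative curve for a moderate damage state
print(fragility(pga, median=0.80, beta=0.6).round(3))   # illustrative curve for a severer damage state
```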

Keywords: fragility analysis, seismic loads, shallow tunnels, blast loads

Procedia PDF Downloads 335
1076 Construction Unit Rate Factor Modelling Using Neural Networks

Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula

Abstract:

Factors affecting construction unit cost vary depending on a country's political, economic, social and technological inclinations. Factors affecting construction costs have been studied from various perspectives. Analysis of cost factors requires an appreciation of a country's practices. Identified cost factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey, and the factors were reduced to eight using SPSS factor analysis. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22% and financial delays, project feasibility, and overhead and profit each at 11%. Project location, material availability and the corruption perception index had minimal impact on the unit cost from the training data provided. Quantified cost factors can be incorporated in unit cost estimation models (UCEM) to produce more accurate estimates. This can improve the cost estimation of infrastructure projects and establish a benchmark standard to assist in aligning work practices and training new staff, permitting the ongoing development of best practices in cost estimation to become more effective.
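
One common way to turn a trained feed-forward network into a proportionate factor breakdown of the kind reported above is Garson's connection-weight method. The sketch below applies it to a single-hidden-layer network with random weights standing in for a trained model, so the factor names and resulting percentages are purely illustrative, not the paper's results.

```python
import numpy as np

def garson_importance(w_in, w_out):
    """Relative importance of each input for a one-hidden-layer network (Garson's algorithm).

    w_in  : weights input -> hidden, shape (n_inputs, n_hidden)
    w_out : weights hidden -> output, shape (n_hidden,)
    """
    contrib = np.abs(w_in) * np.abs(w_out)              # each input's share of a hidden unit, scaled by its output weight
    contrib /= np.abs(w_in).sum(axis=0, keepdims=True)
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

factors = ["political environment", "contractor capacity", "financial delays",
           "project feasibility", "overhead and profit", "project location",
           "material availability", "corruption perception index"]
rng = np.random.default_rng(1)
w_in, w_out = rng.normal(size=(8, 6)), rng.normal(size=6)   # stand-ins for trained weights
for name, share in zip(factors, garson_importance(w_in, w_out)):
    print(f"{name:28s} {share:5.1%}")
```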

Keywords: construction cost factors, neural networks, roadworks, Zambian construction industry

Procedia PDF Downloads 357
1075 Adapting Hazard Analysis and Critical Control Points (HACCP) Principles to Continuing Professional Education

Authors: Yaroslav Pavlov

Abstract:

In the modern world, ensuring quality has become increasingly important in various fields of human activity. One universal approach to quality management, proven effective in the food industry, is the HACCP (Hazard Analysis and Critical Control Points) concept. Based on principles of preventing potential hazards to consumers at all stages of production, from raw materials to the final product, HACCP offers a systematic approach to identifying, assessing risks, and managing critical control points (CCPs). Initially used primarily for food production, it was later effectively adapted to the food service sector. Implementing HACCP provides organizations with a reliable foundation for improving food safety, covering all links in the food chain from producer to consumer, making it an integral part of modern quality management systems. The main principles of HACCP—hazard identification, CCP determination, effective monitoring procedures, corrective actions, regular checks, and documentation—are universal and can be adapted to other areas. The adaptation of the HACCP concept is relevant for continuing professional education (CPE) with certain reservations. Specifically, it is reasonable to abandon the term ‘hazards’ as deviations in CCPs do not pose dangers, unlike in food production. However, the approach through CCP analysis and the use of HACCP's main principles for educational services are promising. This is primarily because it allows for identifying key CCPs based on the value creation model of a specific educational organization and consequently focusing efforts on specific CCPs to manage the quality of educational services. This methodology can be called the Analysis of Critical Points in Educational Services (ACPES). ACPES offers a similar approach to managing the quality of educational services, focusing on preventing and eliminating potential risks that could negatively impact the educational process, learners' achievement of set educational goals, and ultimately lead to students rejecting the organization's educational services. ACPES adapts proven HACCP principles to educational services, enhancing quality management effectiveness and student satisfaction. ACPES includes identifying potential problems at all stages of the educational process, from initial interest to graduation and career development. In ACPES, the term "hazards" is replaced with "problematic areas," reflecting the specific nature of the educational environment. Special attention is paid to determining CCPs—stages where corrective measures can most effectively prevent or minimize the risk of failing educational goals. The ACPES principles align with HACCP's principles, adjusted for the specificities of CPE. The method of the learner's journey map (variation of Customer Journey Map, CJM) can be used to overcome the complexity of formalizing the production chain in educational services. CJM provides a comprehensive understanding of the learner's experience at each stage, facilitating targeted and effective quality management. Thus, integrating the learner's journey map into ACPES represents a significant extension of the methodology's capabilities, ensuring a comprehensive understanding of the educational process and forming an effective quality management system focused on meeting learners' needs and expectations.

Keywords: quality management, continuing professional education, customer journey map, HACCP

Procedia PDF Downloads 24
1074 How Can Personal Protective Equipment Be Best Used and Reused: A Human Factors-Based Look at Donning and Doffing Procedures

Authors: Devin Doos, Ashley Hughes, Trang Pham, Paul Barach, Rami Ahmed

Abstract:

Over 115,000 Health Care Workers (HCWs) have died from COVID-19, and millions have been infected while caring for patients. HCWs have filed thousands of safety complaints surrounding safety concerns due to Personal Protective Equipment (PPE) shortages, including concerns about inadequate PPE and PPE reuse. Protocols for donning and doffing PPE remain ambiguous, lack an evidence base, and often result in wide deviations in practice. Deviations from PPE donning and doffing protocols commonly result in self-contamination but have not been thoroughly addressed, and no evidence-driven protocols provide guidance on protecting HCWs during periods of PPE reuse. Objective: The aim of this study was to examine safety-related threats and risks to HCWs due to the reuse of PPE among Emergency Department personnel. Method: We conducted a prospective observational study to examine the risks of reusing PPE. First, ED personnel were asked to don and doff PPE in a simulation lab. Each participant was asked to don and doff PPE five times, according to the maximum reuse recommendation set by the Centers for Disease Control and Prevention (CDC). Each participant was video-recorded; the recordings were reviewed and coded independently by at least two of the three trained coders for safety behaviors and riskiness of actions. A third coder was brought in when agreement between two coders could not be reached. Agreement between coders was high (81.9%), and all disagreements (100%) were resolved via consensus. A bowtie risk assessment chart was constructed to analyze the factors that contribute to the increased risks HCWs face due to PPE use and reuse. Agreement amongst content experts in the fields of Emergency Medicine, Human Factors, and Anesthesiology was used to select aspects of health care that both contribute to and mitigate the risks associated with PPE reuse. Findings: Twenty-eight clinician participants completed five rounds of donning/doffing PPE, yielding 140 PPE donning/doffing sequences. Two emerging threats were associated with behaviors in donning, doffing, and re-using PPE: (i) direct exposure to contaminant, and (ii) transmission/spread of contaminant. Protective behaviors included hand hygiene, not touching the patient-facing surface of PPE, and ensuring a proper fit and closure of all PPE materials. All participants (100%, n = 28) deviated from the CDC-recommended order, and most participants (92.85%, n = 26) self-contaminated at least once during reuse. Other frequent errors included failure to tie all ties on the PPE (92.85%, n = 26) and failure to wash hands after a contamination event occurred (39.28%, n = 11). Conclusions: There is wide variation, and there are regular errors, in how HCWs don, doff, and reuse PPE, which led to self-contamination. Some errors were deemed "recoverable", such as washing hands after touching a patient-facing surface to remove the contaminant. Other errors, such as using a contaminated mask and accidentally spreading contaminant to the neck and face, can lead to compound risks that are unique to repeated PPE use. A more comprehensive understanding of the threats to HCW safety and a complete approach to mitigating the underlying risks, including visualization with risk management tools, may aid future PPE design and workflow and space solutions.

Keywords: bowtie analysis, health care, PPE reuse, risk management

Procedia PDF Downloads 85
1073 Removal of Pharmaceuticals from Aqueous Solutions Using Hybrid Ceramic Membranes

Authors: Jenny Radeva, Anke-Gundula Roth, Christian Goebbert, Robert Niestroj-Pahl, Lars Daehne, Axel Wolfram, Juergen Wiese

Abstract:

The technological advantages of ceramic filtration elements were combined with polyelectrolyte films in the development of a hybrid membrane for the elimination of pharmaceuticals from aqueous solutions. Previously extruded alumina ceramic membranes were coated with nanosized polyelectrolyte films using Layer-by-Layer technology. The polyelectrolyte chains form a network with nano-pores on the ceramic surface and promote the retention of small molecules like pharmaceuticals and microplastics, which cannot be eliminated using standard ultrafiltration methods. Additionally, the polyelectrolyte coating contributes an application-adjustable zeta potential for the electrostatic rejection of charged contaminant molecules. Properties such as permeability, bubble point, pore size distribution and zeta potential of the ceramic and hybrid membranes were characterized using various laboratory and pilot tests and compared with each other. The most significant role in membrane characterization was played by the filtration behaviour investigation, in which retention of widely used pharmaceuticals such as diclofenac, ibuprofen and sulfamethoxazole was evaluated in a series of filtration tests. The presented study offers a new perspective on the removal of nanosized molecules from aqueous solutions and shows the importance of applying combined techniques for the elimination of pharmaceutical contaminants from drinking water.

Keywords: water treatment, hybrid membranes, layer-by-layer coating, filtration, polyelectrolytes

Procedia PDF Downloads 160
1072 Identifying Enablers and Barriers of Healthcare Knowledge Transfer: A Systematic Review

Authors: Yousuf Nasser Al Khamisi

Abstract:

Purpose: This paper presents a Knowledge Transfer (KT) framework for healthcare sectors, developed by applying a systematic literature review process to the healthcare organizations domain to identify the enablers of and barriers to KT in healthcare. Methods: The paper conducted a systematic literature search of peer-reviewed papers that described key elements of KT using four databases (Medline, Cinahl, Scopus, and Proquest) over a 10-year period (1/1/2008–16/10/2017). The results of the literature review were used to build a conceptual framework of KT in healthcare organizations. The author used a systematic review of the literature, as described by Barbara Kitchenham in Procedures for Performing Systematic Reviews. Findings: The paper highlights the impact of using the Knowledge Management (KM) concept in a healthcare organization on controlling infectious diseases in hospitals, improving family medicine performance and enhancing quality improvement practices. Moreover, it found that good coding performance is analytically linked with a knowledge-sharing network structure rich in brokerage and hierarchy rather than in density. The unavailability or disregard of the latest evidence on more cost-effective or more efficient delivery approaches increases healthcare costs and may lead to unintended results. Originality: The search procedure produced 12,093 results, of which 3,523 were general articles about KM and KT. The titles and abstracts of these articles were screened to separate relevant from irrelevant material. Ninety-four articles were identified by the researchers for full-text assessment, and the total number of eligible articles after removing unrelated articles was 22.

Keywords: healthcare organisation, knowledge management, knowledge transfer, KT framework

Procedia PDF Downloads 134
1071 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is administered in accordance with the laws. Equally important is privacy, as a fundamental human right (Article 12 of the Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test of the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, further leading to more capacity for publishing court decisions much more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design ESRA will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus enables a more comprehensive publication practice.
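
The anonymization step described above (recognise named entities, then replace each with its category plus an identifier to preserve context) can be sketched as follows. The spaCy model named here is an off-the-shelf German model standing in for the Swiss-legal NER model the project intends to train, so treat it as an assumption.

```python
import spacy
from collections import defaultdict

def anonymize(text, nlp):
    """Replace recognised entities with <CATEGORY_n> placeholders, keeping repeated mentions consistent."""
    doc = nlp(text)
    counters, mapping, out, last = defaultdict(int), {}, [], 0
    for ent in doc.ents:
        if ent.text not in mapping:
            counters[ent.label_] += 1
            mapping[ent.text] = f"<{ent.label_}_{counters[ent.label_]}>"
        out.append(text[last:ent.start_char])
        out.append(mapping[ent.text])
        last = ent.end_char
    out.append(text[last:])
    return "".join(out), mapping

# "de_core_news_sm" is used here only as a placeholder for a Swiss-legal NER model
nlp = spacy.load("de_core_news_sm")
masked, mapping = anonymize("Hans Muster aus Zürich klagte gegen die Novartis AG.", nlp)
print(masked)   # e.g. "<PER_1> aus <LOC_1> klagte gegen die <ORG_1>."
```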

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 146
1070 Effect of Coated Sodium Butyrate (CM3000®) on Zootechnical Performance, Immune Status and Necrotic Enteritis after Experimental Infection of Broiler Chickens

Authors: Mohamed Ahmed Tony, Mohamed Hamoud

Abstract:

The present study was conducted to determine the effect of a commercially coated slow-release sodium butyrate (CM3000®) used as a feed additive on zootechnical performance, immune status and the severity of Clostridium perfringens infection after experimental challenge. Three hundred 1-d-old broiler chicks (Cobb 500) were randomly distributed into 3 treatment groups (4 replicates each) with 25 chicks per replicate in floor pens. Control (C) birds were offered non-supplemented basal diets. Treatments 1 and 2 (T1 and T2) were fed diets containing CM3000® at 300 and 500 g/ton of feed, respectively, during the entire experimental period (35 days). Feed and water were offered ad libitum. Feed consumption and body weight were recorded weekly to calculate body weight gain and feed conversion. Blood samples were collected to evaluate the immune status of the birds against Newcastle disease vaccines using the HI test. At the end of the experimental period, 20 birds were chosen randomly from each group (5 birds from each pen) to compare carcass yield. At day 16 of age, 20 birds from each group (5 birds/replicate) were bacteriologically examined and proved to be free from Clostridium perfringens. These birds were challenged orally with 1 ml of buffer containing 10⁶ CFU/ml of a local Clostridium perfringens isolate prepared from farms affected by necrotic enteritis (NE). Birds were observed daily for any signs of NE. Birds that died in the challenged groups were necropsied to determine the cause of death. On day 28 of age, the surviving chickens were killed by cervical dislocation and necropsied immediately. Intestinal tracts were removed, and intestinal lesions were scored. Tissue samples of the duodenum, jejunum, ileum and cecum were collected for histopathological examination. All collected data were statistically analyzed using IBM SPSS® version 19 software. Means were compared by one-way ANOVA (P<0.05) followed by the Duncan post hoc test. The results revealed that body weight gain was significantly (P<0.05) improved in chicks fed both doses of CM3000® compared to the control group. Final body weight gains in T1 and T2 were 2064.94 and 2141.37 g/bird, respectively, while in the control group the weight gain was 1952.78 g/bird. In addition, supplementation of diets with CM3000® significantly increased feed intake (P<0.05). Total feed intake in T1 and T2 was 3186.32 and 3273.29 g/bird, respectively, whereas feed intake in the control group was 3081.95 g/bird. The best feed conversion was recorded in the T2 group (1.53); feed conversion in the control and T1 groups was 1.58 and 1.54, respectively. Dressing percentage, liver weights and the other carcass yields did not differ between treatments. The butyrate significantly enhanced the immune responses measured against Newcastle disease vaccines. Sodium butyrate also significantly reduced NE lesions and improved the health of the intestinal tissues in samples collected from challenged T1 and T2 chickens versus those collected from the control group. In conclusion, exogenous administration of slow-release butyrate (CM3000®) is capable of improving performance and enhancing immunity and NE disease resistance in broiler chickens.

Keywords: sodium butyrate, broiler chicken, zootechnical performance, immunity, necrotic enteritis

Procedia PDF Downloads 79
1069 Determining the Performance of Data Mining Algorithms in Identifying the Influential Factors and Predicting Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is one of the common causes of disability and mortality; it is the fourth leading cause of death in the world, and the third according to some sources. Only one-third of patients with ischemic stroke fully recover, one-third are left with permanent disability, and one-third die. The use of predictive models to predict stroke therefore has a vital role in reducing the complications and costs related to this disease. Thus, the aim of this study was to identify the effective factors and predict ischemic stroke with the help of DM methods. The present study was a descriptive-analytic study. The population was 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. This study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most effective factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability to determine the factors affecting ischemic stroke. Thus, creating predictive models with this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
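
A minimal decision-tree workflow of the kind evaluated above can be sketched with scikit-learn. CHAID itself is not part of scikit-learn, so the CART-style tree below merely stands in for it, and the feature names and toy records are assumptions rather than the Zahedan checklist data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

features = ["anemia", "diabetes_mellitus", "hyperlipidemia", "tia_history",
            "coronary_artery_disease", "atherosclerosis"]
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(213, len(features)))                 # 213 toy binary records
y = (X[:, 1] | X[:, 3] | (rng.random(213) < 0.15)).astype(int)    # synthetic stroke label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 3))
for name, imp in sorted(zip(features, tree.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:24s} {imp:.2f}")                                # proxy for the influential factors
```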

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 165
1068 A Historical Overview and Supplementation of the Dyad Concept of Industrial Marketing

Authors: Kimmo J. Kurppa

Abstract:

This paper describes the development of the buyer-supplier dyad concept over the years and proposes improvements, clarifications and extensions to the prevailing definitions published in the 1970s and 1980s. It suggests partitioning the buyer-supplier dyad into the concepts of the Commercial Dyad (dyadic interaction in vertical relationships) and the Innovative Dyad (dyadic interaction in horizontal relationships), since dyadic interaction takes place in two major types of contexts between industrial firms. In particular, the context of joint product development in a dyadic relationship has not been adequately recognized as being fundamentally different from the interaction taking place in commercial buyer-supplier exchange. This paper therefore addresses an existing gap in research by clarifying the descriptions and the contexts in which dyadic interaction takes place between industrial firms. The paper also illustrates and explains how the firm's organization, and the interaction taking place inside it, is connected to the dyadic interaction structure between the firm and its partner firm. This theme has been discussed earlier, but the phenomenon has not been adequately described or illustrated in previous research. This conceptual study examines how the dyad concept of Industrial Marketing has been defined in earlier research and how the definition could be improved. The paper has been constructed using the systematic review methodology and proposes avenues for future research. The existence of a relationship between the firm's internal interaction network and the external interaction between the firm's dyadic counterparts needs to be verified through empirical research.

Keywords: dyadic interaction, industrial dyad, buyer-supplier relationship, strategic reciprocity, experience, socially adjusted opportunism

Procedia PDF Downloads 210
1067 Photocatalysis with Fe/Ti-Pillared Clays for the Oxofunctionalization of Alkylaromatics by O2

Authors: Houria Rezala, Jose Luis Valverde, Amaya Romero, Alessandra Molinari, Andrea Maldotti

Abstract:

A pillared montmorillonite containing iron-doped titania (Fe/Ti-PILC) has been prepared from a natural clay. This material has been characterized by X-ray diffraction, nitrogen adsorption, temperature-programmed desorption of ammonia, inductively coupled plasma atomic emission spectroscopy, atomic absorption, and diffuse reflectance UV-VIS spectroscopy. The layer structure of Fe/Ti-PILC proved to be ordered upon insertion of the pillars, which caused a slight increase in the basal spacing of the clay. Its specific surface area was about three times larger than that of the parent Na-montmorillonite, due principally to the creation of a remarkable microporous network. The doped material was a robust photocatalyst able to oxidize liquid alkyl aromatics to the corresponding carbonylic derivatives, using O2 as the oxidizing species, under mild pressure and temperature conditions. Accumulation of valuable carbonylic derivatives was possible since their over-oxidation to carbon dioxide was negligible. Fe/Ti-PILC was able to discriminate between toluene and cyclohexane in favor of the aromatic compound, with an efficiency about three times higher than that of titanium-pillared clays (Ti-PILC). It is likely that the addition of iron favored the formation of new acid sites able to interact with the aromatic substrate. Iron doping induced significant TiO2 visible-light activity (wavelength > 400 nm) with only minor negative effects on its performance under UV-light irradiation (wavelength > 290 nm).

Keywords: alkyl aromatics oxidation, heterogeneous photocatalysis, iron doping, pillared clays

Procedia PDF Downloads 442
1066 Sexting Phenomenon in Educational Settings: A Data Mining Approach

Authors: Koutsopoulou Ioanna, Gkintoni Evgenia, Halkiopoulos Constantinos, Antonopoulou Hera

Abstract:

Recent advances in information and communication technology (ICT) and the ever-increasing use of technological equipment amongst adolescents and young adults, along with unattended access to the internet and social media and uncontrolled use of smartphones and PCs, have caused social problems like sexting to emerge. The main purpose of the present article is, first, to present an analytic theoretical framework of sexting as a recent social phenomenon based on studies conducted over the last decade or so, and, second, to investigate the sexting perceptions of Greek students and other social network users, to record how often social media users exchange sexual messages, and to trace demographic predictor variables. Data from 1,000 students were collected and analyzed, and all statistical analysis was performed with the software package WEKA. The results indicate, among other things, that data mining methods are an important tool for drawing conclusions that could affect decision and policy making, especially in the field of educational psychology and related social topics. To sum up, sexting carries many risks for adolescent and young adult students in Greece and needs to be better addressed by stakeholders as well as society in general. Furthermore, policy makers, legislators and authorities will have to take action to protect minors. Prevention strategies based on Greek cultural specificities are proposed. This social problem has raised concerns in recent years and will most likely raise concerns in global communities in the future.

Keywords: educational ethics, sexting, Greek sexters, sex education, data mining

Procedia PDF Downloads 180
1065 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake

Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel

Abstract:

Kathmandu Valley, home to the capital city of Nepal, houses numerous historical monuments and religious structures, some dating from as early as the 4th century A.D. The valley alone contains seven UNESCO World Heritage Sites, including various public squares and religious sanctums, which are often regarded as living heritage by historians and archaeological explorers. On April 25, 2015, the capital city and nearby locations were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by its strongest aftershock of Mw 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings on the causes of the failures and collapses. Field reconnaissance was carried out immediately after the main shock and the aftershock at major heritage sites: UNESCO World Heritage Sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite the catastrophe, a significant number of heritage structures remained standing and performed very well during the earthquake. Preliminary reports from the archaeological department suggest that 721 such structures were severely affected, of which 444 were within the valley, including 76 structures that completely collapsed. This study presents recorded accelerograms and the geology of Kathmandu Valley. The structural typology and architecture of the heritage structures in Kathmandu Valley are briefly described. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed in this paper. It was observed that the performance of heritage structures was influenced by multiple factors, such as structural and architectural typology, configuration and structural deficiencies, local site effects and ground motion characteristics, age and maintenance level, and material quality. Most of these heritage structures are of masonry type, using bricks with earth mortar as the bonding agent. The walls' resistance is mainly compressive, capable of withstanding vertical static gravitational loads but not horizontal dynamic seismic loads. There was no definitive pattern of damage to heritage structures, as most of them behaved as composite structures. Some structures were extensively damaged in some locations, while structures with similar configurations at nearby locations had little or no damage. Among the major heritage structures, dome, pagoda (2-, 3- or 5-tiered temples) and shikhara structures were studied using similar variables. Comparing the varying degrees of damage, shikhara structures were found to be the most vulnerable, whereas dome structures were the most stable, followed by pagoda structures. The seismic performance of the masonry-timber and stone masonry structures was slightly better than that of the masonry structures. Regular maintenance and periodic seismic retrofitting seem to have played a pivotal role in strengthening the seismic performance of the structures. The study also recommends key measures to strengthen the seismic performance of such structures, based on structural analysis, building material behaviour and retrofitting details. The results also recognise the importance of documenting traditional knowledge and transforming it into modern technology.

Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building

Procedia PDF Downloads 143
1064 Selective Oxidation of 6Mn-2Si Advanced High Strength Steels during Intercritical Annealing Treatment

Authors: Maedeh Pourmajidian, Joseph R. McDermid

Abstract:

Advanced High Strength Steels are revolutionizing both the steel and automotive industries due to their high specific strength and ability to absorb energy during crash events. This allows manufacturers to design vehicles with significantly increased fuel efficiency without compromising passenger safety. To maintain the structural integrity of the fabricated parts, they must be protected from corrosion damage through the continuous hot-dip galvanizing process, which is challenging due to the selective oxidation of Mn and Si on the surface of these AHSSs. The effects of process atmosphere oxygen partial pressure and of small additions of Sn on the selective oxidation of a medium-Mn C-6Mn-2Si advanced high strength steel were investigated. Intercritical annealing heat treatments were carried out at 690˚C in an N2-5%H2 process atmosphere under dew points ranging from –50˚C to +5˚C. Surface oxide chemistries, morphologies, and thicknesses were determined at a variety of length scales by several techniques, including SEM, TEM+EELS, and XPS. TEM observations of the sample cross-sections revealed the transition to internal oxidation at the +5˚C dew point. EELS results suggested that the internal oxide network was composed of a multi-layer oxide structure whose chemistry varies from the oxide core towards the outer part. The combined effect of employing a known surface-active element, as a function of process atmosphere, on surface structure development and the possible impact on the reactive wetting of the steel substrates by the continuous galvanizing zinc bath will be discussed.

Keywords: 3G AHSS, hot-dip galvanizing, oxygen partial pressure, selective oxidation

Procedia PDF Downloads 393
1063 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks

Authors: Lei Zhu, Nan Li

Abstract:

Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches for the prediction and compensation of springback in cold stamping involve running Finite Element (FE) simulations and conducting experiments, which require forming-process expertise and can be time-consuming and expensive for the design of cold stamping tools. Machine learning technologies have been proven and successfully applied in learning complex system behaviours using representative samples. These technologies exhibit promising potential to be used as supporting design tools for metal forming technologies. This study, for the first time, presents a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created from the U-shape cold bending geometries and the corresponding FE simulation results, and is then used to train the CNN surrogate model. The results show that the surrogate model can achieve near-indistinguishable full-field predictions in real time when compared with the FE simulation results. The application of CNNs to efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
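
A stripped-down version of such a CNN surrogate is sketched below: a small fully convolutional network that maps a discretised geometry field to a springback field. The layer sizes, grid resolution, and random training targets are assumptions for illustration only, not the authors' trained architecture or FE data.

```python
import torch
import torch.nn as nn

class SpringbackCNN(nn.Module):
    """Toy fully convolutional surrogate: geometry field in, springback field out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# toy dataset: 64 geometry fields on a 32x32 grid with random stand-ins for FE springback targets
geometry = torch.rand(64, 1, 32, 32)
springback = torch.rand(64, 1, 32, 32)

model = SpringbackCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                       # a real surrogate would train far longer on FE results
    optimiser.zero_grad()
    loss = loss_fn(model(geometry), springback)
    loss.backward()
    optimiser.step()
print("final MSE:", float(loss))
```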

Keywords: springback, cold stamping, convolutional neural networks, machine learning

Procedia PDF Downloads 139
1062 Automated Distribution System Management: Substation Remote Diagnostic and Operation Solution for Obafemi Awolowo University

Authors: Aderonke Oluseun Akinwumi, Olusola A. Komolaf

Abstract:

This paper gives information about the wide array of challenges facing both electric utilities and consumers in the distribution systems of developing countries, using Obafemi Awolowo University, Ile-Ife, Nigeria as a case study. It also proffers a cost-effective solution through remote monitoring, diagnostics and operation of distribution networks without compromising system reliability. As utilities move from manned and unintelligent networks to completely unmanned smart grids, switching activities at substations and feeders will be managed and controlled remotely by dedicated systems; hence this design. The Substation Remote Diagnostic and Operation Solution (sRDOs) would remotely monitor the load on Medium Voltage (MV) and Low Voltage (LV) feeders as well as distribution transformers, and would allow the utility to disconnect non-paying customers with no extra resource deployment and without interrupting supply to paying customers. The implementation of this design improved the lifetime of key distribution infrastructure by automatically isolating feeders during overload conditions and, more importantly, erring consumers. This increased the ratio of revenue generated from electricity bills to total network load.

Keywords: electric utility, consumers, remote monitoring, diagnostic, system reliability, manned and unintelligent networks, unmanned smart grids, switching activities, medium voltage, low voltage, distribution transformer

Procedia PDF Downloads 123
1061 Estimation of Twist Loss in the Weft Yarn during Air-Jet Weft Insertion

Authors: Muhammad Umair, Yasir Nawab, Khubab Shaker, Muhammad Maqsood, Adeel Zulfiqar, Danish Mahmood Baitab

Abstract:

Fabric is a flexible woven material consisting of a network of natural or artificial fibers, often referred to as thread or yarn. Today fabrics are produced by weaving, braiding, knitting, tufting and non-woven processes. Weaving is a method of fabric production in which warp and weft yarns are interlaced perpendicular to each other. There is an infinite number of ways to interlace warp and weft yarn, and each produces a different fabric structure. The yarns parallel to the machine direction are called warp yarns, and the yarns perpendicular to the machine direction are called weft or filling yarns. Air-jet weaving is a modern method of weft insertion, and the air-jet loom is considered a high-speed loom. Twist loss during air-jet weft insertion affects the yarn strength. The aim of this study was to investigate the change in twist of the weft yarn during air-jet weft insertion. A total of eight samples were produced using 1/1 plain and 3/1 twill weave designs at two fabric widths with the same loom settings. Two different types of yarn, cotton and a PC blend, were used. The effects of material type, weave design and fabric width on the twist change of the weft yarn were measured and discussed. The twist of the inserted weft yarn was measured for the different yarn types and weave designs and compared with the twist of the yarn before insertion, from which the twist loss was calculated. Wider fabric leads to higher twist loss in the yarn.
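
Twist loss as discussed here is conventionally expressed as the percentage drop in twist per inch (TPI) between the supply yarn and the weft yarn extracted after insertion; a minimal sketch of that calculation, with made-up TPI readings, is given below.

```python
def twist_loss_percent(tpi_before, tpi_after):
    """Percentage twist loss from TPI measured before and after weft insertion."""
    return 100.0 * (tpi_before - tpi_after) / tpi_before

# made-up readings for two fabric widths woven from the same weft yarn
for width_cm, tpi_after in ((160, 17.1), (220, 16.4)):
    loss = twist_loss_percent(tpi_before=18.0, tpi_after=tpi_after)
    print(f"width {width_cm} cm: twist loss {loss:.1f}%")   # wider fabric -> higher twist loss
```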

Keywords: air jet loom, twist per inch, twist loss, weft yarn

Procedia PDF Downloads 396
1060 Neuroinflammation in Late-Life Depression: The Role of Glial Cells

Authors: Chaomeng Liu, Li Li, Xiao Wang, Li Ren, Qinge Zhang

Abstract:

Late-life depression (LLD) is a prevalent mental disorder among the elderly, frequently accompanied by significant cognitive decline, and has emerged as a worldwide public health concern. Microglia, astrocytes, and peripheral immune cells play pivotal roles in regulating inflammatory responses within the central nervous system (CNS) across diverse cerebral disorders. This review commences with the clinical research findings and accentuates the recent advancements pertaining to microglia and astrocytes in the neuroinflammation process of LLD. The reciprocal communication network between the CNS and immune system is of paramount importance in the pathogenesis of depression and cognitive decline. Stress-induced downregulation of tight and gap junction proteins in the brain results in increased blood-brain barrier permeability and impaired astrocyte function. Concurrently, activated microglia release inflammatory mediators, initiating the kynurenine metabolic pathway and exacerbating the quinolinic acid/kynurenic acid imbalance. Moreover, the balance between Th17 and Treg cells is implicated in the preservation of immune homeostasis within the cerebral milieu of individuals suffering from LLD. The ultimate objective of this review is to present future strategies for the management and treatment of LLD, informed by the most recent advancements in research, with the aim of averting or postponing the onset of AD.

Keywords: neuroinflammation, late-life depression, microglia, astrocytes, central nervous system, blood-brain barrier, kynurenine pathway

Procedia PDF Downloads 33
1059 Downscaling Seasonal Sea Surface Temperature Forecasts over the Mediterranean Sea Using Deep Learning

Authors: Redouane Larbi Boufeniza, Jing-Jia Luo

Abstract:

This study assesses the suitability of deep learning (DL) for downscaling sea surface temperature (SST) over the Mediterranean Sea in the context of seasonal forecasting. We design a set of experiments that compare different DL configurations and deploy the best-performing architecture to downscale one-month lead forecasts of June–September (JJAS) SST from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0 (NUIST-CFS1.0) for the period 1982–2020. We also introduce predictors over a larger area to include information about the main large-scale circulations that drive SST over the Mediterranean Sea region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed SST extreme and spell indicator indices. The results show that the convolutional neural network (CNN)-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme SST spatial patterns. In addition, the CNN-based downscaling yields a much more accurate forecast of extreme SST and spell indicators and reduces the substantial biases exhibited by the raw model predictions. Moreover, the CNN-based downscaling yields better skill scores than the raw model forecasts over most of the Mediterranean Sea. These results demonstrate the potential usefulness of CNNs for downscaling seasonal SST predictions over the Mediterranean Sea, particularly in providing improved forecast products.
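
As a hedged illustration of what a CNN-based SST downscaler might look like, the sketch below maps a coarse forecast field to a four-times finer grid with a sub-pixel convolution layer; the framework (PyTorch), layer sizes and grid dimensions are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (assumptions: PyTorch, coarse NUIST-CFS1.0 SST fields as input,
# higher-resolution observed SST as target; all layer sizes are illustrative).
import torch
import torch.nn as nn

class SSTDownscaler(nn.Module):
    """Simple CNN that maps coarse SST fields to a finer grid (4x upscaling)."""
    def __init__(self, upscale: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Sub-pixel convolution expands each coarse cell into an upscale x upscale patch.
        self.upsample = nn.Sequential(
            nn.Conv2d(32, upscale * upscale, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),
        )

    def forward(self, x):          # x: (batch, 1, lat_coarse, lon_coarse)
        return self.upsample(self.features(x))

model = SSTDownscaler()
coarse = torch.randn(8, 1, 24, 48)                   # dummy one-month-lead JJAS fields
fine = model(coarse)                                 # -> (8, 1, 96, 192)
loss = nn.MSELoss()(fine, torch.randn_like(fine))    # target would be observed SST
```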

Keywords: Mediterranean Sea, sea surface temperature, seasonal forecasting, downscaling, deep learning

Procedia PDF Downloads 70
1058 Implementation of Conceptual Real-Time Embedded Functional Design via Drive-By-Wire ECU Development

Authors: Ananchai Ukaew, Choopong Chauypen

Abstract:

Design concepts of a real-time embedded system can be realized initially by introducing novel design approaches. In this work, a model-based design approach and in-the-loop testing were employed early in the conceptual and preliminary phase to formulate design requirements and perform quick real-time verification. The design and analysis methodology included simulation analysis, model-based testing, and in-the-loop testing. The design of a conceptual drive-by-wire (DBW) algorithm for an electronic control unit (ECU) is presented to demonstrate the conceptual design process, analysis, and functionality evaluation. The DBW ECU functions can be implemented in the vehicle system to improve the drivability of an electric vehicle (EV) conversion. However, within a new development process, conceptual ECU functions and parameters need to be evaluated. As a result, a testing system was employed to support the evaluation of the conceptual DBW ECU functions. For the current setup, the system consisted of actual DBW ECU hardware, electric vehicle models, and the controller area network (CAN) protocol. The vehicle models and the CAN bus interface were both implemented as real-time applications in which ECU and CAN protocol functionality were verified against the design requirements. The proposed system could potentially support rapid real-time analysis of design parameters for conceptual system or software algorithm development.
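
The sketch below illustrates, under stated assumptions, the kind of in-the-loop exchange described above: a pedal-position model publishing frames to a DBW ECU over CAN, using the python-can library and its built-in virtual bus. The arbitration IDs, payload scaling and channel name are hypothetical and are not the authors' setup.

```python
# Hypothetical sketch of an in-the-loop exchange between a pedal-position model
# and a DBW ECU over CAN, using python-can's "virtual" bus backend.
# Arbitration IDs and payload layout are illustrative only.
import can

PEDAL_MSG_ID = 0x120      # assumed ID for pedal-position frames
THROTTLE_MSG_ID = 0x220   # assumed ID for the ECU's throttle command reply

bus = can.Bus(interface="virtual", channel="dbw_test")

def send_pedal_position(percent: float) -> None:
    """Encode a pedal position (0-100 %) into a 2-byte little-endian payload."""
    raw = int(percent * 655.35)                       # scale 0..100 % to 0..65535
    frame = can.Message(arbitration_id=PEDAL_MSG_ID,
                        data=raw.to_bytes(2, "little"),
                        is_extended_id=False)
    bus.send(frame)

send_pedal_position(42.0)
reply = bus.recv(timeout=0.01)            # an ECU model would answer on THROTTLE_MSG_ID
if reply is not None and reply.arbitration_id == THROTTLE_MSG_ID:
    throttle = int.from_bytes(reply.data[:2], "little") / 655.35
    print(f"Throttle command: {throttle:.1f} %")
```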

Keywords: drive-by-wire ECU, in-the-loop testing, model-based design, real-time embedded system

Procedia PDF Downloads 344
1057 Benefits of Environmental Aids to Chronobiology Management and Its Impact on Depressive Mood in an Operational Setting

Authors: M. Trousselard, D. Steiler, C. Drogou, P. van-Beers, G. Lamour, S. N. Crosnier, O. Bouilland, P. Dubost, M. Chennaoui, D. Léger

Abstract:

According to published data, undersea navigation for long periods (nuclear-powered ballistic missile submarine, SSBN) constitutes an extreme environment in which crews are subjected to multiple stresses, including the absence of natural light, illuminance below 1,000 lux, and watch schedules that do not respect natural chronobiological rhythms, for a period of 60-80 days. These stresses seem clearly detrimental to the submariners’ sleep, with consequences for their affective (seasonal affective disorder-like) and cognitive functioning. In the long term, abundant publications document the consequences of sleep disruption for the occurrence of organic cardiovascular, metabolic, immunological or malignant diseases. It therefore seems essential to propose countermeasures for the duration of the patrol in order to reduce the negative physiological effects on the sleep and mood of submariners. Light therapy, the preferred treatment for dysfunctions of the internal biological clock and the resulting seasonal depression, cannot be used without data to inform knowledge of submariners’ chronobiology (melatonin secretion curve) during patrols, given the unusual characteristics of their working environment. These data are not available in the literature. The aim of this project was to assess, in the course of two studies, the benefits of two environmental techniques for managing chronobiological stress: techniques for optimizing potential (TOP; study 1), an existing programme to help in the psychophysiological regulation of stress and sleep in the armed forces, and dawn and dusk simulators (DDS; study 2). For each experiment, psychological, physiological (sleep) or biological (melatonin secretion) data were collected on D20 and D50 of the patrol. In the first experiment, we studied sleep and depressive distress in 19 submariners in an operational setting on board an SSBN during a first patrol, and assessed the impact of TOP on the quality of sleep and depressive distress in these same submariners over the course of a second patrol. The submariners were trained in TOP between the two patrols over a 2-month period, at a rate of 1 h of training per week, and were assigned daily informal exercises. Results show moderate disruptions in sleep pattern and duration associated with the intensity of depressive distress. The use of TOP during the following patrol improved sleep and depressive mood only in submariners who regularly practiced the techniques. In light of these limited benefits, we assessed, in a second experiment, the benefits of DDS on chronobiology (daily secretion of melatonin) and depressive distress. Ninety submariners were randomly allocated to two groups, group 1 using DDS daily and group 2 constituting the control group. Although the placebo effect was not controlled, results showed a beneficial effect on chronobiology and depressive mood for submariners with a morning chronotype. Conclusions: These findings demonstrate the difficulty of practicing psychophysiological management tools in real life. They raise the question of the subjects’ autonomy with respect to using aids that involve regular practice. It seems important to study autonomy in future studies, as a cognitive resource resulting from the interaction between internal positive resources and “coping” resources, to gain a better understanding of compliance problems.

Keywords: chronobiology, light therapy, seasonal affective disorder, sleep, stress, stress management, submarine

Procedia PDF Downloads 453
1056 In-Farm Wood Gasification Energy Micro-Generation System in Brazil: A Monte Carlo Viability Simulation

Authors: Erich Gomes Schaitza, Antônio Francisco Savi, Glaucia Aparecida Prates

Abstract:

The penetration of renewable energy into the electricity supply in Brazil is high, one of the highest in the world. Centralized hydroelectric generation is the main source of energy, followed by biomass and wind. Surprisingly, mini- and micro-generation are negligible, with fewer than 2,000 connections to the national grid. In 2015, a new regulatory framework was put in place to change this situation. In the agricultural sector, the framework was complemented by the offer of low-interest loans for in-farm renewable generation. Brazil proposed to more than double its area of planted forests as part of its Intended Nationally Determined Contribution (INDC) to the U.N. Framework Convention on Climate Change (UNFCCC). This is an ambitious target that will be achieved only if forests are attractive to farmers. Therefore, this paper analyses whether planting forests for in-farm energy generation with a woodchip gasifier is economically viable for micro-generation under the new framework, and whether it could be an economic driver for forest plantation. At first, a static case was analyzed with data from Eucalyptus plantations on five farms. Then, a broader analysis was developed with the use of the Monte Carlo technique. Planting short-rotation forests to generate energy could be a viable alternative, and the low-interest loans contribute to its attractiveness. Some barriers to such systems remain, such as the absence of a mature market for small-scale equipment and of a reference network of good practices and examples.
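
As a hedged sketch of the Monte Carlo viability logic described above, the example below samples uncertain cost, yield and price inputs and estimates the probability of a positive net present value (NPV); every figure in it is a made-up placeholder rather than the farm data used in the study.

```python
# Illustrative Monte Carlo viability sketch (assumptions: all cost, price and
# yield figures below are made-up placeholders, not the paper's farm data).
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000
years = 15                      # assumed project horizon
discount_rate = 0.06            # assumed subsidised-loan financing rate

capex = rng.normal(60_000, 8_000, n_runs)              # gasifier + grid connection (BRL)
annual_energy_mwh = rng.normal(90, 12, n_runs)         # in-farm generation per year
energy_price = rng.normal(250, 40, (n_runs, years))    # BRL/MWh, varies per year
opex = rng.normal(9_000, 1_500, (n_runs, years))       # wood production + O&M per year

# Discounted cash flows per run, then NPV distribution across all runs.
cash_flows = annual_energy_mwh[:, None] * energy_price - opex
discount = (1 + discount_rate) ** np.arange(1, years + 1)
npv = -capex + (cash_flows / discount).sum(axis=1)

print(f"Mean NPV: {npv.mean():,.0f} BRL")
print(f"P(NPV > 0): {(npv > 0).mean():.1%}")
```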

Keywords: biomass, distributed generation, small-scale, Monte Carlo

Procedia PDF Downloads 281
1055 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantification of the risks associated with the development of complications from chronic kidney disease (CKD), through accurate survival models, can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, a >25% decrease in Estimated Glomerular Filtration Rate (eGFR), and Renal Replacement Therapy (RRT). Models were evaluated and compared with a Random Survival Forest (RSF) based on the concordance index (C-index) metric. A total of 2,143 patients were included, and two models were developed for each outcome. The Deep Neural Network (DNN) model reached a C-index of 0.9867 for CKD stage progression, 0.9905 for the reduction in eGFR, and 0.9867 for RRT. The RSF model reached a C-index of 0.6650 for CKD stage progression, 0.6759 for decreased eGFR, and 0.8926 for RRT. DNN models applied in a survival analysis context, with longitudinal covariates considered at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and in the analysis of covariates, especially those that vary over time.
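
For reference, the sketch below shows a minimal implementation of Harrell's concordance index (C-index), the metric used to compare the DNN and RSF models; the toy follow-up times, event indicators and predicted risks are illustrative only.

```python
# Minimal sketch of Harrell's concordance index (C-index) with toy data.
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable patient pairs whose predicted risks are correctly ordered."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                      # only pairs anchored on an observed event count
        for j in range(len(time)):
            if time[j] > time[i]:         # patient j outlived patient i -> comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1       # higher predicted risk failed earlier: concordant
                elif risk[i] == risk[j]:
                    concordant += 0.5     # ties get half credit
    return concordant / comparable

# Toy follow-up times (months), event indicators (1 = CKD progression) and predicted risks.
t = [12, 30, 24, 60, 45]
e = [1, 0, 1, 0, 1]
r = [0.9, 0.2, 0.7, 0.1, 0.4]
print(f"C-index = {concordance_index(t, e, r):.3f}")
```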

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 130