Search results for: single event upset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5627


4007 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning to build an early-prediction classifier for acute delirium and thereby improve the quality of medical care for intensive care patients. Background: Delirium is a common, acute, and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and to increase medical costs and mortality. In 2021, the incidence of delirium in the medical intensive care unit was as high as 59.78%, which indirectly prolonged the average length of stay by 8.28 days; the associated mortality rate over the past three years was about 2.22%. We therefore set out to build a delirium prediction classifier through big-data analysis and machine learning so that delirium can be detected early. Method: This retrospective study extracted delirium-related characteristic factors of intensive care unit patients from an artificial intelligence big-data database for machine learning. The study included patients over 20 years of age admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS score below 4, those admitted to the ICU for less than 24 hours, and those without a CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within the first 30 days of hospitalization, was treated as an event, and the data accumulated from ICU admission up to the prediction time point were used to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model: age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs.
After feature data cleaning and processing, with missing values imputed by the KNN method, a total of 54,595 case events were available for machine learning analysis. Events from May 1 to November 30, 2022, served as the model development data, of which 80% formed the training set and 20% the internal validation set; ICU events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then carried out, and the model was retrained with adjusted parameters. Results: Four machine learning models, XGBoost, Random Forest, Logistic Regression, and Decision Tree, were analyzed and compared. Random Forest achieved the highest average internal validation performance (AUC = 0.86); on external validation, Random Forest and XGBoost tied for the highest AUC of 0.86; and Random Forest also achieved the highest average cross-validation accuracy (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist in the real-time assessment of ICU patients, so clinical staff cannot draw on objective, continuous monitoring data to more accurately identify and predict the occurrence of delirium. It is hoped that predictive models developed through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, together with the PADIS delirium care measures, provide individualized non-drug interventions that maintain patient safety and improve the quality of care.
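The abstract reports model quality as AUC, the area under the ROC curve. As an illustrative, stand-alone sketch (not the authors' code, and with made-up labels and risk scores), AUC can be computed directly from predicted scores via the rank-based Mann-Whitney formulation:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive case is ranked above a random
    negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = delirium occurred within the next 8 hours
labels = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.5, 0.4, 0.6, 0.8]
auc = auc_score(labels, scores)
```

An AUC of 0.86, as reported for Random Forest here, means a randomly chosen delirium case receives a higher predicted risk than a randomly chosen non-case 86% of the time.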

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 51
4006 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies

Authors: Philipp Galkin

Abstract:

Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc. would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking. There may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two limitations of energy policy research. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China. However, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in the KEPD could be applied to any set of energy policies. It also provides a convenient tool that facilitates understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document.
This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors.
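To make the "single entry per policy measure" idea concrete, here is a minimal sketch of what such a record might look like; the field names, types, and example values are our assumptions for illustration, not the KEPD's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PolicyMeasure:
    """One database entry: a single policy measure extracted from a legal
    document, carrying both qualitative and quantitative attributes."""
    document_id: str                 # source legal document
    measure_text: str                # the measure as stated in the document
    sector: str                      # e.g. "coal"
    jurisdiction: str                # geographical extent, e.g. "national"
    start_year: int
    end_year: Optional[int] = None   # None = still in force
    targets: dict = field(default_factory=dict)  # quantitative components

# Hypothetical entry for a national coal regulation
measure = PolicyMeasure(
    document_id="CN-COAL-2013-01",
    measure_text="Cap the coal share of primary energy consumption",
    sector="coal", jurisdiction="national", start_year=2013,
    targets={"coal_share_max_pct": 65},
)
```

Structuring each measure this way is what enables the statistical analyses the abstract mentions, such as comparing documents by counting entries per attribute value.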

Keywords: China, energy policy, policy analysis, policy database

Procedia PDF Downloads 307
4005 Dy3+ Ions Doped Single and Mixed Alkali Fluoro Tungstunate Tellurite Glasses for Laser and White LED Applications

Authors: Allam Srinivasa Rao, Ch. Annapurna Devi, G. Vijaya Prakash

Abstract:

A new series of white-light-emitting, 1 mol% Dy3+-doped single-alkali and mixed-alkali fluoro tungstunate tellurite glasses has been prepared using the melt quenching technique, and their spectroscopic behaviour was investigated through XRD, optical absorption, photoluminescence, and lifetime measurements. The bonding parameter studies reveal the ionic nature of the Dy-O bond in these glasses. From the absorption spectra, the Judd-Ofelt (J-O) intensity parameters have been determined, which are used to explore the nature of bonding and the symmetry orientation of the Dy-ligand field environment. The evaluated J-O parameters follow the same trend (Ω_4 > Ω_2 > Ω_6) for all the glasses. The photoluminescence spectra of all the glasses exhibit two intense peaks in the blue and yellow regions, corresponding to the transitions 4F9/2→6H15/2 (483 nm) and 4F9/2→6H13/2 (575 nm), respectively. From the photoluminescence spectra, it is observed that the luminescence intensity is maximum for the potassium-containing fluoro tungstunate tellurite glass doped with Dy3+ (TeWK:1Dy). The J-O intensity parameters have been used to determine the various radiative properties for the different emission transitions from the 4F9/2 fluorescent level. The highest emission cross-section and branching ratio values, observed for the 4F9/2→6H15/2 and 4F9/2→6H13/2 transitions, suggest possible laser action in the visible region from these glasses. Using the experimental lifetimes (τ_exp) measured from the decay spectral features and the radiative lifetimes (τ_R), the quantum efficiencies (η) of all the glasses have been evaluated. Among all the glasses, the potassium-containing fluoro tungstunate tellurite glass (TeWK:1Dy) has the highest quantum efficiency (94.6%). The CIE colour chromaticity coordinates (x, y) and (u, v), the correlated colour temperature (CCT), and the Y/B ratio were also estimated from the photoluminescence spectra for the different glass compositions.
The (x, y) and (u, v) chromaticity colour coordinates fall within the white-light region, and the emitted white light can be tuned by varying the composition of the glass. From all these studies, we suggest that the 1 mol% Dy3+-doped TeWK glass is the most suitable for lasing and white-LED applications.
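The quantum efficiency quoted above is the ratio of the experimental to the radiative lifetime, η = τ_exp/τ_R. A one-line sketch of that calculation (the lifetime values below are hypothetical, chosen only to reproduce the reported 94.6%, and are not taken from the paper):

```python
def quantum_efficiency(tau_exp, tau_rad):
    """Quantum efficiency (%) of an emitting level: eta = tau_exp / tau_rad.
    Both lifetimes must be in the same unit (e.g. microseconds)."""
    return 100.0 * tau_exp / tau_rad

# Hypothetical lifetimes in microseconds, for illustration only
eta = quantum_efficiency(tau_exp=473.0, tau_rad=500.0)  # 94.6 %
```

A η close to 100% means nearly all decay from the 4F9/2 level is radiative, which is why the highest-η glass is flagged as the best laser and LED candidate.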

Keywords: dysprosium, Judd-Ofelt parameters, photo luminescence, tellurite glasses

Procedia PDF Downloads 213
4004 Global Optimization: The Alienor Method Mixed with Piyavskii-Shubert Technique

Authors: Guettal Djaouida, Ziadi Abdelkader

Abstract:

In this paper, we study a coupling of the Alienor method with the algorithm of Piyavskii-Shubert. Classical multidimensional global optimization methods involve great difficulties when implemented in high dimensions. The Alienor method allows one to transform a multivariable function into a function of a single variable, for which an efficient and rapid method can be used to calculate the global optimum. This simplification is based on the use of a reducing transformation called Alienor.
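The Alienor reduction maps the multivariable problem onto a single variable, after which a one-dimensional global method is applied. As a self-contained sketch of that second ingredient only (not the authors' coupled implementation), here is the Piyavskii-Shubert algorithm, which assumes a known Lipschitz constant for the objective:

```python
def piyavskii_shubert(f, a, b, lipschitz, tol=1e-3, max_iter=500):
    """Global minimization of f on [a, b] via the Piyavskii-Shubert
    saw-tooth lower-bounding scheme (requires a Lipschitz constant)."""
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(max_iter):
        # For every adjacent pair of samples, the cone lower bounds
        # intersect at an apex; find the interval with the lowest apex.
        best_bound, best_x, best_i = float("inf"), None, None
        for i in range(len(xs) - 1):
            x0, x1, f0, f1 = xs[i], xs[i + 1], fs[i], fs[i + 1]
            apex_x = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * lipschitz)
            apex_f = 0.5 * (f0 + f1) - 0.5 * lipschitz * (x1 - x0)
            if apex_f < best_bound:
                best_bound, best_x, best_i = apex_f, apex_x, i
        if min(fs) - best_bound < tol:   # certified near-optimal
            break
        xs.insert(best_i + 1, best_x)    # refine the most promising interval
        fs.insert(best_i + 1, f(best_x))
    i = fs.index(min(fs))
    return xs[i], fs[i]

# Toy objective: global minimum at x = 2 (Lipschitz constant 6 on [0, 5])
x_best, f_best = piyavskii_shubert(lambda x: (x - 2.0) ** 2, 0.0, 5.0,
                                   lipschitz=6.0)
```

In the coupled scheme, `f` would be the composition of the multivariable objective with the Alienor α-dense curve, so this one-dimensional search stands in for the full multidimensional one.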

Keywords: global optimization, reducing transformation, α-dense curves, Alienor method, Piyavskii-Shubert algorithm

Procedia PDF Downloads 487
4003 Human Behaviour During an Earthquake: Descriptive Analysis on Indoor Video Recordings

Authors: Mazlum Çelik, Burcu Gürkan Ercan, Ahmet Ayaz, Hilal Yakut İpekoğlu, Furkan Baltacı, Mustafa Kurtoğlu, Bilge Kalkavan, Sinem Küçükyılmaz, Hikmet Çağrı Yardımcı, Şeyma Sevgican, Cemile Gökçe Elkovan, Bilal Çayır, Mehmet Emin Düzcan

Abstract:

The earthquake research literature generally examines emotional, cognitive, and behavioral responses after an earthquake. Studies of behavioral responses reveal that, after an earthquake, people do not always flee in panic or act according to the stereotype of irrational and anti-social behavior; they sometimes react rationally and adaptively. However, research dealing with human behavior at the moment of the earthquake itself is rare, which makes these behavior patterns worth particular attention. Accordingly, this study examines human behavior indoors as earthquake intensity rises. Turkey lies in an earthquake zone and has experienced devastating earthquakes, such as the 1999 İstanbul earthquake with a magnitude of 7.4 and the 2020 Elazığ earthquake with a magnitude of 6.8. Most recently, the 2023 Kahramanmaraş earthquakes, with magnitudes of 7.7 and 7.6, affected 11 provinces. In addition, experts warn that a devastating earthquake is expected in Istanbul. For this reason, understanding human behavior is essential if disaster risk management and pre-disaster preparedness are to be effective and efficient and if realistic measures are to be taken to protect human life. In this study, which is part of a project supported by The Scientific and Technological Research Council of Turkey (TUBITAK), indoor recordings made during the earthquakes in Elazığ on January 24, 2020, and in İzmir on October 30, 2020, are examined, and people's behavior during the earthquakes is analyzed.
Video recordings were obtained from the YouTube archives of the İzmir and Elazığ Disaster and Emergency Management Presidency (AFAD) directorates and the metropolitan municipalities. The researchers created an observation form, in line with the relevant literature, to classify people's behavior during an earthquake. Using this form, the video analysis classifies behavior during the earthquake as heading toward the door, remaining still, taking protective action, turning to other people, or engaging in "other" behaviors outside these categories. A total of 60 videos from Elazığ and İzmir were analyzed. Descriptive statistics were computed with the SPSS 23.0 package. It was found that as the severity of the earthquake increased, protective action was preferred over remaining still in İzmir, unlike in Elazığ. In addition, with increasing earthquake intensity, women tended to take more protective action while men headed toward the door. Among young people, heading toward the door and taking protective action increased, while turning to other people decreased. These findings reveal that, contrary to the literature, human behavior during earthquakes cannot be reduced to a single pattern such as drop-cover-hold-on. The results show that it is necessary to understand individuals' behavior during an earthquake and to develop practical policy proposals for earthquake response that take sociocultural, geographical, and demographic variables into account.

Keywords: descriptive analysis, earthquake, human behaviour, disaster policy

Procedia PDF Downloads 78
4002 One or More Building Information Modeling Managers in France: The Confusion of the Kind

Authors: S. Blanchard, D. Beladjine, K. Beddiar

Abstract:

Since 2015, the arrival of BIM in the French building sector has turned the professional world upside down. Not only construction practices but also the uses, and the people themselves, have undergone important changes. The new collaborative mode generated by BIM and the digital model has challenged the supremacy of some construction actors, because the process involves working together while taking the needs of other contributors into account. New BIM tools have emerged, and actors in the act of building must take ownership of them. It is in this context, under the impetus of a European directive and the French government's encouragement, that new missions and job profiles have appeared. Moreover, concurrent engineering requires that each actor can advance at the same time as the others, according to the information that reaches him and the information he has to transmit. However, the French legal system around public procurement does not yet provide for this, so a substantial evolution is needed to adapt to the methodology. The new missions generated by BIM in France require a good mastery of the tools and the process. To meet the objectives of the BIM approach, it is possible to define a typical job profile around BIM, adapted to the various sectors concerned. The multitude of job offers using the same terms for very different objectives, and the complexity of the proposed missions, motivated our approach. To reinforce exchanges with professionals and specialists, we carried out a statistical study to address this problem. Five topics are discussed around the business area: BIM in the company, the function (business), the software used, and the BIM missions practiced (39 items). About 1,400 professionals were interviewed. These people work in construction companies (micro-businesses, SMEs, and groups), engineering offices, or architectural agencies. 77% of respondents have employee status.
All participants hold qualifications in their trade, the majority at level 1. Most have less than a year of experience with BIM, but some have ten years. The results of our survey help explain why it is not possible to define a single type of BIM manager: the specificities of companies are so numerous and complex, and the missions so varied, that no single model fits the function. On the other hand, it was possible to define three main professions around BIM (manager, coordinator, and modeler) and three main missions for the BIM manager (deployment of the method, assistance to project management, and management of a project).

Keywords: BIM manager, BIM modeler, BIM coordinator, project management

Procedia PDF Downloads 153
4001 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses data cleaning as a part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers in log files reflect usage activity, i.e., end-users' clicks and the underlying user-agents' hits. As Web Usage Mining is interested in end-users' behavior, user-agents' hits are regarded as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: a server records requests interlaced in sequential order regardless of their source or type, and website resources may be set up as requestable interchangeably by end-users and user-agents. Current methods are content-centric, based on heuristics that filter relevant/irrelevant items in terms of cleaning attributes such as resource filetype extensions, resources pointed to by hyperlinks/URIs, HTTP methods, and user-agents. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web, for three reasons: resources may be set up as clickable by end-users regardless of their type, resources may be indexed by frame names without filetype extensions, and web contents are generated and canceled differently from one end-user to another. To overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes. It is insensitive to logging content and does not need extra-weblog data. The statistical property used is the structure of the logging generated by webpage requests in terms of clicks and hits.
Since a webpage consists of a single URI and several components, this structure results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters by applying an appropriate distance to the frequency matrix of the requested and referring resources. As the click-to-hit ratio is one to many, the clicks cluster is the smaller one in number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hit ratios range from 1/2 to 1/15. The optimal clustering, based on average linkage and maximum inter-cluster inertia, always results in two clusters. Evaluating the smaller cluster, regarded as the clicks cluster, in terms of confusion matrix indicators yields a true positive rate of 97%. The content-centric cleaning methods, i.e., conventional and advanced cleaning, achieved a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web designs without needing any extra-weblog data. Such an improvement in cleaning quality is likely to refine the dependent analyses.
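To illustrate the core of this approach (not the authors' implementation), here is a toy average-linkage agglomeration on one-dimensional request frequencies, stopping at two clusters and labelling the smaller one as clicks; the data, the 1-D feature, and the absolute-difference distance are synthetic simplifications of the Gower-distance frequency matrix described above:

```python
from itertools import combinations

def average_linkage(points, dist, k=2):
    """Agglomerative clustering with average linkage: repeatedly merge the
    pair of clusters with the smallest average pairwise distance until k
    clusters remain. Returns lists of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a, b in combinations(range(len(clusters)), 2):
            d = sum(dist(points[i], points[j])
                    for i in clusters[a] for j in clusters[b])
            d /= len(clusters[a]) * len(clusters[b])   # average linkage
            if best is None or d < best[0]:
                best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

# Synthetic per-resource request frequencies: two page URIs (end-user
# clicks, low frequency) and five embedded components (hits, high frequency).
freqs = [1.0, 1.2, 8.0, 9.0, 10.0, 8.5, 9.5]
two = average_linkage(freqs, dist=lambda x, y: abs(x - y), k=2)
clicks = min(two, key=len)   # the smaller cluster is taken as the clicks
```

The key idea carried over from the paper is purely structural: whichever of the two clusters is smaller in request count is declared the clicks cluster, with no content-based heuristics.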

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 163
4000 A Study on Stochastic Integral Associated with Catastrophes

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

We analyze stochastic integrals associated with a mutation process. Specifically, we describe the cell population process and derive the differential equations for the joint generating functions of the number of mutants and their integrals, together with their applications. We obtain the first-order moment structure of X(t) and Y(t) for the two-way mutation process and the second-order moments for a one-way mutation process. Finally, we obtain the limiting behaviour of the integrals in the limiting distributions of X(t) and Y(t).

Keywords: stochastic integrals, single-server queue model, catastrophes, busy period

Procedia PDF Downloads 627
3999 Role of Estrogen Receptor-alpha in Mammary Carcinoma by Single Nucleotide Polymorphisms and Molecular Docking: An In-silico Analysis

Authors: Asif Bilal, Fouzia Tanvir, Sibtain Ahmad

Abstract:

Estrogen receptor alpha, also known as estrogen receptor 1 (ESR1), is strongly implicated in the risk of mammary carcinoma. The objectives of this study were to identify non-synonymous SNPs of the estrogen receptor and their association with breast cancer, and to assess the chemotherapeutic potential of phytochemicals against it via an in-silico study design. For this purpose, different online tools were used: SIFT, PolyPhen, PolyPhen-2, fuNTRp, and SNAP2 to identify pathogenic SNPs; SNP&GO, PhD-SNP, PredictSNP, MAPP, SNAP, Meta-SNP, and PANTHER to find disease-associated SNPs; and MuPro, I-Mutant, and ConSurf to check protein stability. Post-translational modifications (PTMs) were detected with MusiteDeep, protein secondary structure with SOPMA, protein-protein interactions with STRING, and molecular docking was performed with PyRx. Seven SNPs with rsIDs rs760766066, rs779180038, rs956399300, rs773683317, rs397509428, rs755020320, and rs1131692059, producing the mutations I229T, R243C, Y246H, P336R, Q375H, R394S, and R394H, respectively, were found to be completely deleterious. The PTMs found were glycosylation 96 times, ubiquitination 30 times, acetylation once, and no hydroxylation or phosphorylation. The protein secondary structure consists of alpha helix (Hh) 28%, extended strand (Ee) 21%, beta turn (Tt) 7.89%, and random coil (Cc) 44.11%. Protein-protein interaction analysis revealed strong interactions with myeloperoxidase, xanthine dehydrogenase, carboxylesterase 1, glutathione S-transferase Mu 1, and the estrogen receptors. For molecular docking, we used Asiaticoside, Ilekudinuside, Robustoflavone, Irinoticane, Withanolides, and 9-amin0-5 as ligands extracted from phytochemicals and docked them with the protein. We found strong binding (from -8.6 to -9.7) of these phytochemical ligands at wild-type ESR1 and two mutants (I229T and R394S).
It is concluded that these SNPs found in ESR1 are involved in breast cancer and that the given phytochemicals could be highly helpful against breast cancer as chemotherapeutic agents. Further in vitro and in vivo analyses should be performed to confirm these interactions.

Keywords: breast cancer, ESR1, phytochemicals, molecular docking

Procedia PDF Downloads 53
3998 Orthogonal Basis Extreme Learning Algorithm and Function Approximation

Authors: Ying Li, Yan Li

Abstract:

A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm can determine both the network parameters and the number of hidden-layer neurons during training while providing extremely fast learning, offering a practical way to develop neural networks. Simulation results on function approximation show that the algorithm is effective and feasible, with good accuracy and adaptability.
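The abstract does not spell out OBEL itself, but the basic extreme-learning recipe it builds on, a random fixed hidden layer with output weights solved in closed form, can be sketched as follows; the network size, activation, weight ranges, and ridge term are our choices for illustration, not the paper's:

```python
import math
import random

def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with
    partial pivoting (A is a square matrix as nested lists)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(xs, ys, hidden=20, ridge=1e-4, seed=0):
    """Basic extreme learning machine for an SLFN: hidden weights are random
    and never trained; only the output weights are solved, in closed form."""
    rng = random.Random(seed)
    w = [(rng.uniform(-4, 4), rng.uniform(-4, 4)) for _ in range(hidden)]
    H = [[math.tanh(a * x + b) for a, b in w] for x in xs]
    # Regularized normal equations: (H^T H + ridge * I) beta = H^T y
    HtH = [[sum(row[i] * row[j] for row in H) + (ridge if i == j else 0.0)
            for j in range(hidden)] for i in range(hidden)]
    Hty = [sum(row[i] * y for row, y in zip(H, ys)) for i in range(hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(b_ * math.tanh(a * x + c)
                         for b_, (a, c) in zip(beta, w))

# Function approximation demo: fit sin(x) on [0, pi]
xs = [i * math.pi / 29 for i in range(30)]
ys = [math.sin(x) for x in xs]
model = elm_train(xs, ys)
mse = sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

The speed claim comes from this structure: training reduces to one linear solve, with no iterative backpropagation; OBEL's contribution, per the abstract, is additionally choosing the hidden-layer size during that training.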

Keywords: neural network, orthogonal basis extreme learning, function approximation

Procedia PDF Downloads 519
3997 The Effect of Different Parameters on a Single Invariant Lateral Displacement Distribution to Consider the Higher Modes Effect in a Displacement-Based Pushover Procedure

Authors: Mohamad Amin Amini, Mehdi Poursha

Abstract:

Nonlinear response history analysis (NL-RHA) is a robust analytical tool for estimating the seismic demands of structures responding in the inelastic range. However, because of its conceptual and numerical complications, the nonlinear static procedure (NSP) is increasingly used as a suitable tool for the seismic performance evaluation of structures. The conventional pushover analysis methods presented in various codes (FEMA 356; Eurocode-8; ATC-40) are limited to first-mode-dominated structures and cannot take the effect of higher modes into consideration. Therefore, for more than a decade, researchers have developed enhanced pushover analysis procedures to take the higher modes effect into account. The main objective of this study is to propose an enhanced invariant lateral displacement distribution that takes the higher modes effect into consideration in a displacement-based pushover analysis, whereby a set of laterally applied displacements, rather than forces, is monotonically applied to the structure. For this purpose, the effect of different parameters, such as the spectral displacement of the ground motion, the modal participation factor, and the effective modal participating mass ratio, on the lateral displacement distribution is investigated to find the best distribution. The major simplification of this procedure is that the effect of higher modes is concentrated in a single invariant lateral load distribution. Therefore, only one pushover analysis is sufficient, without any need for a modal combination rule to combine the responses. The invariant lateral displacement distribution for the pushover analysis is then calculated by combining the modal story displacements using the modal combination rules. The seismic demands resulting from the different procedures are compared to those from the more accurate nonlinear response history analysis (NL-RHA) as a benchmark solution.
Two structures of different heights, 10- and 20-story special steel moment resisting frames (MRFs), were selected and evaluated. Twenty ground motion records were used to conduct the NL-RHA. The results show that the enhanced modal lateral displacement distributions yield more accurate responses than the conventional lateral loads.
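The modal combination step described above can be sketched as follows; SRSS is used here as a representative combination rule (the study's exact rule is not stated in this abstract), and the per-mode story displacement profiles are hypothetical inputs assumed already scaled by spectral displacement and modal participation factor:

```python
import math

def combine_srss(modal_profiles):
    """Combine per-mode story displacement profiles into a single invariant
    lateral displacement distribution using the SRSS modal combination rule
    (square root of the sum of squares, story by story)."""
    n_stories = len(modal_profiles[0])
    return [math.sqrt(sum(mode[s] ** 2 for mode in modal_profiles))
            for s in range(n_stories)]

# Hypothetical story displacements (two modes, three stories)
mode1 = [0.6, 1.2, 1.6]
mode2 = [0.8, 0.5, -1.2]   # a higher mode reverses sign up the height
profile = combine_srss([mode1, mode2])
```

Because the combination is done once, before the analysis, the resulting displacement profile is invariant, so a single pushover run captures the higher-mode contributions without combining multiple pushover results afterwards.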

Keywords: displacement-based pushover, enhanced lateral load distribution, higher modes effect, nonlinear response history analysis (NL-RHA)

Procedia PDF Downloads 262
3996 Productivity Improvement in the Propeller Shaft Manufacturing Process

Authors: Won Jung

Abstract:

In automobiles, the propeller shaft transfers power from the engine to the axle via the transmission, and the slip yoke is one of its main parts. Since propeller shafts are subject to torsion and shear stress, they need to be strong enough to bear that stress. The purpose of this research is to improve the productivity of the slip yoke for automotive propeller shafts. We present how to redesign the component, which is currently manufactured as a forged single-body type. The research focused not only on reducing processing time but also on ensuring the durability of the component.

Keywords: automotive, propeller shaft, productivity, durability, slip yoke

Procedia PDF Downloads 366
3995 Developing NAND Flash-Memory SSD-Based File System Design

Authors: Jaechun No

Abstract:

This paper focuses on I/O optimizations of N-hybrid (New-Form of hybrid), which provides a hybrid file system space constructed on SSD and HDD. Although the promising potential of SSDs, such as the absence of mechanical moving overhead and high random I/O throughput, has drawn much attention from IT enterprises, their high cost/capacity ratio makes it less desirable to build a large-scale data storage subsystem composed of only SSDs. In this paper, we present N-hybrid, which attempts to integrate the strengths of SSD and HDD to offer a single, large hybrid file system space. Several experiments were conducted to verify the performance of N-hybrid.
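N-hybrid's internals are not spelled out in this abstract; purely as an illustration of the SSD/HDD trade-off it describes (high random-I/O throughput versus cost per byte), a toy placement policy might look like this, with the threshold and rules entirely our own assumptions:

```python
def place_file(size_bytes, random_io, ssd_free_bytes):
    """Toy placement policy for a hybrid SSD/HDD file system space:
    randomly accessed or small files go to the SSD while capacity allows,
    everything else to the cheaper-per-byte HDD."""
    SMALL = 1 << 20  # 1 MiB threshold (assumption for illustration)
    if (random_io or size_bytes <= SMALL) and ssd_free_bytes >= size_bytes:
        return "ssd"
    return "hdd"
```

Any real hybrid file system would make this decision with richer signals (access history, migration, wear), but the shape of the decision, routing random I/O to flash and bulk capacity to disk, is the premise the paper's experiments test.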

Keywords: SSD, data section, I/O optimizations, hybrid system

Procedia PDF Downloads 403
3994 Strong Microcapsules with Macroporous Polymer Shells

Authors: Eve S. A. Loiseau, Marion Frey, Yves Blickenstorfer, Fabian Niedermair, André R. Studart

Abstract:

Porous microcapsules have a broad range of applications that require a robust shell. We propose a new method to produce macroporous polymer capsules with controlled size, shell thickness, porosity, and mechanical properties using co-flow flow-focusing glass capillary devices. The porous structure was investigated through SEM and the permeability through confocal microscopy, and compression tests on single capsules were performed. We obtained microcapsules with tailored permeability, from open- to closed-pore structures, able to withstand loads of up to 150 g.

Keywords: microcapsules, micromechanics, porosity, polymer shells

Procedia PDF Downloads 432
3993 The Potential of M-Government towards Successful Implementation of E-Government in Saudi Arabia

Authors: Majed Ahmed Alfayad

Abstract:

Technology is now present in almost all areas and practices globally, and this has led governments around the world to adopt technology in the public sector. Electronic government has therefore been introduced as a means of automating government services. New technologies and trends appear every day, and governments need to meet citizens' requirements and expectations in order for an e-government program to succeed. This research investigates the potential of mobile government as an enhancing force for the e-government project in the Kingdom of Saudi Arabia, where mobile technology is increasingly favoured by citizens. A qualitative methodology, in particular the grounded theory approach, has been adopted in this study for data collection and analysis.

Keywords: e-government, e-participation, m-government, mobile technology

Procedia PDF Downloads 318
3992 The Implementation of Child Adoption as Legal Protection of Children

Authors: Sonny Dewi Judiasih

Abstract:

The purpose of a marriage is to achieve a happy and lasting family based on the will of God. The family has a fundamental role in society as a nuclear unit consisting of father, mother, and children, and each family wishes for children who will continue it. However, not every family is blessed with children, and a family without children will make every effort to fulfill the wish to have them. One of the ways is to adopt a child. Child adoption is usually undertaken by families without children, but it is sometimes undertaken by families who already have children. Its implementation should be based on the interests, welfare, and intellectual development of the child, and on the social responsibility of the individual in accordance with evolving traditional values as part of the national culture. That child adoption is now conducted for the welfare of the child demonstrates a change in its basic motive (value): in the past, the aim of child adoption was to fulfill the wish of the foster parents to have children in the family, whereas today its purpose is not merely the interest of the foster parents but, in particular, the interest, welfare, and future of the child. The development of society has changed perspectives within it, creating a need for new law, and the courts reflect this change, as evidenced by court orders for child adoption within a legal framework of certainty of law. The change of motive (value) in child adoption can be fully understood once society understands that the ultimate purpose of the Indonesian nation is to achieve a just and prosperous society, i.e., social welfare for all Indonesian people.

Keywords: child adoption, family law, legal protection, children

Procedia PDF Downloads 452
3991 Multi-Source Data Fusion for Urban Comprehensive Management

Authors: Bolin Hua

Abstract:

City governance involves various kinds of data, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data collected from different sources in different ways raise several issues that need to be resolved, including data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments apply statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database that includes people data, location data, object data, institution data, business data, and space data. Metadata must be referred to and read whenever an application needs to access, manipulate, or display the data. Uniform metadata management ensures the effectiveness and consistency of data throughout data exchange, data modeling, data cleansing, data loading, data storage, data analysis, data search, and data delivery.
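The fusion-with-metadata pattern described above can be sketched in a few lines: each source is registered with metadata (here just its identifier field), and records from all sources are merged into one central store keyed on a common entity id. The source names, field names, and records below are illustrative assumptions, not data from the paper.

```python
# Hypothetical two-source fusion into a uniform central database.
# A small metadata registry tells the fusion step how to read each source.
metadata = {
    "housing_db": {"id_field": "resident_id"},
    "census_db":  {"id_field": "person_id"},
}

def fuse(central, records, source):
    """Merge one source's records into the central store using its metadata."""
    key = metadata[source]["id_field"]
    for rec in records:
        entry = central.setdefault(rec[key], {})
        entry.update(rec)              # later sources overwrite duplicate fields
    return central

db = {}
fuse(db, [{"resident_id": "p1", "address": "12 Elm St"}], "housing_db")
fuse(db, [{"person_id": "p1", "age": 34},
          {"person_id": "p2", "age": 51}], "census_db")
```

In this sketch the metadata registry is what makes the fusion step source-agnostic; duplicate-detection and update-synchronization policies would hang off the same registry.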

Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data

Procedia PDF Downloads 370
3990 Food Package Design to Preserve the Food Temperature

Authors: Sugiono, Wuwus Ardiatna, Himma Firdaus, Nanang Kusnandar, Bayu Utomo, Jimmy Abdel Kadar

Abstract:

This study aimed to identify the best design for single-use hot food packaging by evaluating various package designs. It examined how well the designed packages keep local hot food warm for longer than standard packages. The packages consist of an outer and an inner layer of food-grade materials and were evaluated on how long they keep hot food above the minimum safe food temperature. The study revealed a significant finding: a transparent plastic box lined with thin-film aluminum foil is the best package.
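The comparison implied above, namely how long each package keeps food above the minimum safe temperature, can be sketched with Newton's law of cooling. The cooling constants and temperatures below are assumed values for illustration, not measurements from the study.

```python
import math

def time_above_safe(t0, t_amb, t_safe, k):
    """Minutes until food cools from t0 to t_safe in ambient t_amb.

    Newton's law of cooling: T(t) = t_amb + (t0 - t_amb) * exp(-k t),
    solved for the time at which T(t) = t_safe.
    """
    return math.log((t0 - t_amb) / (t_safe - t_amb)) / k

# assumed cooling constants: a better-insulated (foil-lined) package cools slower
standard  = time_above_safe(80.0, 25.0, 60.0, k=0.05)
insulated = time_above_safe(80.0, 25.0, 60.0, k=0.02)
```

Under these assumed constants the foil-lined package holds the safe temperature several times longer, which is the effect the study measures experimentally.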

Keywords: hot food, local food, single-use, packaging, aluminum foil

Procedia PDF Downloads 133
3989 Analysis of Bed Load Sediment Transport in the Mataram-Babarsari Irrigation Canal

Authors: Agatha Padma Laksitaningtyas, Sumiyati Gunawan

Abstract:

The Mataram Irrigation Canal, 31.2 km long, is the main irrigation canal in the Special Region of Yogyakarta Province, connecting the Progo River on the west side and the Opak River on the east side. It plays an important role as the main water distribution carrier for purposes such as agriculture, fishery, and plantations, and should therefore be free of sediment material. Bed load is the basal sediment that drives sedimentation in the irrigation canal. Sedimentation can deposit material on the canal bed and change the water surface elevation, affecting the availability of water for irrigation. Two methods are used to predict the amount of bed load sediment in the canal: the Meyer-Peter and Müller method, an energy approach, and the Einstein method, a probabilistic approach. Flow velocity was measured using the float method and current meters, the channel geometry was measured directly in the field, and the bed sediment was sampled at three different points. The results show that the Meyer-Peter and Müller formula yields 60.75799 kg/s, whereas the Einstein method yields 13.06461 kg/s. These results may serve as a reference for dredging the sediments in the channel so as not to disrupt the flow of water in the irrigation canal.
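The Meyer-Peter and Müller energy approach mentioned above can be sketched in its common dimensionless (Shields-parameter) form. The grain size, densities, and the 0.047 critical Shields value below are textbook defaults, not the canal's measured inputs.

```python
import math

def mpm_bedload(tau, rho_s=2650.0, rho=1000.0, d=0.001, g=9.81, theta_cr=0.047):
    """Volumetric bed-load transport rate per unit width (m^2/s), MPM form.

    tau      : bed shear stress (Pa)
    d        : median grain size (m) -- assumed 1 mm here
    theta_cr : critical Shields parameter (MPM's classic 0.047)
    """
    s = rho_s / rho                              # relative sediment density
    theta = tau / ((rho_s - rho) * g * d)        # Shields parameter
    if theta <= theta_cr:                        # below threshold: no motion
        return 0.0
    return 8.0 * math.sqrt((s - 1.0) * g * d**3) * (theta - theta_cr) ** 1.5
```

Multiplying the volumetric rate by sediment density and channel width would give a mass rate comparable to the kg/s figures reported above.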

Keywords: bed load, sediment, irrigation, Mataram canal

Procedia PDF Downloads 211
3988 Parametric Modeling for Survival Data with Competing Risks Using the Generalized Gompertz Distribution

Authors: Noora Al-Shanfari, M. Mazharul Islam

Abstract:

The cumulative incidence function (CIF) is a fundamental approach for analyzing survival data in the presence of competing risks, as it estimates the marginal probability of each competing event. Parametric modeling of the CIF has the advantage of fitting various shapes of the CIF and estimating the impact of covariates with maximum efficiency. To quantify covariate influence on the total CIF using a parametric model, it is essential to parametrize the baseline of the CIF. As the CIF is an improper function by nature, an improper distribution must be used in parametric modeling. The Gompertz distribution, which is an improper distribution, is limited in its applicability as it only accounts for monotone hazard shapes. The generalized Gompertz distribution, however, can accommodate a wider range of hazard shapes, including unimodal, bathtub, and monotonically increasing or decreasing hazards. In this paper, the generalized Gompertz distribution is used to parametrize the baseline of the CIF, and the parameters of the proposed model are estimated using the maximum likelihood approach. The proposed model is compared with the existing Gompertz model using the Akaike information criterion, and appropriate statistical test procedures and model-fitting criteria are used to test the adequacy of the model. Both models are applied to the ‘colon’ dataset, which is available in the “biostat3” package in R.
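One way to see why an improper distribution fits this role: with a negative shape parameter the generalized Gompertz CDF plateaus below 1 as t grows, exactly the behavior a CIF baseline needs. The parametrization below (exponentiated Gompertz with rate lam, shape gamma, power alpha) is one common form, written here for illustration; the paper's exact parametrization may differ.

```python
import math

def gen_gompertz_cdf(t, lam, gamma, alpha):
    """Generalized (exponentiated) Gompertz CDF.

    F(t) = [1 - exp(-(lam/gamma) * (exp(gamma*t) - 1))]**alpha.
    For gamma < 0, F(t) -> (1 - exp(lam/gamma))**alpha < 1 as t -> inf,
    i.e. the distribution is improper and can serve as a baseline CIF.
    """
    return (1.0 - math.exp(-(lam / gamma) * math.expm1(gamma * t))) ** alpha
```

With lam = 0.2, gamma = -0.5, alpha = 1, the curve rises from 0 and plateaus at 1 - exp(-0.4) ≈ 0.33, a valid marginal probability for one competing event.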

Keywords: competing risks, cumulative incidence function, improper distribution, parametric modeling, survival analysis

Procedia PDF Downloads 74
3987 Predicting Root Cause of a Fire Incident through Transient Simulation

Authors: Mira Ezora Zainal Abidin, Siti Fauzuna Othman, Zalina Harun, M. Hafiz M. Pikri

Abstract:

In a fire incident involving a nitrogen storage tank that over-pressured and exploded, resulting in a fire in one of the units of a refinery, a lack of data and evidence hampered the investigation into the root cause. Instrumentation and fittings were destroyed in the fire. To make matters worse, the incident occurred during the COVID-19 pandemic, which delayed the collection and testing of evidence. In addition, the storage tank belonged to a third-party company, so a legal agreement was required before the refinery could obtain approval to test the remains. Despite all this, the investigation had to be carried out, with stakeholders demanding answers. The investigation team had to devise alternative means to support what little evidence there was for the most probable root cause. International standards, practices, and previous incidents involving similar tanks were consulted. To narrow eight possible causes down to a single root cause, transient simulations of the overpressure scenarios were conducted to confirm or eliminate each candidate, leaving one root cause. This paper shares the methodology used and details how transient simulations were applied to solve the case. The experience and lessons learned from the investigation, and from numerous case studies using transient analysis to find the root causes of accidents, led to the formulation of future mitigations and design modifications aimed at preventing such incidents or at least minimizing their consequences.

Keywords: fire, transient, simulation, relief

Procedia PDF Downloads 80
3986 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

Recent earthquake-induced tsunamis, in Padang in 2004 and Tohoku in 2011, brought huge losses of lives and property. Maintaining vertical evacuation systems is the most crucial strategy for effectively reducing casualties during a tsunami event. It is therefore of great interest to quantify the risk to structural dynamic systems posed by earthquake-induced tsunamis. Despite continuous advances in computational tsunami simulation and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds while the structure is subjected to dynamic wave impact from an earthquake-induced tsunami. In this paper, an approach is proposed based on a novel integration of the Subset Simulation algorithm with a recently proposed moving least squares response surface approach to stochastic sampling. The effectiveness of the proposed approach is assessed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.
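The Subset Simulation algorithm the abstract builds on estimates a rare failure probability as a product of larger conditional probabilities across intermediate levels. The sketch below is a minimal version for a toy scalar limit state: the population size, level probability p0 = 0.1, and the one-step modified Metropolis move are simplifying assumptions, not the paper's tsunami-structure model.

```python
import math
import random

def subset_simulation(g, dim, n=500, p0=0.1, threshold=0.0, seed=1):
    """Estimate P(g(X) >= threshold) for X ~ standard normal in `dim` dims."""
    rng = random.Random(seed)
    samples = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(20):                             # cap the number of levels
        vals = sorted(((g(x), i) for i, x in enumerate(samples)), reverse=True)
        n_seed = int(p0 * n)
        level = 0.5 * (vals[n_seed - 1][0] + vals[n_seed][0])
        if level >= threshold:                      # final level reached
            n_fail = sum(1 for v, _ in vals if v >= threshold)
            return prob * n_fail / n
        prob *= p0
        seeds = [samples[i] for _, i in vals[:n_seed]]
        # one-step modified Metropolis: repopulate from the seeds, keeping
        # only moves that stay above the intermediate level
        samples = []
        for k in range(n):
            x = list(seeds[k % n_seed])
            cand = [xi + rng.gauss(0, 1) for xi in x]
            for j in range(dim):                    # component-wise accept
                r = math.exp(-0.5 * (cand[j] ** 2 - x[j] ** 2))
                if rng.random() < min(1.0, r):
                    x[j] = cand[j]
            samples.append(x if g(x) >= level else list(seeds[k % n_seed]))
    return prob

p = subset_simulation(lambda x: x[0], dim=2, threshold=3.0)
```

The response surface idea in the paper replaces expensive g(x) evaluations inside this loop with a cheap moving least squares surrogate.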

Keywords: response surface model, subset simulation, structural reliability, tsunami risk

Procedia PDF Downloads 363
3985 Facile Synthesis of Metal Nanoparticles on Graphene via Galvanic Displacement Reaction for Sensing Application

Authors: Juree Hong, Sanggeun Lee, Jungmok Seo, Taeyoon Lee

Abstract:

We report a facile synthesis of metal nanoparticles (NPs) on a graphene layer via a galvanic displacement reaction between graphene-buffered copper (Cu) and metal-ion-containing salts. Diverse metal NPs can be formed on the graphene surface, and their morphologies can be tailored by controlling the concentration of the metal-ion-containing salt and the immersion time. The obtained metal NP-decorated single-layer graphene (SLG) was used as a hydrogen gas (H2) sensing material and exhibited a highly sensitive response upon exposure to 2% H2.

Keywords: metal nanoparticle, galvanic displacement reaction, graphene, hydrogen sensor

Procedia PDF Downloads 410
3984 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients

Authors: Leander Van Neste, Kirk Wojno

Abstract:

The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening ‘external’ cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol, followed by paired-end sequencing on Illumina’s NovaSeq. Control samples were processed similarly: one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young, healthy men, and three men previously diagnosed with prostate cancer who had recently undergone (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics’ CellRanger software, and viable cells were subsequently identified using CellBender, which removes technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and control subjects, rather than to explore differences between individual subjects, the individual Seurat objects of all 21 subjects were merged into one Seurat object, per Seurat’s recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data, consisting of bulk RNA-seq data of sorted human immune cells, was selected as the reference dataset.
The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger number of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to post-RP, benign, and young men, respectively (all p<.001). In addition, with 7.91%, the number of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are hardest to classify due to their dual immune cell and foreign cell nature. Hence, the higher number of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that small numbers (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
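The group comparisons quoted above (e.g., 13.6% versus 7.1% intermediate monocytes, p < .001) are the kind of result a standard two-proportion z-test produces. The sketch below uses assumed cell counts chosen to match those percentages, not the study's actual numbers; the original analysis was done in R.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two proportions x1/n1 and x2/n2 (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# assumed counts reproducing 13.6% vs 7.1% in two groups of 10,000 cells
z = two_proportion_z(1360, 10000, 710, 10000)
```

A two-sided p < .001 corresponds to |z| > 3.29, which these assumed counts easily exceed.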

Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response

Procedia PDF Downloads 151
3983 Thyroid Stimulating Hormone Is a Biomarker for Stress: A Prospective Longitudinal Study

Authors: Jeonghun Lee

Abstract:

Thyroid-stimulating hormone (TSH) is regulated by the negative feedback of T3 and T4 but is affected by cortisol and cytokines during allostasis. Hence, TSH levels can be influenced by stress through cortisol. In the present study, changes in TSH levels under stress and the potential of TSH as a stress marker were examined in patients lacking T3 or T4 feedback after thyroid surgery. Three stress questionnaires (the Korean version of the Daily Stress Inventory, the Social Readjustment Rating Scale, and the Stress Overload Scale-Short [SOSS]), an open-ended question (OQ), and thyroid function tests were administered twice to 106 patients enrolled from January 2019 to October 2020. Statistical analysis was performed using a generalized linear mixed effect model (GLMM) in R software version 4.1.0. In a multivariable mixed model involving the 106 patients, T3, T4, SOSS (category), the open-ended question, the extent of thyroidectomy, and preoperative TSH were significantly correlated with lnTSH. For each unit increase in T3 and T4, lnTSH decreased by 0.03 and 3.52, respectively. When a stressful event was reported on the OQ, lnTSH increased by 1.55. In the high-risk group, lnTSH was higher by 0.79 compared with the low-risk group (p<0.05). TSH had a significant relationship with stress, together with T3, T4, and the extent of thyroidectomy. As such, it has the potential to be used as a stress marker, though it showed a low correlation with the other stress questionnaires. To address this limitation, questionnaires covering various social environments and research on coping strategies are necessary in future studies.
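The fixed-effect estimates quoted above can be read as a linear predictor for lnTSH. The sketch below combines them additively with a zero intercept; both the additive form and the intercept are assumptions for illustration, not the fitted GLMM (which also carries random effects and other covariates).

```python
def ln_tsh(t3, t4, stress_event, high_risk, intercept=0.0):
    """Illustrative linear predictor built from the quoted fixed effects.

    t3, t4       : hormone levels (unit increases)
    stress_event : 1 if a stressful event was reported on the OQ, else 0
    high_risk    : 1 for the high-risk SOSS group, else 0
    intercept    : hypothetical; the fitted value is not reported above
    """
    return (intercept - 0.03 * t3 - 3.52 * t4
            + 1.55 * stress_event + 0.79 * high_risk)
```

For example, under these assumptions a reported stressful event alone shifts lnTSH by +1.55, several times the per-unit T3 effect.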

Keywords: stress, surgery, thyroid stimulating hormone, thyroidectomy

Procedia PDF Downloads 77
3982 Breast Cancer Sensing and Imaging Utilizing a Printed Ultra-Wide Band Spherical Sensor Array

Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah

Abstract:

A high-precision printed microwave sensor for sensing and monitoring potential breast cancer in women's breast tissue was optimally designed by numerical computation. The single UWB printed sensor element, successfully modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a woman's bra to form a spherical sensor array. One sample of the UWB microwave sensor obtained through the numerical computation and optimization was chosen for fabrication. In total, the spherical sensor array consists of twelve stair patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss parameter. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor is verified using HFSS simulation, CST simulation, and experimental measurement. Numerically, both HFSS and CST confirm that the potential operating bandwidth of the UWB sensor is approximately 4.5 GHz; however, the measured bandwidth is about 1.2 GHz due to technical difficulties that arose during manufacturing. The implemented UWB microwave sensing and monitoring system consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) serving as the transceiver and signal processing part, and a desktop or laptop PC acting as the image processing and display unit. In practice, all the reflected power readings collected from the whole surface of the artificial breast model are grouped into a number of pixel color classes positioned at the corresponding row and column (pixel number). The total number of power pixels applied in the 2D imaging process was set to 100 (a 10x10 power distribution grid), determined by considering the total area of a breast phantom of average Asian women's breast size and the physical dimensions of a single UWB sensor.
The microwave imaging results are plotted and, together with some technical problems that arose in developing the breast sensing and monitoring system, are examined in the paper.
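The 10x10 power-pixel mapping described above can be sketched as a simple binning step: each reflected-power reading is assigned a color class by its position within the observed power range. The grid size, number of classes, and readings below are illustrative assumptions.

```python
def power_image(readings, rows=10, cols=10, n_classes=5):
    """Bin a flat list of rows*cols reflected-power values into color classes."""
    lo, hi = min(readings), max(readings)
    span = (hi - lo) or 1.0                      # guard against a flat image
    def classify(p):
        # map power linearly onto 0..n_classes-1
        return min(int((p - lo) / span * n_classes), n_classes - 1)
    return [[classify(readings[r * cols + c]) for c in range(cols)]
            for r in range(rows)]

img = power_image(list(range(100)))              # synthetic power ramp
```

In the real system the 100 readings would come from the VNA sweep over the 12-element array; here a synthetic ramp stands in for them.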

Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging

Procedia PDF Downloads 179
3981 Design and Implementation of Grid-Connected Photovoltaic Inverter

Authors: B. H. Lee

Abstract:

Nowadays, grid-connected photovoltaic (PV) inverters are adopted in various places, such as homes and factories, because a grid-connected PV inverter can reduce total power consumption by supplying electricity from a PV array. In this paper, the design and implementation of a 300 W grid-connected PV inverter are described. It is implemented with a TI Piccolo DSP core and operated at a 100 kHz switching frequency in order to reduce harmonic content. The maximum operating input voltage is up to 45 V. The characteristics of the designed system, which include maximum power point tracking (MPPT), standalone operation, and battery charging, are verified by simulation and experimental results.
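MPPT in such an inverter is commonly implemented with a perturb-and-observe loop, sketched below against a toy PV power curve; the paper does not state which MPPT method it uses, and the 30 V maximum-power point and parabolic model here are assumptions, not the 300 W hardware's characteristics.

```python
def p_and_o_step(v, p, v_prev, p_prev, dv=0.1):
    """Perturb-and-observe: return the next PV operating voltage."""
    if p >= p_prev:
        step = dv if v >= v_prev else -dv      # power rose: keep direction
    else:
        step = -dv if v >= v_prev else dv      # power fell: reverse direction
    return v + step

def pv_power(v, v_mpp=30.0, p_max=300.0):
    """Toy PV curve: a parabola peaking at p_max when v == v_mpp."""
    return max(0.0, p_max - (v - v_mpp) ** 2)

v_prev, v = 20.0, 20.1
for _ in range(500):
    p_prev, p = pv_power(v_prev), pv_power(v)
    v_prev, v = v, p_and_o_step(v, p, v_prev, p_prev)
```

The loop climbs the power curve and then oscillates around the maximum-power point with amplitude set by the perturbation step dv, the classic trade-off of perturb-and-observe.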

Keywords: design, grid-connected, implementation, photovoltaic

Procedia PDF Downloads 404
3980 Landslide Study Using Unmanned Aerial Vehicle and Resistivity Survey at Bukit Kukus, Penang Island, Malaysia

Authors: Kamal Bahrin Jaafar

Abstract:

The study area is located at Bukit Kukus, Penang, where the construction of a twin road project is ongoing. A landslide occurred there on 19th October 2018, causing fatalities. The purpose of this study is to determine the causes of the failure, the estimated volume of the failure, and the remaining balance of material. The study comprises unmanned aerial vehicle (UAV) sensing and a resistivity survey. The resistivity method involved laying out three 200 m resistivity survey lines, with the depth of penetration into the subsurface not exceeding 35 m. The UAV results show the current condition of the site. Based on the resistivity results, the dominant layer in the study area consists of residual soil/filling material with a thickness of more than 35 m. Three selected cross sections from the construction drawings are overlain on the current cross sections to better understand the subsurface profile. The comparison shows a difference between the past and present topography. Combining the previous data with the current conditions, the calculated volume of the failure is 85,000 m³, and the remaining balance is 50,000 m³. In conclusion, the failure occurred because the contractor carried out the construction works without following the construction drawings supplied by the consultant. In addition, the failure was triggered by geological conditions, such as a fault, that should have been considered prior to the commencement of work.
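Failure volumes like the 85,000 m³ quoted above are typically obtained by differencing pre- and post-event surface models from the UAV survey. The sketch below shows that differencing on toy elevation grids; the grids and cell size are assumed, not the site's data.

```python
def failure_volume(dem_before, dem_after, cell_area):
    """Sum elevation loss over all grid cells, times cell area (m^3).

    dem_before, dem_after : 2D lists of elevations (m) on the same grid
    cell_area             : horizontal area of one grid cell (m^2)
    """
    return sum((b - a) * cell_area
               for row_b, row_a in zip(dem_before, dem_after)
               for b, a in zip(row_b, row_a)
               if b > a)                     # count only cells that lost material

vol = failure_volume([[10.0, 10.0], [10.0, 10.0]],
                     [[ 9.0, 10.0], [ 8.0, 10.0]], cell_area=2.0)
```

Summing only cells that gained material would, symmetrically, give the deposited balance.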

Keywords: UAV, landslide, resistivity survey, cause of failure

Procedia PDF Downloads 102
3979 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high additive white Gaussian noise (AWGN) environments using principles of computational auditory scene analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit transforms time-series speech signals into the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, together with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT dataset, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB signal-to-noise ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT dataset. The baseline accuracy is attributable to the short training and test sequences as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
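The matching pursuit step described above can be sketched generically: at each iteration the atom with the largest inner product against the residual is selected, its weight recorded, and its contribution subtracted. The toy orthonormal dictionary below stands in for the learned Gabor/autoencoder atoms, which it does not attempt to reproduce.

```python
def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy sparse decomposition of `signal` over unit-norm `atoms`.

    Returns (decomposition, residual), where decomposition is a list of
    (atom index, weight) pairs -- the sparse "weight space" vector.
    """
    residual = list(signal)
    decomposition = []
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        best = max(range(len(atoms)),
                   key=lambda i: abs(sum(a * r
                                         for a, r in zip(atoms[i], residual))))
        w = sum(a * r for a, r in zip(atoms[best], residual))
        decomposition.append((best, w))
        residual = [r - w * a for r, a in zip(residual, atoms[best])]
    return decomposition, residual

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
dec, res = matching_pursuit([3.0, 0.0, 4.0], atoms, n_iter=2)
```

The indices in `dec` are what the classifier above turns into atomic index probabilities; the weights give the denoised, compressed reconstruction.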

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 280
3978 Farmers' Perception of Pesticide Usage in Curry Leaf (Murraya koenigii (L.))

Authors: Swarupa Shashi Senivarapu Vemuri

Abstract:

Curry leaf (Murraya koenigii (L.)) exported from India has contained insecticide residues above maximum residue limits, which are hazardous to consumer health and have caused rejection of the commodity at points of entry in Europe and the Middle East, resulting in a check on curry leaf exports. Hence, to study current pesticide usage patterns in major curry leaf growing areas, a survey of pesticide use was carried out in the curry leaf growing areas of Guntur district, Andhra Pradesh, during 2014-15 by interviewing curry leaf farmers with a questionnaire to assess their knowledge and practices in crop cultivation and their general awareness of pesticide recommendations and use. Education levels among the farmers were low: 13.96 per cent had only a high school education, and 13.96 per cent were illiterate. 18.60 per cent of the farmers cultivated curry leaf on less than 1 acre of land, 32.56 per cent on 2-5 acres, 20.93 per cent on 5-10 acres, and 27.91 per cent on more than 10 acres. The majority of the curry leaf farmers (93.03 per cent) used pesticide mixtures rather than applying a single pesticide at a time, basically to save time, labour, and money and to combat two or more pests with a single spray. About 53.48 per cent of the farmers applied pesticides at 2-day intervals, followed by 34.89 per cent at 4-day intervals, and about 11.63 per cent sprayed at weekly intervals. Only 27.91 per cent of the farmers thought that the quantity of pesticides used on their farm was adequate, while 90.69 per cent perceived pesticides as helpful in getting good returns. 83.72 per cent felt that changing the crop was the only way to control the sucking pests that damage the whole crop. About 4.65 per cent of the curry leaf farmers opined that integrated pest management practices are an alternative to pesticides, and only 11.63 per cent regarded natural control as an alternative. About 65.12 per cent of the farmers believed that a high pesticide dose would give higher yields.
In general, however, curry leaf farmers preferred to contact pesticide dealers (100 per cent) and were not interested in contacting an agricultural officer or a scientist. Farmers were aware of the endosulfan ban (93.04 per cent); in contrast, only 65.12 per cent knew about the ban on monocrotophos for vegetables. Very few farmers knew about pesticide residues and decontamination by washing. Extension education interventions are necessary to produce fresh curry leaf free from pesticide residues.

Keywords: Curry leaf, decontamination, endosulfan, leaf roller, psyllids, tetranychid mite

Procedia PDF Downloads 316