Search results for: incomplete data
25215 Improving Collective Health and Social Care through a Better Consideration of Sex and Gender: Analytical Report by the French National Authority for Health
Authors: Thomas Suarez, Anne-Sophie Grenouilleau, Erwan Autin, Alexandre Biosse-Duplan, Emmanuelle Blondet, Laurence Chazalette, Marie Coniel, Agnes Dessaigne, Sylvie Lascols, Andrea Lasserre, Candice Legris, Pierre Liot, Aline Metais, Karine Petitprez, Christophe Varlet, Christian Saout
Abstract:
Background: The role of biological sex and gender identity (whether assigned or chosen) as health determinants is far from a recent discovery: several reports have stressed how being a woman or a man can affect health on various scales. However, taking these determinants into consideration beyond stereotypes and rigid binary assumptions still seems to be a work in progress. Method: The report is a synthesis of a variety of specific topics, each studied by a specialist from the French National Authority for Health (HAS) through an analysis of the existing literature on both the healthcare policy construction process and its instruments (norms, data analysis, clinical trials, guidelines, and professional practices). This work also involved a policy analysis of recent French public health laws and a retrospective study of guidelines using a gender mainstreaming approach. Results: The analysis showed that although sex and gender are well-known determinants of health, their consideration by both public policy and health operators is often incomplete, as it does not incorporate how sex and gender interact with each other or with other factors. As a result, the health and social care systems and their professionals tend to reproduce stereotypical and inadequate habits. Although the available data often allow sex and gender to be taken into consideration, such data are often underused in practice guidelines and policy formulation. Another consequence is a lack of inclusiveness towards transgender and intersex persons. Conclusions: This report first urges raising awareness among all actors of health, in its broadest definition, that sex and gender matter beyond first-look conclusions. It makes a series of recommendations to reshape policy construction in the health sector on the one hand, and to design public health instruments that are more inclusive with regard to sex and gender on the other.
The HAS finally committed to integrating sex and gender considerations into its working methods, so as to be a driving force in the spread of these concerns.
Keywords: biological sex, determinants of health, gender, healthcare policy instruments, social accompaniment
Procedia PDF Downloads 128
25214 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from Ferroalloys Plant
Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen
Abstract:
Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting only of carbon and hydrogen arranged in fused aromatic rings. PAH are neutral, non-polar molecules produced by the incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the outer and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to considerably increase their efforts to reduce emissions. One of the objectives of the ongoing industry and Norwegian Research Council supported "SCORE" project is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH itself. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data for the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain, and there are only a limited number of studies. The paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, the oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor (PSR) modelling approach, the oxidation is modelled with advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water.
A Chemical Reactor Network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modelling has been found to be a valuable tool in evaluating the oxidation behaviour of PAH under various conditions.
Keywords: PAH, PSR, energy recovery, ferroalloy furnace
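The abstract does not give its kinetic scheme, but the combined role of residence time and temperature in a perfectly stirred reactor can be sketched with a first-order global rate. The Arrhenius parameters below are purely hypothetical stand-ins, not values from the study:

```python
import math

def arrhenius(A, Ea, T):
    """First-order rate constant k = A * exp(-Ea / (R*T)); R in J/(mol*K)."""
    R = 8.314
    return A * math.exp(-Ea / (R * T))

def psr_conversion(k, tau):
    """Steady-state conversion of a first-order reaction in a perfectly
    stirred reactor (CSTR): X = k*tau / (1 + k*tau)."""
    return k * tau / (1.0 + k * tau)

# Hypothetical global rate parameters for PAH oxidation (illustration only)
A, Ea = 1.0e10, 2.0e5                     # 1/s, J/mol
for T in (1100.0, 1200.0, 1300.0):        # temperature sweep, K
    for tau in (0.5, 1.0, 2.0):           # residence time sweep, s
        X = psr_conversion(arrhenius(A, Ea, T), tau)
        print(f"T={T:.0f} K, tau={tau:.1f} s -> conversion {X:.3f}")
```

Even this toy model shows the design trade-off the paper optimizes: conversion rises with both temperature and residence time.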
Procedia PDF Downloads 273
25213 Characterization of Polycyclic Aromatic Hydrocarbons in Ambient Air PM2.5 in an Urban Site of Győr, Hungary
Authors: A. Szabó Nagy, J. Szabó, Zs. Csanádi, J. Erdős
Abstract:
In Hungary, the measurement of ambient PM10-bound polycyclic aromatic hydrocarbon (PAH) concentrations is of great importance for a number of reasons related to human health, the environment, and compliance with European Union legislation. However, the monitoring of PAHs associated with the PM2.5 aerosol fraction is still incomplete. Therefore, the main aim of this study was to investigate the concentration levels of PAHs in the PM2.5 urban aerosol fraction. PM2.5 and associated PAHs were monitored in November 2014 at an urban site of Győr (northwest Hungary). The aerosol samples were collected every day for 24 hours over two weeks with a high-volume air sampler fitted with a PM2.5 cut-off inlet. The levels of 19 PAH compounds associated with the PM2.5 aerosol fraction were quantified by a gas chromatographic method. Polluted air quality for PM2.5 (>25 µg/m3) was indicated in 50% of the collected samples. The total PAH concentrations ranged from 2.1 to 37.3 ng/m3, with a mean value of 12.4 ng/m3. Indeno(1,2,3-cd)pyrene (IND) and the sum of three benzofluoranthene isomers were the most dominant PAH species, followed by benzo(ghi)perylene and benzo(a)pyrene (BaP). Applying the BaP-equivalent approach to the concentration data of carcinogenic PAH species, BaP and IND contributed the highest carcinogenic exposure equivalents (1.50 and 0.24 ng/m3 on average). A selected number of concentration ratios of specific PAH compounds were calculated to evaluate the possible sources of PAH contamination. The ratios indicated that the major source of PAH compounds in the PM2.5 aerosol fraction of Győr during the study period was fossil fuel combustion from automobiles.
Keywords: air, PM2.5, benzo(a)pyrene, polycyclic aromatic hydrocarbon
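The BaP-equivalent approach mentioned above weights each carcinogenic PAH concentration by a toxic equivalency factor (TEF). A minimal sketch, assuming the widely cited Nisbet and LaGoy (1992) TEF values and made-up concentrations rather than the study's own data:

```python
# Toxic equivalency factors (TEFs) after Nisbet & LaGoy (1992); the values
# and sample concentrations here are illustrative assumptions.
TEF = {
    "BaP": 1.0,      # benzo(a)pyrene, the reference compound
    "IND": 0.1,      # indeno(1,2,3-cd)pyrene
    "BbF": 0.1,      # benzo(b)fluoranthene
    "BghiP": 0.01,   # benzo(ghi)perylene
}

def bap_equivalent(conc_ng_m3):
    """Sum of concentration * TEF over the measured carcinogenic PAHs."""
    return sum(c * TEF[name] for name, c in conc_ng_m3.items())

# Hypothetical daily mean concentrations in ng/m3
sample = {"BaP": 1.5, "IND": 2.4, "BbF": 1.0, "BghiP": 2.0}
print(f"BaP-equivalent exposure: {bap_equivalent(sample):.2f} ng/m3")
```

The weighting explains why BaP dominates the exposure equivalent even when other species are present at higher raw concentrations.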
Procedia PDF Downloads 281
25212 A Study of Barriers and Challenges Associated with Agriculture E-commerce in Afghanistan
Authors: Khwaja Bahman Qaderi, Noorullah Rafiqee
Abstract:
Background: With today's increasing number of Internet users, e-commerce has become a viable model for strengthening relationships between sellers, entrepreneurs, and consumers thanks to its speed, efficiency, and cost reduction. Agriculture is the economic backbone for 80 percent of the Afghan population. According to MCIT statistics, there are currently around 10 million Internet users in Afghanistan. Given these figures, Afghan agricultural businesses could be expected to make use of e-commerce, yet it appears to be little used. Objective: This study examines the scope of e-commerce in Afghanistan's agriculture enterprises, how they harness the potential of Internet users, and what obstacles they face in implementing e-commerce in their businesses. Method: The study distributed a 39-question questionnaire to agribusinesses in five different zones of Afghanistan. After extracting the responses and excluding the incomplete questionnaires, 280 were included in the analysis step, in which a non-parametric sign test was performed. Result: E-commerce in Afghanistan faces four major kinds of obstacles: political, economic, Internet-related, and technological; no company in the country has fully implemented e-commerce. E-commerce is still in its infancy among agricultural companies: Internet use is primarily limited to email and sharing product images on Facebook and Instagram for advertising purposes, and no companies conduct international transactions via the Internet. Conclusion: This study contributes to identifying the challenges and barriers that agriculture e-commerce faces in Afghanistan, in order to find effective solutions that use the capacity of the country's Internet users and increase the sales of agricultural products through the Internet.
Keywords: e-commerce, barriers and challenges, agriculture companies, Afghanistan
Procedia PDF Downloads 89
25211 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second
Authors: P. V. Pramila , V. Mahesh
Abstract:
Pulmonary function tests are important non-invasive diagnostic tests that assess respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of patients, resulting in incomplete data sets. This paper presents a nonlinear model built using multivariate adaptive regression splines (MARS) and a random forest regression model to predict the missing spirometric features. Random forest based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data were recorded for N = 198 subjects. The ranked order of the feature importance index calculated by the random forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50, and the demographic parameter height are the important descriptors. A comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding a model fit of R2 = 0.96 and R2 = 0.99 for normal and abnormal subjects, respectively. The root mean square error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with a notably lower error of 0.0191 (normal subjects) and 0.0106 (abnormal subjects). It is concluded that combining feature selection with a prediction model provides a minimal subset of predominant features to train the model, yielding better prediction performance. This analysis can assist clinicians with an intelligent support system for medical diagnosis and the improvement of clinical care.
Keywords: FEV, multivariate adaptive regression splines, pulmonary function test, random forest
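The feature-ranking step described above can be sketched as follows. The data are synthetic (the study's spirometric records are not available), and random forest is used for both ranking and prediction since a MARS implementation is not assumed to be installed; only the workflow, not the numbers, mirrors the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 198  # same cohort size as the study; the data themselves are synthetic
FVC = rng.normal(3.5, 0.8, n)            # forced vital capacity, L
FEF25 = rng.normal(6.0, 1.5, n)          # forced expiratory flow at 25%, L/s
height = rng.normal(165.0, 10.0, n)      # cm
FEV1 = 0.8 * FVC + 0.05 * FEF25 + rng.normal(0.0, 0.05, n)  # hypothetical relation

X = np.column_stack([FVC, FEF25, height])
names = ["FVC", "FEF25", "height"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, FEV1)

# Rank features by importance, as the paper does before fitting the predictor
ranking = sorted(zip(names, model.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print("importance ranking:", ranking)
print("R^2 on training data:", round(model.score(X, FEV1), 3))
```

Training the final predictor only on the top-ranked subset is what the paper reports as improving both interpretability and prediction performance.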
Procedia PDF Downloads 310
25210 Evaluation of Role of Surgery in Management of Pediatric Germ Cell Tumors According to Risk Adapted Therapy Protocols
Authors: Ahmed Abdallatif
Abstract:
Background: Patients with malignant germ cell tumors show a bimodal age distribution, with the first peak during infancy and the second after the onset of puberty. Gonadal germ cell tumors are the most common malignant ovarian tumor in females aged below twenty years. Sacrococcygeal and retroperitoneal abdominal tumors usually present at a large size before the onset of symptoms. Methods: The study included patients with pediatric germ cell tumors presenting to the Children's Cancer Hospital Egypt and the National Cancer Institute Egypt from January 2008 to June 2011. Patients were stratified into low, intermediate, and high risk groups according to the Children's Oncology Group classification. Objectives: Assessment of the clinicopathologic features of all cases of pediatric germ cell tumors and classification of malignant cases, according to their stage and primary site, into low, intermediate, and high risk groups. Evaluation of surgical management in each group of patients, focusing on the surgical approach, the extent of surgical resection according to each site, the ability to achieve complete surgical resection, and perioperative complications. Finally, determination of the three-year overall and disease-free survival in the different groups and their relation to different prognostic factors, including the extent of surgical resection. Results: Out of 131 cases surgically explored, only 26 required re-exploration: 8 for residual disease, 9 for remote recurrence or metastatic disease, and the other 9 for other complications. Patients with low risk were kept under follow-up after surgery; of the low risk group (48 patients), only 8 patients (16.5%) shifted to intermediate risk. There were 20 patients (14.6%) diagnosed as intermediate risk who received 3 cycles of compressed chemotherapy (cisplatin, etoposide, and bleomycin), and all patients in the high risk group (69 patients, 50.4%) received chemotherapy.
The stage of disease was strongly and significantly related to overall survival, with poorer survival in late stages (stage IV) compared to earlier stages. Conclusion: The overall survival rate at 3 years was 76.7% ± 5.4, and the 3-year event-free survival (EFS) was 77.8% ± 4.0; the 3-year disease-free survival (DFS) was much better (89.8% ± 3.4) in the whole study group, with ovarian tumors having a significantly higher overall survival (90% ± 5.1). Event-free survival analysis showed that males were 3 times more likely than females to have adverse events, and patients who underwent incomplete resection were 4 times more likely than patients with complete resection to have adverse events. Disease-free survival analysis showed that patients who underwent incomplete surgery were 18.8 times more liable to recurrence than those who underwent complete surgery, and patients who required re-excision were 21 times more prone to recurrence than other patients.
Keywords: extragonadal, germ cell tumors, gonadal, pediatric
Procedia PDF Downloads 218
25209 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications a Software Testing Approach
Authors: Theertha Chandroth
Abstract:
This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore various techniques and methodologies for validating, comparing, and integrating JSON data with XML data, and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
Keywords: XML, JSON, data comparison, integration testing, Python, SQL
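One concrete technique for checking JSON against XML is to parse both into a common in-memory representation and compare the results. A minimal sketch, assuming attribute-free XML without repeated tags (the sample payloads are invented):

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Flatten a simple XML element (no attributes, no repeated tags)
    into a dict so it can be compared with parsed JSON."""
    if len(element) == 0:
        return element.text
    return {child.tag: xml_to_dict(child) for child in element}

json_doc = '{"user": {"name": "Alice", "city": "Oslo"}}'
xml_doc = "<user><name>Alice</name><city>Oslo</city></user>"

parsed_json = json.loads(json_doc)["user"]
parsed_xml = xml_to_dict(ET.fromstring(xml_doc))

# The check passes only if both formats carry the same payload
assert parsed_json == parsed_xml, "payloads diverge between JSON and XML"
print("JSON and XML payloads match")
```

A real test suite would extend the converter to handle attributes, repeated elements, and type coercion, which is where most JSON/XML mismatches arise.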
Procedia PDF Downloads 140
25208 Using Machine Learning Techniques to Extract Useful Information from Dark Data
Authors: Nigar Hussain
Abstract:
Dark data is a subset of big data: data that organizations collect but fail to use in future decisions. Existing work leaves several open issues; above all, powerful tools and sufficient techniques are needed to utilize dark data in a way that lets users exploit its strengths in terms of adaptability, speed, reduced time consumption, execution, and accessibility. Another issue is how to utilize dark data to extract helpful information for making better choices. In this paper, we propose strategies to remove the dark side from dark data. Using a supervised model and machine learning techniques, we utilized dark data and achieved an F1 score of 89.48%.
Keywords: big data, dark data, machine learning, heatmap, random forest
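The pipeline described above (a supervised model evaluated by F1 score) can be sketched on synthetic stand-in data; the features, labels, and resulting score below are illustrative, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))                    # stand-in features mined from dark data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical binary target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print("F1 score:", round(f1_score(y_te, clf.predict(X_te)), 3))
```

Evaluating on a held-out split, as here, is what makes a reported F1 score such as 89.48% meaningful rather than an artifact of overfitting.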
Procedia PDF Downloads 28
25207 Multi-Source Data Fusion for Urban Comprehensive Management
Authors: Bolin Hua
Abstract:
In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form and source because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues which need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its reconciliation, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and space data. Metadata must be available to be referred to and read when an application needs to access, manipulate, and display the data. Uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.
Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data
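A minimal sketch of the fusion step itself: merging person records from two hypothetical sector databases into one central record keyed on a shared identifier, with later sources filling gaps but not overwriting values already present (the identifiers and fields are invented for illustration):

```python
# Merge records from multiple source databases into a central database.
# Records are keyed on a shared person identifier; the first non-missing
# value for each field wins, which resolves duplicate data deterministically.
def fuse(sources):
    central = {}
    for source in sources:
        for key, fields in source.items():
            merged = central.setdefault(key, {})
            for field, value in fields.items():
                merged.setdefault(field, value)
    return central

housing = {"P001": {"name": "Li Wei", "district": "Haidian"}}
business = {"P001": {"name": "Li Wei", "employer": "Acme"},
            "P002": {"name": "Zhang Min"}}

central = fuse([housing, business])
print(central["P001"])  # fields from both sources, no duplicates
```

A production system would add the metadata layer the abstract describes: per-field provenance, timestamps for synchronization, and conflict rules beyond "first value wins".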
Procedia PDF Downloads 393
25206 Reviewing Privacy Preserving Distributed Data Mining
Authors: Sajjad Baghernezhad, Saeideh Baghernezhad
Abstract:
Nowadays, given the human role in ever-increasing data production, methods such as data mining for extracting knowledge are unavoidable. One of the discussions in data mining concerns the inherent distribution of the data: the bases creating or receiving such data usually belong to corporate or non-corporate persons who do not give their information freely to others, and there is no guarantee that someone can mine particular data without intruding on the owner's privacy. Sending data and then gathering them, for either vertically or horizontally partitioned data, depends on the type of privacy preservation applied, and is carried out so as to improve data privacy. In this study, we attempt a comprehensive comparison of privacy-preserving data mining methods; general methods such as data randomization and encoding are examined, along with the strong and weak points of each.
Keywords: data mining, distributed data mining, privacy protection, privacy preserving
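One of the general methods compared above, data randomization, can be sketched as follows: zero-mean noise masks individual values before sharing, while aggregates remain approximately correct. The data set and noise scale are illustrative:

```python
import random

def randomize(values, scale, seed=42):
    """Add zero-mean uniform noise in [-scale, scale] to each value
    before sharing; individual values are masked, aggregates survive."""
    rng = random.Random(seed)
    return [v + rng.uniform(-scale, scale) for v in values]

salaries = [48_000, 52_000, 61_000, 45_000, 58_000]  # hypothetical private data
masked = randomize(salaries, scale=5_000)

true_mean = sum(salaries) / len(salaries)
masked_mean = sum(masked) / len(masked)
print(f"true mean {true_mean:.0f}, masked mean {masked_mean:.0f}")
```

This illustrates the core trade-off the survey weighs: a larger noise scale gives stronger privacy for each record but a less accurate mined aggregate.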
Procedia PDF Downloads 525
25205 The Right to Data Portability and Its Influence on the Development of Digital Services
Authors: Roman Bieda
Abstract:
The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller in a structured, commonly used, and machine-readable format, and to transmit this data to another data controller. By facilitating the transfer of personal data between IT environments (e.g., applications), the right to data portability will also make it easier to change service providers (e.g., to change a bank or a cloud computing provider). It will therefore contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.
Keywords: data portability, digital market, GDPR, personal data
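A minimal sketch of what an Article 20 export might look like in practice, using JSON as the structured, commonly used, machine-readable format; the record fields and identifiers are hypothetical, not prescribed by the regulation:

```python
import json

def export_subject_data(record):
    """Serialize the data a subject provided to a controller into a
    machine-readable format that another controller can re-import."""
    return json.dumps(record, indent=2, ensure_ascii=False)

# Hypothetical data a subject provided to a controller
record = {
    "subject_id": "u-1001",
    "provided_data": {
        "name": "Jan Kowalski",
        "email": "jan@example.com",
        "preferences": {"newsletter": True},
    },
}
portable = export_subject_data(record)

# The receiving controller can re-import the export without loss
assert json.loads(portable) == record
print(portable)
```

The lossless round trip is the technical property that makes switching providers feasible, which is the competitive effect the paper discusses.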
Procedia PDF Downloads 473
25204 Applying Theory of Inventive Problem Solving to Develop Innovative Solutions: A Case Study
Authors: Y. H. Wang, C. C. Hsieh
Abstract:
Good service design can increase organizational revenue and consumer satisfaction while reducing labor and time costs. The problems facing consumers of the original service model in the eyewear and optical industry include the following issues: 1. insufficient information on eyewear products; 2. passive dependence on recommendations and insufficient selection; 3. incomplete records on the progression of vision conditions; 4. lack of complete customer records. This study investigates the case of Kobayashi Optical, applying the Theory of Inventive Problem Solving (TRIZ) to develop innovative solutions for the eyewear and optical industry. The analysis raises the following conclusions and management implications: in order to provide customers with improved professional information and recommendations, Kobayashi Optical should establish customer purchasing records, and overall service efficiency can be enhanced by applying data mining techniques to analyze past consumer preferences and purchase histories. Furthermore, Kobayashi Optical should continue to develop a 3D virtual trial service which allows customers to easily browse different frame styles and colors. This 3D virtual trial service will save customers waiting time during peak service hours at stores.
Keywords: theory of inventive problem solving (TRIZ), service design, augmented reality (AR), eyewear and optical industry
Procedia PDF Downloads 279
25203 Safety Conditions Analysis of Scaffolding on Construction Sites
Authors: M. Pieńko, A. Robak, E. Błazik-Borowa, J. Szer
Abstract:
This paper presents the results of an analysis of 100 full-scale scaffolding structures in terms of compliance with legal acts and safety of use. In 2016 and 2017, the authors examined scaffolds in Poland erected at buildings under construction or renovation. The basic elements affecting the safety of scaffolding use, such as anchors, supports, platforms, guardrails, and toe-boards, were taken into account, and all of these elements were checked in each of the considered scaffoldings. Based on the analyzed scaffoldings, the most common errors concerning the assembly process and use of scaffolding were collected. Legal acts on scaffolding are not always clear, and this causes many issues. In practice, people realize how dangerous the use of incomplete scaffolds is only when an accident occurs. Despite the fact that scaffolding should ensure the safety of its users, most accidents on construction sites are caused by falls from height.
Keywords: façade scaffolds, load capacity, practice, safety of people
Procedia PDF Downloads 403
25202 Phenomenology of Contemporary Cities: Abandoned Sites as Waiting Places, Bucharest, a Case Study
Authors: Luigi Pintacuda
Abstract:
What characterizes the phenomenology of Bucharest is that none of its operations of modernization has ever been completed, creating a city made up of fragments. Once this fragmented nature is understood, the traces and fractures, and the acceptance of their scars, must form the basis for the design of Bucharest's development. From this insight comes a new analysis of this city: a city of two million inhabitants that does not need a project on the urban scale (all other major projects for the city have failed) but, starting from the study of its interstitial spaces of public property, must find its own strategy, a large-scale strategy that reflects on the sites at the architectural scale. It is a city composed of fragments, which are not waste but places for the project: 'waiting spaces' for a possible continuation of the process of genesis of a city which is often incomplete.
Keywords: public spaces, traces fractures, urban design, urban development
Procedia PDF Downloads 251
25201 Recent Advances in Data Warehouse
Authors: Fahad Hanash Alzahrani
Abstract:
This paper describes some recent advances in a quickly developing area of data storage and processing based on data warehouses and data mining techniques, covering the software, hardware, data mining algorithms, and visualisation techniques whose common features apply across the specific problems and tasks of their implementation.
Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing
Procedia PDF Downloads 404
25200 How to Use Big Data in Logistics Issues
Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy
Abstract:
Big data stands for today's cutting-edge technology. As the technology becomes widespread, so does the data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of big data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what big data is and how it is used in both military and commercial logistics.
Keywords: big data, logistics, operational efficiency, risk management
Procedia PDF Downloads 641
25199 Interface Problems in Construction Projects
Authors: Puti F. Marzuki, Adrianto Oktavianus, Almerinda Regina
Abstract:
Interface problems among interacting parties in Indonesian construction projects have most often led to low productivity and completion delays. Given the country's need to accelerate the construction of public infrastructure that provides connectivity among regions and supports economic growth and better quality of life, project delays have to be seriously addressed. This paper identifies potential cause factors of interface problems experienced by construction projects in Indonesia. Data were collected through a survey involving the main actors of six important public infrastructure construction projects, including railway, LRT, sports stadium, apartment, and education building construction projects. Five of these projects adopt the design-build project delivery method and one applies the design-bid-build scheme. Potential causes of interface problems are categorized into contract, management, technical experience, coordination, financial, and environmental factors. The results reveal that, especially in the railway and LRT projects, potential causes of interface problems are mainly technical and managerial in nature, relating to complex construction execution in highly congested areas. Meanwhile, coordination cause factors are mainly found in the education building construction project funded by a loan from a foreign donor. All six projects have to resolve interface problems caused by incomplete or low-quality contract documents. This research also shows that the design-bid-build delivery method, which involves more parties in construction projects, tends to induce more interface problem cause factors than the design-build scheme.
Keywords: cause factors, construction delays, project delivery method, contract documents
Procedia PDF Downloads 255
25198 Beyond Information Failure and Misleading Beliefs in Conditional Cash Transfer Programs: A Qualitative Account of Structural Barriers Explaining Why the Poor Do Not Invest in Human Capital in Northern Mexico
Authors: Francisco Fernandez de Castro
Abstract:
The conditional cash transfer (CCT) model gives monetary transfers to beneficiary families on the condition that they take specific education and health actions. According to the economic rationale of CCTs, the poor need incentives to invest in their human capital because they are trapped by a lack of information and misleading beliefs; left to their own devices, they will not choose what is in their best interests. The basic assumption of the CCT model is thus that the poor need incentives to take care of their own education and health-nutrition. Because of the incentives (income cash transfers and conditionalities), beneficiary families are expected to attend doctor visits and health talks, and children are expected to stay in school. These incentivized behaviors are supposed to produce outcomes such as better health and a higher level of education, which in turn will reduce poverty. Based on a grounded theory approach applied during a two-year period of qualitative data collection in northern Mexico, this study shows that this explanation is incomplete. In addition to information failure and inadequate beliefs, there are structural barriers in the everyday life of households that make health-nutrition and education investments difficult. In-depth interviews and observation work showed that the program takes for granted the local conditions in which beneficiary families must fulfill their co-responsibilities. The data challenge the program's assumptions and unveil local obstacles not contemplated in the program's design. These findings have policy and research implications for the CCT agenda. They provide elements for revising the programming, given the gap between the CCT strategy as envisioned by policy designers and the program that beneficiary families experience on the ground.
As for research consequences, these findings suggest new avenues for scholarly work on the causal mechanisms and social processes explaining CCT outcomes.
Keywords: conditional cash transfers, incentives, poverty, structural barriers
Procedia PDF Downloads 113
25197 Operative Technique of Glenoid Anteversion Osteotomy and Soft Tissue Rebalancing for Brachial Plexus Birth Palsy
Authors: Michael Zaidman, Naum Simanovsky
Abstract:
Most brachial plexus birth palsies are transient. Children with incomplete recovery almost always develop an internal rotation and adduction contracture. The muscle imbalance around the shoulder results in glenohumeral joint deformity and functional limitations. The natural history of glenohumeral deformity is its progression, with worsening of function. Anteversion glenoid osteotomy with latissimus dorsi and teres major tendon transfers can be an alternative to proximal humeral external rotation osteotomy for patients with severe glenohumeral dysplasia secondary to brachial plexus birth palsy. We discuss the pre-operative planning and the step-by-step operative technique of the procedure, illustrated by a clinical example.
Keywords: obstetric brachial plexus palsy, glenoid anteversion osteotomy, tendon transfer, operative technique
Procedia PDF Downloads 74
25196 Modelling Exchange-Rate Pass-Through: A Model of Oil Prices and Asymmetric Exchange Rate Fluctuations in Selected African Countries
Authors: Fajana Sola Isaac
Abstract:
In the last two decades, there has been increased interest in exchange rate pass-through (ERPT) in developing economies and emerging markets, perhaps due to the acknowledged significance of the pattern of exchange rate pass-through as a key instrument in monetary policy design, principally in response to an exchange rate shock. This paper analyzes exchange rate pass-through with a model of oil prices and asymmetric exchange rate fluctuations in selected African countries. The study adopts a non-linear autoregressive distributed lag (NARDL) approach using yearly data on Algeria, Burundi, Nigeria, and South Africa from 1986 to 2022. The paper finds asymmetry in exchange rate pass-through in both net oil-importing and net oil-exporting countries in the short run during the period under review. ERPT is complete in the short run for the net oil-importing countries but incomplete for the net oil-exporting countries examined. An extended result reveals a significant impact of oil price shocks on exchange rate pass-through to domestic prices in the long run only for net oil-importing countries. The Wald restriction test also confirms the evidence of asymmetry, with oil prices acting as an accelerator of exchange rate pass-through to domestic prices in the countries examined. These outcomes provide expansive knowledge of the impact of external shocks on ERPT and can be of critical value for national monetary policy decisions on inflation targeting, especially for the countries examined and other developing net oil importers and exporters.
Keywords: pass-through, exchange rate, ARDL, monetary policy
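The NARDL approach rests on decomposing the driving series into partial sums of positive and negative changes, which then enter the regression as separate regressors so that increases and decreases can carry different coefficients. A minimal sketch on simulated data (the series and coefficients are illustrative, not estimates for any of the countries studied):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
d_oil = rng.normal(0.0, 1.0, n)   # simulated oil-price changes

# Partial-sum decomposition used in NARDL: cumulative positive and
# negative changes become separate regressors.
pos = np.cumsum(np.where(d_oil > 0, d_oil, 0.0))
neg = np.cumsum(np.where(d_oil < 0, d_oil, 0.0))

# Hypothetical data-generating process with asymmetric pass-through
price = 0.8 * pos + 0.3 * neg + rng.normal(0.0, 0.5, n)

# OLS on the decomposed regressors recovers the asymmetric coefficients
X = np.column_stack([np.ones(n), pos, neg])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"pass-through of increases: {beta[1]:.2f}, of decreases: {beta[2]:.2f}")
```

Testing whether the two coefficients are equal (the paper's Wald restriction) is what distinguishes genuine asymmetry from a symmetric response.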
Procedia PDF Downloads 79
25195 A Study of Electrowetting-Assisted Mold Filling in Nanoimprint Lithography
Authors: Wei-Hsuan Hsu, Yi-Xuan Huang
Abstract:
Nanoimprint lithography (NIL) offers sub-10-nm features at low cost. NIL patterns the resist by physical deformation using a mold, which can easily reproduce the required nano-scale pattern. However, variations in process parameters and environmental conditions seriously affect reproduction quality, so ensuring the quality of the imprinted pattern is essential for industry. In this study, the authors used electrowetting technology to assist mold filling in the NIL process. A special mold structure was designed to induce electrowetting. During the imprinting process, when a voltage was applied between the mold and the substrate, the mold surface could be switched between hydrophobic and hydrophilic states. Both simulation and experiment confirmed that electrowetting can assist mold filling and reduce incomplete filling. The proposed method can also reduce crack formation during the de-molding process. Therefore, electrowetting technology can improve the process quality of NIL.
Keywords: electrowetting, mold filling, nano-imprint, surface modification
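The hydrophobic-to-hydrophilic switch exploited here is conventionally described by the Young-Lippmann relation, cos θ(V) = cos θ₀ + ε_r ε₀ V² / (2γd). A small sketch under assumed parameter values (the abstract does not give the mold's actual dielectric thickness, permittivity, or resist surface tension, so the numbers below are purely illustrative):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle(theta0_deg, voltage, eps_r, d, gamma):
    """Contact angle (degrees) under an applied voltage, per Young-Lippmann."""
    cos_t = math.cos(math.radians(theta0_deg)) + eps_r * EPS0 * voltage**2 / (2 * gamma * d)
    return math.degrees(math.acos(min(cos_t, 1.0)))  # clamp at full wetting

# Hypothetical values: 110 deg resting angle, 1 um dielectric, resist-like
# surface tension of 0.03 N/m.
a0 = contact_angle(110, 0, eps_r=3.0, d=1e-6, gamma=0.03)
a50 = contact_angle(110, 50, eps_r=3.0, d=1e-6, gamma=0.03)
print(a0, a50)  # the applied voltage lowers the contact angle
```

The direction of the effect is what matters for mold filling: a lower contact angle means the resist wets the mold cavities more readily.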
Procedia PDF Downloads 172
25194 Implementation of an IoT Sensor Data Collection and Analysis Library
Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee
Abstract:
Due to the development of information technology and wireless Internet technology, various data are being generated in many fields. These data are advantageous in that they provide real-time information to the users themselves, but when the data are accumulated and analyzed, far more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it possible to easily test various sensors, and sensor data can be collected directly using database tools such as MySQL. These directly collected data can be used for various research purposes and are useful as input for data mining. However, collecting data with such boards presents many difficulties, particularly when the user is not a computer programmer or is using the board for the first time. Even once data are collected, a lack of expert knowledge or experience may make data analysis and visualization difficult. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems.
Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data
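A minimal sketch of the collect-then-analyze workflow such a library targets, using SQLite as a stand-in for the MySQL setup the abstract mentions (table and column names are illustrative assumptions, not the library's actual API):

```python
import sqlite3
import statistics

# Collect: store invented sensor readings (sensor id, timestamp, value)
# in an in-memory database, the way a board-side collector might.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL)")
samples = [("temp", 1, 20.5), ("temp", 2, 21.0), ("temp", 3, 19.5)]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", samples)

# Analyze: pull back one sensor's series and summarize it.
rows = conn.execute("SELECT value FROM readings WHERE sensor = ?", ("temp",)).fetchall()
values = [v for (v,) in rows]
print(statistics.mean(values))  # mean of the three readings
```

The clustering algorithms listed in the keywords (k-means, k-medoids, DBSCAN) would then run over series like `values` once enough data has accumulated.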
Procedia PDF Downloads 378
25193 A Goal-Driven Crime Scripting Framework
Authors: Hashem Dehghanniri
Abstract:
Crime scripting is a simple and effective crime modeling technique that aims to improve security analysts' understanding of security and crime incidents. Low-quality scripts provide a wrong, incomplete, or overly complicated understanding of the crime commission process, which opposes the purpose of their application, e.g., identifying effective and cost-efficient situational crime prevention (SCP) measures. One important and overlooked factor in generating quality scripts is the crime scripting method. This study investigates the problems within existing crime scripting practices and proposes a crime scripting approach that contributes to generating quality crime scripts. It was validated by experienced crime scripters. This framework helps analysts develop better crime scripts and contributes to their effective application, e.g., SCP measure identification or policy-making.
Keywords: attack modelling, crime commission process, crime script, situational crime prevention
Procedia PDF Downloads 126
25192 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity, and that come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing within the entire government to deliver (big) data related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services such as data storage and hosting to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We showcase a graphical view of actors, roles, and their relationships in the government (big) data ecosystem, and we discuss our research findings. We found few published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we drew ideas for the government (big) data ecosystem from numerous related areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.
Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review
Procedia PDF Downloads 162
25191 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to the full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free-text is the principal data extracted from EHRs for classification, EHRs also contain normalized features.
Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To that end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable view of how our program classifies cases.
Keywords: artificial neural networks, data-mining, machine learning, medical informatics
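The lexeme-ranking step just described can be sketched in miniature: weight how often a term appears in positive (frequent-attender) cases by its inverse document frequency, so terms common to all notes score low. The toy notes below are invented for illustration; the paper's actual weighting also folds in the classifier's confidence, which is omitted here:

```python
import math
from collections import Counter

# Invented triage notes; positives stand in for frequent-attender cases.
positive_notes = ["chest pain recurring", "recurring headache", "pain medication refill"]
negative_notes = ["minor cut", "flu symptoms"]
docs = positive_notes + negative_notes

def idf(term, docs):
    """Inverse document frequency of a term over all notes."""
    df = sum(term in d.split() for d in docs)
    return math.log(len(docs) / df) if df else 0.0

# Score each positive-case term by (count in positives) * idf.
pos_counts = Counter(w for d in positive_notes for w in d.split())
scores = {t: c * idf(t, docs) for t, c in pos_counts.items()}
top = max(scores, key=scores.get)
print(top)  # the strongest frequent-attender indicator in this toy corpus
```

Terms that recur across positive cases but not across the corpus as a whole ("pain", "recurring") rise to the top, which is the human-interpretable output the abstract describes.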
Procedia PDF Downloads 131
25190 Factors Contributing to Building Construction Project’s Cost Overrun in Jordan
Authors: Ghaleb Y. Abbasi, Sufyan Al-Mrayat
Abstract:
This study examined the contribution of thirty-six factors to building construction projects' cost overrun in Jordan. A questionnaire was distributed to a random sample of 350 stakeholders comprising owners, consultants, and contractors, of whom 285 responded. SPSS analysis was conducted to identify the top five causes of cost overrun: a large number of variation orders, inadequate quantities provided in the contract, misunderstanding of the project plan, incomplete bid documents, and choosing the lowest price in contract bidding. There was agreement among the study participants in ranking the factors contributing to cost overrun, which indicates that these factors are very commonly encountered in most construction projects in Jordan. It is therefore crucial to enhance collaboration among the different project stakeholders to understand the project's objectives and set a realistic plan that takes into consideration all the factors that might influence project cost, which might ultimately prevent cost overrun.
Keywords: cost, overrun, building construction projects, Jordan
Procedia PDF Downloads 108
25189 Government Big Data Ecosystem: A Systematic Literature Review
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Data that are high in volume, velocity, and veracity, and that come from a variety of sources, are generated in all sectors, including government. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt data-centric architectures for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystems literature. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas such as humanitarian data, open government data, scientific research data, and industry data.
Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, egovernment, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review
Procedia PDF Downloads 229
25188 A Machine Learning Decision Support Framework for Industrial Engineering Purposes
Authors: Anli Du Preez, James Bekker
Abstract:
Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, only about 1% of generated data is ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen's framework development methodology. The study focused on machine learning algorithms, a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm given the purpose of their application.
Keywords: Data analytics, Industrial engineering, Machine learning, Value creation
Procedia PDF Downloads 168
25187 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we generate a lot of data and need a specific place to store it all. Generally, we store data on pen drives, hard drives, etc., but we may sometimes lose the data due to the corruption of these devices. To overcome these issues, we implemented a cloud space for storing the data, which provides greater security. The data can be accessed from anywhere in the world using only the internet. We implemented all of this in Java using the NetBeans IDE. Once a user uploads the data, they have no rights to change it. Uploaded files are stored in the cloud with the file name set to the system time, and the directory is created with random words. The cloud accepts a file only if it is smaller than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
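The storage rules the abstract describes (timestamp file names, randomly named directories, a 2 MB cap) can be sketched as follows. This is a minimal illustration in Python rather than the authors' Java implementation, and the directory layout and word list are assumptions:

```python
import os
import secrets
import tempfile
import time

MAX_SIZE = 2 * 1024 * 1024          # 2 MB upload limit, per the abstract
WORDS = ["amber", "breeze", "cedar", "dune"]  # hypothetical random-word pool

def store_upload(data: bytes, root: str) -> str:
    """Store an upload under root, mimicking the described naming scheme."""
    if len(data) >= MAX_SIZE:
        raise ValueError("file too large: uploads must be under 2 MB")
    # Directory named with random words; file named with the system time.
    dirname = "-".join(secrets.choice(WORDS) for _ in range(2))
    target = os.path.join(root, dirname)
    os.makedirs(target, exist_ok=True)
    path = os.path.join(target, str(int(time.time() * 1000)))
    with open(path, "wb") as f:
        f.write(data)
    return path

root = tempfile.mkdtemp()
saved = store_upload(b"hello", root)
print(os.path.exists(saved))  # True
```

The immutability rule ("no rights to change the data") would be enforced server-side by never exposing a write or overwrite operation on stored paths.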
Procedia PDF Downloads 206
25186 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business Intelligence is a methodology that exploits data to produce information and knowledge systematically, and it can support the decision-making process. Two methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data derived from transactional data; for data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns and gain insight from the data. Data mining has many techniques, one of which is segmentation. To profile telecommunication customers, we use customer segmentation according to customers' usage of services, customer invoices, and customer payments. Customers can be grouped according to their characteristics, and profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, with the RFM (Recency, Frequency and Monetary) model as the input variables. For all data mining processes, we use IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
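The K-Means/RFM segmentation described above can be illustrated in miniature. The paper itself uses IBM SPSS Modeler; the sketch below is a plain-Python stand-in with invented, pre-scaled customer rows (recency, frequency, monetary):

```python
# Toy k-means over RFM feature vectors. Points and initial centers are
# illustrative assumptions, not data from the paper.
def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers, clusters

rfm = [(0.9, 0.8, 0.9), (0.8, 0.9, 0.8),   # high-value customers
       (0.1, 0.2, 0.1), (0.2, 0.1, 0.2)]   # low-value customers
centers, clusters = kmeans(rfm, centers=[rfm[0], rfm[2]])
print(len(clusters[0]), len(clusters[1]))  # 2 2
```

The resulting cluster centers are the segment profiles: the segment whose center has high R/F/M values is the profitable group the abstract aims to identify.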
Procedia PDF Downloads 484