Search results for: data mining applications and discovery
26063 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset from a different domain, which limits the accuracy of the similarity measure. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores that are defined as zero for images in the same cluster. The proposed method outperforms state-of-the-art object similarity scoring techniques in an evaluation for finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
Keywords: visual search, deep learning, convolutional neural network, machine learning
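A minimal sketch of the training idea described above, pairwise distances on intermediate CNN features with a target distance of zero for images drawn from the same cluster, is given below. The backbone, pooling choice and loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SimilarityNet(nn.Module):
    """Embeds an image using an intermediate CNN representation plus pooling."""
    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)          # any small CNN backbone would do
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)               # one of several possible pooling methods
        self.proj = nn.Linear(512, embed_dim)

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)
        return nn.functional.normalize(self.proj(f), dim=1)

def pairwise_similarity_loss(emb_a, emb_b, same_cluster, margin=1.0):
    """Target distance is zero for image pairs drawn from the same cluster."""
    d = nn.functional.pairwise_distance(emb_a, emb_b)
    pos = same_cluster * d.pow(2)                         # pull same-cluster pairs together
    neg = (1.0 - same_cluster) * torch.clamp(margin - d, min=0).pow(2)
    return (pos + neg).mean()
```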
Procedia PDF Downloads 219
26062 Systematic Review and Meta-Analysis of Mid-Term Survival, and Recurrent Mitral Regurgitation for Robotic-Assisted Mitral Valve Repair
Authors: Ramanen Sugunesegran, Michael L. Williams
Abstract:
Over the past two decades, surgical approaches for mitral valve (MV) disease have evolved with the advent of minimally invasive techniques. Robotic mitral valve repair (RMVr) safety and efficacy have been well documented; however, mid- to long-term data are limited. The aim of this review was to provide a comprehensive analysis of the available mid- to long-term data for RMVr. Electronic searches of five databases were performed to identify all relevant studies reporting minimum 5-year data on RMVr. Pre-defined primary outcomes of interest were overall survival, freedom from MV reoperation and freedom from moderate or worse mitral regurgitation (MR) at 5 years or more post-RMVr. A meta-analysis of proportions or means was performed, utilizing a random effects model, to present the data. Kaplan-Meier curves were aggregated using reconstructed individual patient data. Nine studies totaling 3,300 patients undergoing RMVr were identified. Rates of overall survival at 1, 5 and 10 years were 99.2%, 97.4% and 92.3%, respectively. Freedom from MV reoperation at 8 years post-RMVr was 95.0%. Freedom from moderate or worse MR at 7 years was 86.0%. Rates of early post-operative complications were low, with only 0.2% all-cause mortality and 1.0% cerebrovascular accident. Reoperation for bleeding was low at 2.2%, and RMVr was successful in 99.8%. Mean intensive care unit and hospital stays were 22.4 hours and 5.2 days, respectively. RMVr is a safe procedure with low rates of early mortality and other complications. It can be performed with low complication rates in high-volume, experienced centers. Evaluation of available mid-term data post-RMVr suggests favorable rates of overall survival, freedom from MV reoperation and freedom from moderate or worse MR recurrence.
Keywords: mitral valve disease, mitral valve repair, robotic cardiac surgery, robotic mitral valve repair
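The pooling step described above can be illustrated with a short random-effects calculation. The sketch below applies the DerSimonian-Laird estimator to logit-transformed proportions using invented study counts, not the review's extracted data.

```python
import numpy as np

def pool_proportions(events, totals):
    """Random-effects pooling of proportions on the logit scale (DerSimonian-Laird)."""
    p = events / totals
    y = np.log(p / (1 - p))                      # logit transform
    v = 1 / events + 1 / (totals - events)       # approximate within-study variance
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance (DL estimator)
    w_star = 1 / (v + tau2)
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-y_pooled))           # back-transform to a proportion

# e.g. 5-year survivors out of patients followed, per study (made-up values)
print(pool_proportions(np.array([290.0, 480.0, 150.0]), np.array([300.0, 500.0, 155.0])))
```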
Procedia PDF Downloads 85
26061 Development of mHealth Information in Community Based on Geographical Information: A Case Study from Saraphi District, Chiang Mai, Thailand
Authors: Waraporn Boonchieng, Ekkarat Boonchieng, Wilawan Senaratana, Jaras Singkaew
Abstract:
Geographical information system (GIS) is a designated system widely used for collecting and analyzing geographical data. Since the introduction of ultra-mobile 'smart' devices, investigators, clinicians, and even the general public have had powerful new tools for collecting, uploading and accessing information in the field. Epidemiology paired with GIS will increase the efficacy of preventive health care services. The objective of this study is to apply the GPS location services available on common mobile devices to district health systems, storing data on our private cloud system. The mobile application has been developed for use on iOS, Android, and web-based platforms. The system consists of two parts of district health information, including recorded resident data forms and individual health record data forms, which were developed and approved through opinion sharing and public hearings. The application's graphical user interface was developed using HTML5 and PHP with MySQL as the database management system (DBMS). The reporting module of the developed software displays data in a variety of views, from traditional tables to various types of high-resolution, layered graphics, incorporating map location information with street views from Google Maps. Multi-format exporting is also supported, utilizing standard formats such as PDF, PNG, JPG, and XLS. The data were collected in the database beginning in March 2013 by district health volunteers and district youth volunteers who had completed the application training program. District health information consisted of patients' household coordinates, individual health data, and social and economic information. This was combined with Google Street View data collected in March 2014. The study group consisted of 16,085 households (67.87% of the district's 23,701 households) and 47,811 people (59.87% of its 79,855 residents) whose data were collected by the system in Saraphi district, Chiang Mai Province. The report generated from the system has directly benefited the Saraphi District Hospital. Healthcare providers are able to use the basic health data to provide specific home health care services and also to create health promotion activities according to the medical needs of the people in the community.
Keywords: health, public health, GIS, geographic information system
Procedia PDF Downloads 339
26060 Design and Synthesis of Fully Benzoxazine-Based Porous Organic Polymer Through Sonogashira Coupling Reaction for CO₂ Capture and Energy Storage Application
Authors: Mohsin Ejaz, Shiao-Wei Kuo
Abstract:
The growing production and exploitation of fossil fuels have confronted human society with serious environmental issues. As a result, it is critical to design efficient and eco-friendly energy production and storage techniques. Porous organic polymers (POPs) are multi-dimensional porous network materials developed through the formation of covalent bonds between different organic building blocks that possess distinct geometries and topologies. POPs have tunable porosities and high surface areas, making them good candidates for effective electrode materials in energy storage applications. Herein, we prepared a fully benzoxazine-based porous organic polymer (TPA–DHTP–BZ POP) through Sonogashira coupling of dihydroxyterephthalaldehyde (DHTP)- and triphenylamine (TPA)-containing benzoxazine (BZ) monomers. Firstly, both BZ monomers (TPA-BZ-Br and DHTP-BZ-Ea) were synthesized in three steps: Schiff base formation, reduction, and Mannich condensation. Finally, the TPA–DHTP–BZ POP was prepared through the Sonogashira coupling reaction of the brominated monomer (TPA-BZ-Br) and the ethynyl monomer (DHTP-BZ-Ea). Fourier transform infrared (FTIR) and solid-state nuclear magnetic resonance (NMR) spectroscopy confirmed the successful synthesis of the monomers as well as the POP. The porosity of TPA–DHTP–BZ POP was investigated by the N₂ adsorption technique and showed a Brunauer–Emmett–Teller (BET) surface area of 196 m² g−¹, a pore size of 2.13 nm and a pore volume of 0.54 cm³ g−¹. The TPA–DHTP–BZ POP underwent thermal ring-opening polymerization, resulting in poly(TPA–DHTP–BZ) POP with strong inter- and intramolecular hydrogen bonds formed by phenolic groups and Mannich bridges, thereby enhancing CO₂ capture and supercapacitive performance. The poly(TPA–DHTP–BZ) POP demonstrated a remarkable CO₂ capture of 3.28 mmol g−¹ and a specific capacitance of 67 F g−¹ at 0.5 A g−¹. Thus, poly(TPA–DHTP–BZ) POP could potentially be used for energy storage and CO₂ capture applications.
Keywords: porous organic polymer, benzoxazine, Sonogashira coupling, CO₂, supercapacitor
Procedia PDF Downloads 76
26059 Mitigation Measures for the Acid Mine Drainage Emanating from the Sabie Goldfield: Case Study of the Nestor Mine
Authors: Rudzani Lusunzi, Frans Waanders, Elvis Fosso-Kankeu, Robert Khashane Netshitungulwana
Abstract:
The Sabie Goldfield has a history of gold mining dating back more than a century. Acid mine drainage (AMD) from the Nestor mine tailings storage facility (MTSF) poses a serious threat to the nearby ecosystem, specifically the Sabie River system. This study aims at developing mitigation measures for the AMD emanating from the Nestor MTSF using materials from the Glynns Lydenburg MTSF. The Nestor MTSF (NM) and the Glynns Lydenburg MTSF (GM) each provided about 20 kg of bulk composite samples. Two mixtures were created using samples from the Nestor MTSF and the Glynns Lydenburg MTSF. MIX-A contains 25 wt.% GM and 75 wt.% NM. The second mixture, MIX-B, contains 50 wt.% NM and 50 wt.% GM. The same static tests, i.e., acid–base accounting (ABA), net acid generation (NAG), and the acid buffering characteristics curve (ABCC), were used to estimate the acid-generating probabilities of samples NM and GM as well as of MIX-A and MIX-B. Furthermore, the mineralogy of the Nestor MTSF samples consists of the primary acid-producing mineral pyrite as well as the secondary minerals ferricopiapite and jarosite, which are common under acidic conditions. The Glynns Lydenburg MTSF samples, on the other hand, contain the primary acid-neutralizing minerals calcite and dolomite. Based on the assessment conducted, materials from the Glynns Lydenburg MTSF are capable of neutralizing AMD from the Nestor MTSF. Therefore, the alkaline tailings materials from the Glynns Lydenburg MTSF can be used to rehabilitate the acidic Nestor MTSF.
Keywords: Nestor Mine, acid mine drainage, mitigation, Sabie River system
Procedia PDF Downloads 90
26058 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset only if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
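As a quick illustration of the Fisher scoring iteration mentioned above, the sketch below applies it to a simple log-link regression in Python. The composite-distribution likelihood of the paper would replace the score and information terms and is not reproduced here.

```python
import numpy as np

def fisher_scoring(X, y, n_iter=25, tol=1e-8):
    """Fisher scoring for MLE, shown for a log-link (Poisson-type) regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # mean under the log link
        score = X.T @ (y - mu)                # gradient of the log-likelihood
        info = X.T @ (mu[:, None] * X)        # expected Fisher information
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))
print(fisher_scoring(X, y))   # should recover roughly [0.5, 0.8]
```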
Procedia PDF Downloads 40
26057 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks of digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks involving hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures- Critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data- Sensors break, and when they do so, they don't always go silent; they can keep going, but the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection- Erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity- The weight of the data collected will affect data mobility. (5) Cost inhibitors- Running services that need huge centralized computing is cost inhibiting; large, complex AI can be extremely expensive to run. Active risks: (1) Denial of service- One of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. (2) Malware- Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator is stealing computer power and bandwidth from you to attack someone else. (3) Ransomware- A kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. (4) DNS spoofing- By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while at the same time remaining accessible via blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
Procedia PDF Downloads 110
26056 Decision Making for Industrial Engineers: From Phenomenon to Value
Authors: Ali Abbas
Abstract:
Industrial Engineering is a broad multidisciplinary field with intersections and applications in numerous areas. In our current environment, the path from a phenomenon to value involves numerous people with expertise in various areas, including domain knowledge of a field and the ability to make decisions within an operating environment that lead to value creation. We propose some skills that industrial engineering programs should focus on, and argue that an industrial engineer is a decision maker rather than a problem solver.
Keywords: decision analysis, problem-solving, value creation, industrial engineering
Procedia PDF Downloads 378
26055 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis
Authors: Wang Shuai, Su Rong, K. J. Tseng, V. Viswanathan, S. Ramakrishna
Abstract:
The More-Electric-Aircraft concept in the aircraft industry places an increasing demand on embedded starter/generators (ESG). The high-speed and high-temperature environment within an engine poses great challenges to the operation of such machines. In view of such challenges, squirrel cage induction machines (SCIM) have shown advantages due to their simple rotor structure, absence of temperature-sensitive components, and low torque ripple. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resultant non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole pair number p, slots/pole/phase q and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations led to feasible solutions, and the corresponding observations have been elaborated. The developed mathematical procedures also proved to be an effective framework for optimization among electromagnetic, thermal and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis has been conducted to validate the resultant machine performance against the design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas to accommodate more windings. Since the available space for embedding such machines inside an engine is usually short in length, an axial air gap arrangement appears more appealing than its radial gap counterpart. The aforementioned approach has been adopted in case studies designing series of AFIMs and RFIMs, respectively, with increasing power ratings. The following observations were obtained. Under the strict rotor diameter limitation, the AFIM extended axially to provide the increased slot areas, while the RFIM expanded radially with the same axial length. Beyond certain power ratings, the AFIM led to a long cylindrical geometry, while the RFIM topology resulted in the desired short disk shape. Besides the different dimension growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradations regarding power factor, torque ripple and rated slip as power ratings increased. Parametric response curves were plotted to better illustrate these influences of increased power ratings. The case studies may provide a basic guideline to assist potential users in making decisions between AFIM and RFIM for relevant applications.
Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine
Procedia PDF Downloads 459
26054 A Transition Towards Sustainable Feed Production Using Algae: The Development of Algae Biotechnology in the Kingdom of Saudi Arabia (DAB-KSA Project)
Authors: Emna Mhedhbi, Claudio Fuentes Grunewald
Abstract:
According to preliminary results of the DAB-KSA project, and considering the current 0.09-ha microalgae pilot plant facilities, we can produce 2.6 tons/year of microalgae biomass for protein applications in animal feeds in KSA. By 2030, our projections are to reach 65,940,593.4 tons by deploying 100,000 ha of production plants. We have also assessed the industrial energy cost in KSA (€0.061/kWh) and, compared to Germany (€0.32/kWh), we can argue for a clearly lower OPEX for microalgae biomass production in KSA.
Keywords: microalgae, feed production, bioprocess, fishmeal
Procedia PDF Downloads 198
26053 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns that are present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict what a certain outcome would be, based on a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem the technique needs to tackle. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which involve prediction of quantitative variables using a continuous function, and classification problems, which seek to predict outcomes for discrete qualitative variables. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper exposes the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.
Keywords: cyberbullying, evidence based practice, machine learning, social work research
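A hedged sketch of the kind of classifier described above is shown below. The file name and column names are assumptions for illustration; the survey extract itself is not reproduced here.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical extract of the 2017 polyvictimization survey
df = pd.read_csv("polyvictimization_2017.csv")

# One-hot encode the categorical predictors named in the abstract
X = pd.get_dummies(df[["gender", "age", "grade", "school_type", "self_esteem"]],
                   drop_first=True)
y = df["cyberbullying"]            # 1 = experienced cyberbullying, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```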
Procedia PDF Downloads 172
26052 Herbal Medicines Used for the Cure of Jaundice among Some Tribal Populations of Madhya Pradesh, India
Authors: Awdhesh Narayan Sharma
Abstract:
The use of herbal medicines for the cure of various ailments among tribal populations is as old as human origin itself. Most of the tribal populations of Madhya Pradesh inhabit remote and inaccessible ecological settings. Tribal communities and forests have long been interrelated. They use an enormous range of wild plants for their basic needs and medicines. The tribes developed a unique understanding of wild plants, herbs, etc., and earned specialized knowledge of disease patterns and curative therapy through hard experience, common sense, and trial-and-error methods. They have passed this knowledge on through traditions, taboos, totems and folklore, by word of mouth, from generation to generation. Here, an attempt has been made to study the possible aspects of herbal medicine for the cure of jaundice among the tribal populations of Madhya Pradesh, India, through primary data as well as available secondary data. The data have been collected from 305 Bharias of Patalkot, Madhya Pradesh, India, and include the available secondary sources of data from various investigators. It may be concluded that a sizable wealth of herbal medicinal plants exists in Madhya Pradesh, India, which still awaits scientific exploration. The existing herbal medicines used for the cure of jaundice need extensive investigation from the pharmaceutical point of view.
Keywords: Bharias, herbal medicine, tribal, Madhya Pradesh
Procedia PDF Downloads 179
26051 Characterization of Internet Exchange Points by Using Quantitative Data
Authors: Yamba Dabone, Tounwendyam Frédéric Ouedraogo, Pengwendé Justin Kouraogo, Oumarou Sie
Abstract:
Reliable data transport over the Internet is one of the goals of researchers in the field of computer science. Data such as video and audio files are becoming increasingly large. As a result, transporting them over the Internet is becoming difficult. Therefore, it has been important to establish a method to locally interconnect autonomous systems (AS) with each other to facilitate traffic exchange. It is in this context that Internet Exchange Points (IXPs) are set up to facilitate local and even regional traffic. They are now the lifeblood of the Internet. It is therefore important to think about the factors that can characterize IXPs, and quantifiable characteristics in particular can help determine the quality of an IXP. In addition, these characteristics may allow ISPs to have a clearer view of the exchange node and may also convince other networks to connect to an IXP. To that end, we define new IXP characteristics: the attraction rate (τₐₜₜᵣ), the peering rate (τₚₑₑᵣ), the target rate of an IXP (Objₐₜₜ), the number of IXP links (Nₗᵢₙₖ), the resistance rate (τₑ𝒻𝒻), and the attraction failure rate (τ𝒻).
Keywords: characteristic, autonomous system, internet service provider, internet exchange point, rate
Procedia PDF Downloads 101
26050 Statistic Regression and Open Data Approach for Identifying Economic Indicators That Influence e-Commerce
Authors: Apollinaire Barme, Simon Tamayo, Arthur Gaudron
Abstract:
This paper presents a statistical approach to identify explanatory variables linearly related to e-commerce sales. The proposed methodology allows specifying a regression model in order to quantify the relationship between openly available data (economic and demographic) and national e-commerce sales. The proposed methodology consists in collecting data, preselecting input variables, performing regressions for choosing variables and models, and testing and validating. The usefulness of the proposed approach is twofold: on the one hand, it allows identifying the variables that influence e-commerce sales with an accessible approach; on the other hand, it can be used to model future sales from the input variables. Results show that e-commerce is linearly dependent on 11 economic and demographic indicators.
Keywords: e-commerce, statistical modeling, regression, empirical research
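A simplified sketch of the variable-selection step is given below: fit an ordinary least squares model of e-commerce sales on candidate economic and demographic indicators, then drop the least significant variable until all remaining p-values pass a threshold. The file name, column names and 5% threshold are illustrative assumptions, not the paper's exact procedure.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("open_data_indicators.csv")             # hypothetical open-data extract
y = df["ecommerce_sales"]
candidates = [c for c in df.columns if c != "ecommerce_sales"]

while candidates:
    model = sm.OLS(y, sm.add_constant(df[candidates])).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:                              # all remaining variables significant
        break
    candidates.remove(worst)                              # backward elimination

print(model.summary())                                    # retained indicators and coefficients
```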
Procedia PDF Downloads 230
26049 A Reasoning Method of Cyber-Attack Attribution Based on Threat Intelligence
Authors: Li Qiang, Yang Ze-Ming, Liu Bao-Xu, Jiang Zheng-Wei
Abstract:
With the increasing complexity of cyberspace security, cyber-attack attribution has become an important challenge for security protection systems. The difficulties of cyber-attack attribution center on the problems of handling huge volumes of data and of missing key data. In view of this situation, this paper presents a reasoning method for cyber-attack attribution based on threat intelligence. The method utilizes the intrusion kill chain model and Bayesian networks to build the attack chain and evidence chain of a cyber-attack on a threat intelligence platform through data calculation, analysis and reasoning. We then used a number of cyber-attack events that we have observed and analyzed to test the reasoning method and a demo system; the test results indicate that the reasoning method can provide practical help in cyber-attack attribution.
Keywords: reasoning, Bayesian networks, cyber-attack attribution, kill chain, threat intelligence
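To make the evidence-chain idea concrete, here is a toy posterior calculation over candidate actors given two kill-chain observations. Every probability below is invented for illustration and is not taken from the paper's platform or data.

```python
# Candidate actors and a prior belief over them
actors = ["actor_A", "actor_B", "unknown"]
prior = {"actor_A": 0.2, "actor_B": 0.3, "unknown": 0.5}

# P(observed indicator | actor) for two kill-chain stages,
# assumed conditionally independent given the actor (a simplifying assumption)
p_delivery_infra = {"actor_A": 0.7, "actor_B": 0.1, "unknown": 0.05}
p_malware_family = {"actor_A": 0.6, "actor_B": 0.2, "unknown": 0.05}

# Bayes' rule: posterior proportional to prior times the likelihood of the evidence chain
joint = {a: prior[a] * p_delivery_infra[a] * p_malware_family[a] for a in actors}
z = sum(joint.values())
posterior = {a: joint[a] / z for a in actors}
print(posterior)   # attribution weight concentrates on actor_A in this example
```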
Procedia PDF Downloads 455
26048 A Pre-Assessment Questionnaire to Identify Healthcare Professionals’ Perception on Information Technology Implementation
Authors: Y. Atilgan Şengül
Abstract:
Health information technologies promise higher quality, safer care and much more for both patients and professionals. Despite their promise, they are costly to develop and difficult to implement. On the other hand, user acceptance and usage determine the success of implemented information technology in healthcare. This study provides a model to understand health professionals' perceptions and expectations of health information technology. An extensive literature review was conducted to determine the main factors to be measured. A questionnaire was designed as a measurement model and submitted to the personnel of an in vitro fertilization clinic. The respondents' degree of agreement according to a five-point Likert scale was 72% for convenient access to data and 69.4% for the importance of data security. There was a significant difference in acceptance of electronic data storage for female respondents. Other significant differences between professions were also obtained.
Keywords: healthcare, health informatics, medical record system, questionnaire
Procedia PDF Downloads 177
26047 Validation of Electrical Field Effect on Electrostatic Desalter Modeling with Experimental Laboratory Data
Authors: Fatemeh Yazdanmehr, Iulian Nistor
Abstract:
The scope of the current study is the evaluation of the electric field effect on electrostatic desalting mathematical modeling using laboratory data. This research was focused on developing a model for an existing operating desalting unit of one of the Iranian heavy oil fields with a 75 MBPD production capacity. The high temperature of the inlet oil to the dehydration unit reduces oil recovery, so mathematical modeling of the desalter operating parameters is very significant. The existing production unit's operating data have been used to establish the accuracy of the mathematical desalting plant model. The inlet oil temperature to the desalter was decreased from 110 to 80°C, and the desalter electrical field was increased from 0.75 to 2.5 kV/cm. The model results show that the changed desalter parameters meet the water-oil specification, and that oil production, and consequently annual income, is increased. In addition, changing the desalter operating conditions reduces the environmental footprint because of flare gas reduction. To verify the accuracy of the selected electrostatic desalter electrical field, laboratory data were used. Experimental data were used to confirm the effect of the electrical field change on the desalter; therefore, a lab test was done on a crude oil sample. The results include the dehydration efficiency in the presence of a demulsifier and under electrical field (0.75 kV) conditions at various temperatures. Comparing the lab experiments with the electrostatic desalter mathematical model results shows an acceptable error of 1-3 percent, which confirms the validity of the desalter specification and operating condition changes.
Keywords: desalter, electrical field, demulsification, mathematical modeling, water-oil separation
Procedia PDF Downloads 146
26046 Framework to Quantify Customer Experience
Authors: Anant Sharma, Ashwin Rajan
Abstract:
Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a Key Performance Indicator (referred to as KPI in the rest of the paper), this approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is reflected not only in metrics like Customer Satisfaction or Net Promoter Score but also across other measurements like recurring revenue, frequency of service usage, e-learning and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view. Rather than rolling customers up to a metric, we roll up metrics to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources which contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. This data can be mined systematically to find linkages between different data points like geographies, business groups, products and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom types of reports. We have created a framework that allows us to measure customer experience using the above logic.
Keywords: analytics, customer experience, BI, business operations, KPIs, metrics
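A toy sketch of the metric roll-up described above is given below. The file, column names, hierarchy levels and weights are all assumptions, and the metrics are assumed to be pre-scaled to comparable ranges.

```python
import pandas as pd

# Hypothetical blend of touchpoint-level data sources (CSAT/CXSAT, NPS, renewals, usage depth)
df = pd.read_csv("customer_touchpoints.csv")

# Roll metrics up to a chosen hierarchy instead of rolling customers up to a single metric
weights = {"cxsat": 0.4, "nps": 0.3, "renewal_rate": 0.2, "usage_depth": 0.1}   # assumed weights
rollup = df.groupby(["geography", "business_group", "product"])[list(weights)].mean()
rollup["experience_score"] = sum(w * rollup[m] for m, w in weights.items())

print(rollup.sort_values("experience_score").head())   # weakest experience pockets first
```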
Procedia PDF Downloads 80
26045 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement
Authors: Rhadinia Tayag-Relanes, Felina C. Young
Abstract:
This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the PDCA approach and record review to gather data for August to October of calendar year 2019 on the noodle products miki, canton, and misua. Causal-comparative research was used in this study; it attempts to establish cause-effect relationships among the variables. Descriptive statistics and correlation were both used to analyze the data gathered. The study found that miki, canton, and misua production have different cycle times for each production set, different production outputs in every set of the production process, and different amounts of wastage. The company has not yet established an allowable rejection/wastage rate; instead, this paper used a 1% wastage limit. The researcher recommended the following: the machines used for each process of the noodle products must be consistently maintained and monitored; all production operators should be assessed by checking their performance statistically based on output and machine performance; a root cause analysis must be conducted to find solutions; and an improved recording system for the inputs and outputs of the noodle production process should be established to eliminate poor recording of data.
Keywords: continuous improvement, process, operations, PDCA
Procedia PDF Downloads 78
26044 Light-Weight Network for Real-Time Pose Estimation
Authors: Jianghao Hu, Hongyu Wang
Abstract:
An effective and efficient human pose estimation algorithm is essential for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key-point detection algorithm, Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. LWPE uses a feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. In the meantime, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to provide intermediate supervision of the network and continuously refine the results. In the last step, the key-point coordinates predicted at the highest resolution are used as the final output of the network. For key points that are difficult to predict, LWPE adopts an online hard key-point mining strategy to focus on the key points that are hard to predict. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model only contains 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone
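A minimal sketch of the depthwise separable convolution used to keep such backbones light is shown below: a depthwise 3x3 convolution (one filter per channel) followed by a pointwise 1x1 convolution, which costs far fewer parameters than a standard 3x3 convolution. This is a generic building block, not LWPE's exact layer configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise (one filter per input channel)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # 1x1 pointwise convolution mixes channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 64, 56, 56)
print(DepthwiseSeparableConv(64, 128)(x).shape)   # torch.Size([1, 128, 56, 56])
```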
Procedia PDF Downloads 156
26042 Modelling the Indonesian Government Securities Yield Curve Using Nelson-Siegel-Svensson and Support Vector Regression
Authors: Jamilatuzzahro, Rezzy Eko Caraka
Abstract:
The yield curve is the plot of the yield to maturity of zero-coupon bonds against maturity. In practice, the yield curve is not observed but must be extracted from observed bond prices for a set of (usually) incomplete maturities. There exist many methodologies and theories for analyzing the yield curve. We use two methods (the Nelson-Siegel-Svensson method and the SVR method) in order to construct and compare our zero-coupon yield curves. The objectives of this research were: (i) to study the adequacy of the NSS model and SVR for Indonesian government bond data, and (ii) to choose the best optimization or estimation method for the NSS model and SVR. To achieve these objectives, the research proceeded through the following steps: data preparation, cleaning or filtering of data, modeling, and model evaluation.
Keywords: support vector regression, Nelson-Siegel-Svensson, yield curve, Indonesian government
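For reference, a small sketch of the Svensson (NSS) zero-coupon yield function is given below; fitting it to observed yields then becomes a least-squares problem over (b0, b1, b2, b3, tau1, tau2). The parameter values printed here are placeholders, not estimates from the Indonesian data.

```python
import numpy as np

def nss_yield(m, b0, b1, b2, b3, tau1, tau2):
    """Zero-coupon yield at maturity m under the Nelson-Siegel-Svensson form."""
    x1 = m / tau1
    x2 = m / tau2
    term1 = (1 - np.exp(-x1)) / x1                 # level/slope loading
    term2 = term1 - np.exp(-x1)                    # first hump (Nelson-Siegel)
    term3 = (1 - np.exp(-x2)) / x2 - np.exp(-x2)   # second hump (Svensson extension)
    return b0 + b1 * term1 + b2 * term2 + b3 * term3

maturities = np.array([0.25, 1, 2, 5, 10, 20])
print(nss_yield(maturities, 0.07, -0.02, 0.01, 0.015, 1.5, 8.0))   # illustrative parameters
```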
Procedia PDF Downloads 249
26041 Compact 3-D Co-Planar Waveguide Fed Dual-Port Ultrawideband-Multiple-Input and Multiple-Output Antenna with WLAN Band-Notched Characteristics
Authors: Asim Quddus
Abstract:
A miniaturized three-dimensional co-planar waveguide (CPW) two-port MIMO antenna, exhibiting high isolation and WLAN band-notched characteristics, is presented in this paper for ultrawideband (UWB) communication applications. The microstrip patch antenna operates as a single UWB antenna element. The proposed design is a cuboid-shaped structure with a compact size of 35 x 27 x 45 mm³. The radiating as well as the decoupling structures are placed around a cuboidal polystyrene sheet. The radiators are 27 mm apart, placed face-to-face in the vertical direction, and the decoupling structure is placed on the side walls of the polystyrene. The proposed antenna consists of an oval-shaped radiating patch. A rectangular structure with fillet edges is placed on the ground plane to enhance the bandwidth. The proposed antenna exhibits a good impedance match (S11 ≤ -10 dB) over the frequency band of 2 GHz – 10.6 GHz. A circular slotted structure is employed as a decoupling structure on the substrate and is placed on the side walls of the polystyrene to enhance the isolation between antenna elements. Moreover, to achieve immunity from WLAN band distortion, a modified, inverted crescent-shaped slotted structure is etched on the radiating patches to achieve band-rejection characteristics in the WLAN frequency band of 4.8 GHz – 5.2 GHz. The suggested decoupling structure provides isolation better than 15 dB over the desired UWB spectrum. The envelope correlation coefficient (ECC) and gain of the MIMO antenna are analyzed as well. Finite Element Method (FEM) simulations are carried out in the Ansys High Frequency Structure Simulator (HFSS) for the proposed design. The antenna is realized on Rogers RT/duroid 5880 with a thickness of 1 mm and relative permittivity ɛr = 2.2. The proposed antenna achieves stable omni-directional radiation patterns while providing rejection in the desired WLAN band. The S-parameters as well as MIMO parameters like ECC are analyzed, and the results show conclusively that the design is suitable for portable MIMO-UWB applications.
Keywords: 3-D antenna, band-notch, MIMO, UWB
Procedia PDF Downloads 299
26040 Influencers of E-Learning Readiness among Palestinian Secondary School Teachers: An Explorative Study
Authors: Fuad A. A. Trayek, Tunku Badariah Tunku Ahmad, Mohamad Sahari Nordin, Mohammed AM Dwikat
Abstract:
This paper reports the results of an exploratory factor analysis procedure applied to e-learning readiness data obtained from a survey of four hundred and seventy-nine (N = 479) teachers from secondary schools in Nablus, Palestine. The data were drawn from a 23-item Likert questionnaire measuring e-learning readiness based on Chapnick's conception of the construct. Principal axis factoring (PAF) with Promax rotation applied to the data extracted four distinct factors supporting four of Chapnick's e-learning readiness dimensions, namely technological readiness, psychological readiness, infrastructure readiness and equipment readiness. Together these four dimensions explained 56% of the variance. These findings provide further support for the construct validity of the items and for the existence of these four factors that measure e-learning readiness.
Keywords: e-learning, e-learning readiness, technological readiness, psychological readiness, principal axis factoring
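For readers unfamiliar with the extraction step, the sketch below shows a compact principal axis factoring routine applied to a correlation matrix. It is a generic illustration, not the study's analysis script, and the Promax rotation used in the paper is omitted for brevity.

```python
import numpy as np

def principal_axis_factoring(R, k, n_iter=100, tol=1e-6):
    """Iterated PAF: refine communalities on the diagonal of the reduced correlation matrix."""
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))         # initial communalities: squared multiple correlations
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)                    # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        idx = np.argsort(vals)[::-1][:k]            # keep the k largest factors
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        h2_new = np.sum(loadings ** 2, axis=1)
        if np.max(np.abs(h2_new - h2)) < tol:
            break
        h2 = h2_new
    return loadings                                 # unrotated factor loadings
```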
Procedia PDF Downloads 404
26039 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas
Authors: Julien Caudeville, Muriel Ismert
Abstract:
Analyzing the relationship between environment and health has become a major preoccupation for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified the following two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At a regional scale and the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase the representativeness of spatial exposure assessment approaches, composite risk indicators can be built using additional available databases and theoretical frameworks for combining risk factors. To achieve those objectives, combining data processing and transfer modeling with a spatial approach is a fundamental prerequisite, which implies the need to first overcome several scientific limitations: to define the interest variables and indicators that could be built to associate and describe the global source-effect chain; to link and process data from different sources and different spatial supports; and to develop adapted methods in order to improve spatial data representativeness and resolution. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combined chemical substances (in soil, air and water) and noise risk factors. Tools were developed using modeling, spatial analysis and geostatistical methods to build and discretize the interest variables from different supports and resolutions on a 1 km² regular grid within the Lorraine region. For example, surface soil concentrations were estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. The distance from a polluted soil site was used to build a proxy for contaminated sites. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a distribution unit service map. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was performed using different methodologies in order to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decisions to safeguard citizens' health. The results make it possible to identify pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by the French Regional Health Observatories.
Keywords: health risk, environment, composite indicator, hotspot areas
Procedia PDF Downloads 252
26038 Effect of Graphene on the Structural and Optical Properties of Ceria:Graphene Nanocomposites
Authors: R. Udayabhaskar, R. V. Mangalaraja, V. T. Perarasu, Saeed Farhang Sahlevani, B. Karthikeyan, David Contreras
Abstract:
Bandgap engineering of CeO₂ nanocrystals is of high interest to many research groups seeking to meet the requirements of desired applications. The band gap of CeO₂ nanostructures can be modified by varying the particle size, morphology and dopants. Anchoring metal oxide nanostructures on graphene sheets results in composites with improved properties compared to the parent materials. The presence of graphene sheets acts as a support for growth, influences the morphology and provides external paths for electronic transitions. Thus, the controllable synthesis of ceria:graphene composites with various morphologies, and an understanding of their optical properties, is highly important for the usage of these materials in various applications. The development of ceria and ceria:graphene composites by a low-cost, rapid synthesis with tunable optical properties is still desirable. In this work, we discuss the synthesis of pure ceria (nanospheres) and ceria:graphene composites (nano-rice-like morphology) using a commercial microwave oven as a cost-effective and environmentally friendly approach. The influence of graphene on the crystallinity, morphology, band gap and luminescence of the synthesized samples was analyzed. The average crystallite size of the CeO₂ nanostructures, obtained using the Scherrer formula, showed a decreasing trend with increasing graphene loading. The composite with higher graphene loading clearly depicted a nano-rice-like morphology with a diameter below 10 nm and a length over 50 nm. The presence of graphene- and ceria-related vibrational modes (100-4000 cm⁻¹) confirmed the successful formation of the composites. We observed an increase in band gap (blue shift) with increasing graphene loading. Further, the luminescence related to various F-centers was quenched in the composites. The authors gratefully acknowledge FONDECYT Project No. 3160142 and BECA Conicyt National Doctorado 2017 No. 21170851, Government of Chile, Santiago, for the financial assistance.
Keywords: ceria, graphene, luminescence, blue shift, band gap widening
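For reference, the Scherrer estimate mentioned above is D = Kλ/(β cos θ). A small helper is sketched below with placeholder peak values (Cu Kα wavelength and an approximate CeO₂ (111) peak position), not the broadening values measured in this work.

```python
import numpy as np

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = np.radians(fwhm_deg)                 # peak broadening (FWHM) in radians
    theta = np.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * np.cos(theta))

# Cu K-alpha wavelength, an assumed 0.9 degree FWHM, CeO2 (111) peak near 28.5 degrees 2-theta
print(scherrer_size(0.15406, 0.9, 28.5), "nm")
```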
Procedia PDF Downloads 196
26037 Teaching Translation during Covid-19 Outbreak: Challenges and Discoveries
Authors: Rafat Alwazna
Abstract:
Translation teaching is a particular activity that involves training translators and interpreters either inside or outside institutionalised settings, such as universities. It can also serve as a means of teaching other fields, such as foreign languages. Translation teaching began in the twentieth century. Teachers of translation hold the responsibilities of educating students, developing their translation competence and training them to be professional translators. The activity of translation teaching involves various tasks, including curriculum design, course delivery, material writing as well as application and implementation. The present paper addresses translation teaching during the COVID-19 outbreak, seeking to find out the challenges encountered by translation teachers in online translation teaching and the discoveries/solutions arrived at to resolve them. The paper makes use of a comprehensive questionnaire, containing closed-ended and open-ended questions, to elicit both quantitative and qualitative data from about sixty translation teachers who have been teaching translation at BA and MA levels during the COVID-19 outbreak. The data show that about 40% of the participants evaluate their online translation teaching experience during the COVID-19 outbreak as enjoyable and exhilarating. On the other hand, no participant evaluated the experience as not good, nor did any participant describe it as terrible. The data also show that about 23.33% of the participants evaluate their online translation teaching experience as very good, and the same percentage applies to those who evaluate it as good to some extent. Moreover, the data indicate that around 13.33% of the participants evaluate their online translation teaching experience as good. The data also demonstrate that the majority of the participants encountered obstacles in online translation teaching and concurrently proposed solutions to resolve them.
Keywords: online translation teaching, electronic learning platform, COVID-19 outbreak, challenges, solutions
Procedia PDF Downloads 227
26036 Load Forecasting Using Neural Network Integrated with Economic Dispatch Problem
Authors: Mariyam Arif, Ye Liu, Israr Ul Haq, Ahsan Ashfaq
Abstract:
The high cost of fossil fuels and the intensifying installation of alternative energy generation sources are major challenges in power systems, making accurate load forecasting an important and challenging task for optimal energy planning and management on both the distribution and generation sides. There are many techniques to forecast load, but each technique comes with its own limitations and requires data to accurately predict the forecast load. The Artificial Neural Network (ANN) is one such technique to efficiently forecast the load. A comparison between two different ranges of input datasets has been applied to a dynamic ANN technique using the MATLAB Neural Network Toolbox. It has been observed that the selection of input data for training a network has significant effects on the forecasted results. Day-wise input data forecasted the load more accurately than year-wise input data. The forecasted load is then distributed among the six generators by using linear programming to get the optimal point of generation. The algorithm is then verified by comparing the results for each generator with their respective generation limits.
Keywords: artificial neural networks, demand-side management, economic dispatch, linear programming, power generation dispatch
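A hedged sketch of the dispatch step is given below: the forecasted load is split among six generators by linear programming, minimizing a linear cost subject to generator limits and the load balance. The costs, limits and load value are illustrative assumptions, not the paper's data, and the original work used MATLAB rather than Python.

```python
import numpy as np
from scipy.optimize import linprog

forecast_load = 950.0                                  # MW, e.g. output of the ANN forecaster
cost = np.array([10, 12, 15, 18, 20, 24])              # $/MWh per generator (assumed)
p_min = np.array([50, 50, 40, 40, 30, 30])             # MW lower limits (assumed)
p_max = np.array([300, 250, 200, 150, 120, 100])       # MW upper limits (assumed)

res = linprog(c=cost,
              A_eq=np.ones((1, 6)), b_eq=[forecast_load],   # total generation equals the load
              bounds=list(zip(p_min, p_max)),
              method="highs")
print(res.x)   # dispatched MW per generator, each checked against its limits
```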
Procedia PDF Downloads 193
26035 GNSS Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site which is to be used in future planning and development (P&D) or for further examination, exploration, research and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; they are also time consuming and labor intensive, and give less precision with limited data. In comparison, the advanced techniques save manpower and provide more precise output with a wide variety of multiple data sets. In this experiment, an aerial photogrammetry technique is used in which a UAV flies over an area, captures geocoded images and is used to build a three-dimensional model (3-D model). The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. Furthermore, the raw data collected by the UAV and DGPS are processed in various digital image processing programs and computer-aided design software. As output, we obtain a dense point cloud, a digital elevation model (DEM) and an ortho-photo. The imagery is converted into geospatial data by digitizing over the ortho-photo, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
Procedia PDF Downloads 37
26034 Innovative Power Engineering in a Selected Rural Commune
Authors: Pawel Sowa, Joachim Bargiel
Abstract:
This paper presents modern solutions for distributed generation in rural communities aiming at the improvement of energy and environmental security, as well as power supply reliability to important customers (e.g., health care and other sensitive consumers requiring continuity of supply). The distributed sources are mainly gas and biogas cogeneration units, as well as wind and photovoltaic sources. Some examples of their application in a selected Silesian community are given.
Keywords: energy security, mini energy centres, power engineering, power supply reliability
Procedia PDF Downloads 303
26033 Methylene Blue Removal Using NiO Nanoparticles-Sand Adsorption Packed Bed
Authors: Nedal N. Marei, Nashaat Nassar
Abstract:
Many treatment techniques have been used to remove soluble pollutants from wastewater, such as dyes and metal ions, which are found in high amounts in the used water of the textile and tannery industries. The effluents from these industries are complex, containing a wide variety of dyes and other contaminants, such as dispersants, acids, bases, salts, detergents, humectants, oxidants, and others. These techniques can be divided into physical, chemical, and biological methods. Adsorption has been developed as an efficient method for the removal of heavy metals from contaminated water and soil. It is now recognized as an effective method for the removal of both organic and inorganic pollutants from wastewaters. Nanosize materials are new functional materials which offer high surface area and have come up as effective adsorbents. Nano alumina is one of the most important ceramic materials, widely used as an electrical insulator, presenting exceptionally high resistance to chemical agents, as well as giving excellent performance as a catalyst for many chemical reactions, in microelectronics, in membrane applications, and in water and wastewater treatment. In this study, methylene blue (MB) dye has been used as a model dye of textile wastewater in order to prepare a synthetic MB wastewater. NiO nanoparticles were added in a small percentage to the sand packed bed adsorption columns to remove the MB from the synthetic textile wastewater. Moreover, different parameters have been evaluated: the flow of the synthetic wastewater, pH, height of the bed, and percentage of NiO relative to the sand in the packed material. Different mathematical models were employed to find the model which best describes the experimental data and helps to analyze the mechanism of MB adsorption. This study will provide a good understanding of dye adsorption using metal oxide nanoparticles in a classical sand bed.
Keywords: adsorption, column, nanoparticles, methylene
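A hedged sketch of fitting one common packed-bed breakthrough model (the Thomas model, C_t/C_0 = 1 / (1 + exp(k_Th*q_0*m/Q - k_Th*C_0*t))) to measured outlet concentrations is shown below. The abstract does not specify which models were used, and the operating values and data arrays here are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

C0, m, Q = 50.0, 5.0, 0.01        # inlet MB conc. (mg/L), bed mass (g), flow rate (L/min) -- assumed

def thomas_model(t, k_th, q0):
    """Thomas breakthrough model; k_th in L/(mg*min), q0 in mg/g."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

t = np.array([10, 30, 60, 90, 120, 180], dtype=float)           # minutes
ct_over_c0 = np.array([0.02, 0.10, 0.35, 0.60, 0.80, 0.95])     # illustrative breakthrough data

(k_th, q0), _ = curve_fit(thomas_model, t, ct_over_c0, p0=[0.001, 8.0])
print(f"k_Th = {k_th:.4g} L/(mg*min), q0 = {q0:.4g} mg/g")
```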
Procedia PDF Downloads 272