Search results for: ecological binary data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25849

23269 Systematic Review and Meta-Analysis of Mid-Term Survival and Recurrent Mitral Regurgitation for Robotic-Assisted Mitral Valve Repair

Authors: Ramanen Sugunesegran, Michael L. Williams

Abstract:

Over the past two decades, surgical approaches for mitral valve (MV) disease have evolved with the advent of minimally invasive techniques. The safety and efficacy of robotic mitral valve repair (RMVr) have been well documented; however, mid- to long-term data are limited. The aim of this review was to provide a comprehensive analysis of the available mid- to long-term data for RMVr. Electronic searches of five databases were performed to identify all relevant studies reporting minimum 5-year data on RMVr. Pre-defined primary outcomes of interest were overall survival, freedom from MV reoperation and freedom from moderate or worse mitral regurgitation (MR) at 5 years or more post-RMVr. A meta-analysis of proportions or means was performed, utilizing a random effects model, to present the data. Kaplan-Meier curves were aggregated using reconstructed individual patient data. Nine studies totaling 3,300 patients undergoing RMVr were identified. Rates of overall survival at 1, 5 and 10 years were 99.2%, 97.4% and 92.3%, respectively. Freedom from MV reoperation at 8 years post-RMVr was 95.0%. Freedom from moderate or worse MR at 7 years was 86.0%. Rates of early post-operative complications were low, with only 0.2% all-cause mortality and 1.0% cerebrovascular accidents. The rate of reoperation for bleeding was low at 2.2%, and RMVr was successful in 99.8% of cases. Mean intensive care unit and hospital stays were 22.4 hours and 5.2 days, respectively. RMVr is a safe procedure with low rates of early mortality and other complications, and it can be performed with low complication rates in high-volume, experienced centers. Evaluation of the available mid-term data post-RMVr suggests favorable rates of overall survival, freedom from MV reoperation and freedom from moderate or worse recurrent MR.
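
The "meta-analysis of proportions" with a random-effects model that the abstract describes is commonly implemented as DerSimonian-Laird pooling of logit-transformed study proportions. A minimal sketch under that assumption; the study names and counts below are illustrative placeholders, not the review's data:

```python
import math

def pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions (logit scale).
    Assumes 0 < events < totals for every study."""
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]   # logit(p)
    v = [1.0 / e + 1.0 / (n - e) for e, n in zip(events, totals)] # within-study var
    w = [1.0 / vi for vi in v]
    # Fixed-effect estimate and Cochran's Q heterogeneity statistic
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # Random-effects weights, pooled logit, back-transform to a proportion
    w_re = [1.0 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1.0 / (1.0 + math.exp(-pooled))

# Illustrative 5-year survival counts from three hypothetical studies
print(pooled_proportion(events=[95, 180, 290], totals=[100, 185, 300]))
```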

Keywords: mitral valve disease, mitral valve repair, robotic cardiac surgery, robotic mitral valve repair

Procedia PDF Downloads 77
23268 Development of mHealth Information in Community Based on Geographical Information: A Case Study from Saraphi District, Chiang Mai, Thailand

Authors: Waraporn Boonchieng, Ekkarat Boonchieng, Wilawan Senaratana, Jaras Singkaew

Abstract:

A geographical information system (GIS) is a system designed for collecting and analyzing geographical data, and it is widely used for both. Since the introduction of ultra-mobile, 'smart' devices, investigators, clinicians, and even the general public have had powerful new tools for collecting, uploading and accessing information in the field. Epidemiology paired with GIS can increase the efficacy of preventive health care services. The objective of this study is to integrate the GPS location services available on common mobile devices with district health systems, storing data on a private cloud system. The mobile application has been developed for use on iOS, Android, and web-based platforms. The system consists of two parts of district health information, a resident data form and an individual health data form, which were developed and approved through opinion sharing and public hearings. The application's graphical user interface was developed using HTML5 and PHP, with MySQL as the database management system (DBMS). The reporting module of the developed software displays data in a variety of views, from traditional tables to various types of high-resolution, layered graphics, incorporating map location information with street views from Google Maps. Multi-format exporting is also supported, utilizing standard formats such as PDF, PNG, JPG, and XLS. The data were collected in the database beginning in March 2013 by district health volunteers and district youth volunteers who had completed the application training program. District health information consisted of patients' household coordinates, individual health data, and social and economic information; this was combined with Google Street View data collected in March 2014. The studied groups consisted of 16,085 of the total 23,701 households (67.87%) and 47,811 of the total 79,855 people (59.87%) collected by the system in Saraphi district, Chiang Mai Province. The report generated from the system has been of major benefit directly to the Saraphi District Hospital: healthcare providers are able to use the basic health data to provide specific home health care services and also to create health promotion activities according to the medical needs of people in the community.
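
As a rough illustration of the kind of georeferenced household record such a system stores: the real backend described above is PHP with MySQL, but the stdlib sqlite3 module stands in here, and the schema and values are invented for the example:

```python
import sqlite3

# Hypothetical schema standing in for the district health database;
# the production system uses PHP/MySQL, but the shape of the data is the same.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE household (
    id INTEGER PRIMARY KEY,
    lat REAL, lon REAL,          -- GPS coordinates from the mobile device
    members INTEGER,
    income_level TEXT)""")
db.execute("INSERT INTO household (lat, lon, members, income_level) VALUES (?, ?, ?, ?)",
           (18.7169, 99.0378, 4, "middle"))  # an illustrative point near Saraphi
# A report-style query: households per income level
for row in db.execute("SELECT income_level, COUNT(*) FROM household GROUP BY income_level"):
    print(row)
```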

Keywords: health, public health, GIS, geographic information system

Procedia PDF Downloads 324
23267 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. Note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. Mathematica code using Fisher scoring as the iteration method obtains the maximum likelihood estimates (MLE) of the regression parameters.
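
The Fisher-scoring iteration mentioned at the end is the Newton-type update beta <- beta + I(beta)^-1 U(beta), with U the score and I the Fisher information. A minimal sketch using a plain Poisson log-linear model rather than the authors' composite-distribution regression (which is not specified in the abstract):

```python
import numpy as np

def fisher_scoring_poisson(X, y, iters=25):
    # Model: y_i ~ Poisson(mu_i), log mu_i = x_i . beta
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)           # U(beta)
        info = X.T @ (X * mu[:, None])   # Fisher information I(beta)
        beta = beta + np.linalg.solve(info, score)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))
print(fisher_scoring_poisson(X, y))  # should land near [0.5, 0.8]
```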

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 7
23266 Integrated Planning, Designing, Development and Management of Eco-Friendly Human Settlements for Sustainable Development of Environment, Economy, Peace and Society of All Economies

Authors: Indra Bahadur Chand

Abstract:

This paper focuses on the need for the development and application of global protocols and policies in the planning, designing, development, and management of systems of eco-towns and eco-villages, so that sustainable development is assured from the perspective of environmental, economic, peace, and harmonized social dynamics. This perspective is essential for the development of civilized and eco-friendly human settlements in the towns and rural areas of a nation and would be a milestone for developing a happy and sustainable lifestyle in rural and urban communities. The urban population of most towns in developing economies has been increasing tremendously over the past three decades as rural people have migrated to the cities. Consequently, the urban lifestyle in most towns has become stressed in terms of environmental pollution, water crises, congested traffic, energy crises, food crises, and unemployment. Eco-towns and eco-villages should be developed in which the lifestyle of all residents is sustainable and happy. The built-up environment of a settlement should reduce and minimize the problems of non-ecological CO2 emissions, unbalanced utilization of natural resources, environmental degradation, natural calamities, ecological imbalance, energy crises, water scarcity, waste management, food crises, unemployment, and deterioration of cultural heritage and social structures. Indicators such as the ratio of public to private land ownership, the ratio of vegetated land to settlement area, the ratio of people travelling by vehicle to those on foot, the proportion of people employed outside the town or village, the recycling rate of waste materials, the water consumption level, the ratio of people to vehicles, the ratio of road network length to town/village area, the share of renewable energy in total energy consumption, the share of religious/recreational area in the total built-up area, the annual suicide rate, the annual rate of injuries and deaths from traffic accidents, and the proportion of total food consumption met by agro-food production within the town will be used to assist in the design and monitoring of each eco-town and eco-village. Eco-towns and eco-villages should be planned and developed to offer sustainable infrastructure and utilities that manage CO2 levels in individual homes and settlements, home energy use, transport, food and consumer goods, water supply, and waste management, while conserving historical heritage, healthy neighborhoods, natural landscapes, and biodiversity, and developing green infrastructure. Eco-towns and eco-villages should be developed on the basis of master planning and architecture that affect and define the settlement and its form. Master planning and engineering should focus on delivering the sustainability criteria of eco-towns and eco-villages; this will involve working with the specific landscape and natural resources of each locality.

Keywords: eco-town, ecological habitation, master plan, sustainable development

Procedia PDF Downloads 172
23265 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks of digitally embedded infrastructure. Through this research, we analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aiming to create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks associated with hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive Risks: (1) Hardware failures- Critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data- Sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection- Erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity- The weight of the data collected will affect data mobility. (5) Cost inhibitors- Running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active Risks: Denial of service- One of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware- Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computer power and bandwidth from you to attack someone else. Ransomware- A kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing- By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment: it provides separation from the open Internet while remaining accessible via blockchain keys.

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 96
23264 The Influence of Ecologically-Valid High- and Low-Volume Resistance Training on Muscle Strength and Size in Trained Men

Authors: Jason Dellatolla, Scott Thomas

Abstract:

Much of the current literature pertaining to resistance training (RT) volume prescription lacks ecological validity, and very few studies investigate true high-volume ranges. Purpose: The present study sought to investigate the effects of ecologically-valid high- vs. low-volume RT on muscular size and strength in trained men. Methods: This study systematically randomized trained, college-aged men into two groups: low-volume (LVR; n = 4) and high-volume (HVR; n = 5); the sample size was affected by COVID-19 limitations. Subjects followed an ecologically-valid 6-week RT program targeting both muscle size and strength. RT occurred 3x/week on non-consecutive days. Over the course of six weeks, LVR and HVR gradually progressed from 15 to 23 sets/week and 30 to 46 sets/week of lower-body RT, respectively. Muscle strength was assessed via 3RM tests in the squat, stiff-leg deadlift (SL DL), and leg press. Muscle hypertrophy was evaluated through a combination of DXA, BodPod, and ultrasound (US) measurements. Results: Two-way repeated-measures ANOVAs indicated that strength in all 3 compound lifts increased significantly in both groups (p < 0.01); between-group differences only occurred in the squat (p = 0.02) and SL DL (p = 0.03), both of which favored HVR. Significant pre-to-post-study increases in indicators of hypertrophy were found for lean body mass in the legs via DXA, overall fat-free mass via BodPod, and US measures of muscle thickness (MT) for the rectus femoris, vastus intermedius, vastus medialis, vastus lateralis, long head of the biceps femoris, and total MT. Between-group differences were only found for MT of the vastus medialis, favoring HVR. Moreover, each additional weekly set of lower-body RT was associated with an average increase in MT of 0.39% in the thigh muscles. Conclusion: We conclude that ecologically-valid RT regimens significantly improve muscular strength and indicators of hypertrophy. When HVR is compared to LVR, HVR provides significantly greater gains in muscular strength but has no greater effect on hypertrophy over the course of 6 weeks in trained, college-aged men.

Keywords: ecological validity, hypertrophy, resistance training, strength

Procedia PDF Downloads 106
23263 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques to support social work interventions and can help practitioners make decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients or individuals. Machine learning techniques comprise a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict what a certain outcome would be, based on a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem the technique needs to tackle. The first, supervised learning, involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which involve prediction of quantitative variables using a continuous function, and classification problems, which seek to predict results for discrete qualitative variables. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.
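
A logistic classifier of the kind described can be sketched in a few lines of scikit-learn. The feature list follows the abstract, but the data below are synthetic placeholders, not the Chilean survey:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
# Synthetic stand-ins for gender, age, grade, school type, self-esteem
X = np.column_stack([
    rng.integers(0, 2, n),    # gender
    rng.integers(12, 18, n),  # age
    rng.integers(7, 13, n),   # grade
    rng.integers(0, 3, n),    # type of school
    rng.normal(0, 1, n),      # self-esteem score
])
y = rng.integers(0, 2, n)     # cyberbullying victim yes/no (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))  # cf. the reported 59.8%
```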

Keywords: cyberbullying, evidence based practice, machine learning, social work research

Procedia PDF Downloads 161
23262 Characterization of Internet Exchange Points by Using Quantitative Data

Authors: Yamba Dabone, Tounwendyam Frédéric Ouedraogo, Pengwendé Justin Kouraogo, Oumarou Sie

Abstract:

Reliable data transport over the Internet is one of the goals of researchers in the field of computer science. Data such as videos and audio files are becoming increasingly large. As a result, transporting them over the Internet is becoming difficult. Therefore, it has been important to establish a method to locally interconnect autonomous systems (AS) with each other to facilitate traffic exchange. It is in this context that Internet Exchange Points (IXPs) are set up to facilitate local and even regional traffic; they are now the lifeblood of the Internet. It is therefore important to think about the factors that can characterize IXPs. Beyond qualitative factors, more quantifiable characteristics can help determine the quality of an IXP. In addition, these characteristics may allow ISPs to have a clearer view of the exchange node and may also convince other networks to connect to an IXP. To that end, we define six new IXP characteristics: the attraction rate (τₐₜₜᵣ); the peering rate (τₚₑₑᵣ); the target rate of an IXP (Objₐₜₜ); the number of IXP links (Nₗᵢₙₖ); the resistance rate (τₑ𝒻𝒻); and the attraction failure rate (τ𝒻).

Keywords: characteristic, autonomous system, internet service provider, internet exchange point, rate

Procedia PDF Downloads 85
23261 Statistic Regression and Open Data Approach for Identifying Economic Indicators That Influence e-Commerce

Authors: Apollinaire Barme, Simon Tamayo, Arthur Gaudron

Abstract:

This paper presents a statistical approach to identify explanatory variables linearly related to e-commerce sales. The proposed methodology allows specifying a regression model in order to quantify the relevance between openly available data (economic and demographic) and national e-commerce sales. The methodology consists of collecting data, preselecting input variables, performing regressions to choose variables and models, and testing and validating. The usefulness of the proposed approach is twofold: on the one hand, it identifies the variables that influence e-commerce sales with an accessible approach; on the other hand, it can be used to model future sales from the input variables. Results show that e-commerce is linearly dependent on 11 economic and demographic indicators.
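
A hedged sketch of the "regressions for choosing variables" step with statsmodels OLS; the indicator names and data below are invented placeholders for the open economic and demographic data, not the paper's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder open data: rows are periods/regions, columns are indicators
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(50, 3)),
                  columns=["internet_penetration", "median_income", "urban_share"])
df["ecommerce_sales"] = (2.0 * df["internet_penetration"]
                         + 1.0 * df["median_income"]
                         + rng.normal(scale=0.5, size=50))

X = sm.add_constant(df[["internet_penetration", "median_income", "urban_share"]])
model = sm.OLS(df["ecommerce_sales"], X).fit()
print(model.summary())  # coefficient p-values guide which variables to keep
```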

Keywords: e-commerce, statistical modeling, regression, empirical research

Procedia PDF Downloads 214
23260 A Reasoning Method of Cyber-Attack Attribution Based on Threat Intelligence

Authors: Li Qiang, Yang Ze-Ming, Liu Bao-Xu, Jiang Zheng-Wei

Abstract:

With the increasing complexity of cyberspace security, cyber-attack attribution has become an important challenge for security protection systems. The difficulties of cyber-attack attribution center on the problems of handling huge amounts of data and missing key data. In this situation, this paper presents a reasoning method for cyber-attack attribution based on threat intelligence. The method utilizes the intrusion kill chain model and a Bayesian network to build the attack chain and evidence chain of a cyber-attack on a threat intelligence platform through data calculation, analysis and reasoning. We then used a number of cyber-attack events that we had observed and analyzed to test the reasoning method and a demo system; the test results indicate that the reasoning method can provide effective help in cyber-attack attribution.
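
At its core, Bayesian-network reasoning over an evidence chain is repeated application of Bayes' rule. A toy hand-rolled sketch that assumes conditionally independent evidence (a naive-Bayes simplification of the paper's network); the attacker priors and likelihoods are invented for illustration:

```python
# Toy posterior over candidate attackers given observed kill-chain evidence.
priors = {"APT-A": 0.3, "APT-B": 0.5, "crime-group": 0.2}
# P(evidence | attacker) for two observed indicators (illustrative numbers)
likelihoods = {
    "spearphishing": {"APT-A": 0.8, "APT-B": 0.4, "crime-group": 0.6},
    "custom-malware": {"APT-A": 0.7, "APT-B": 0.2, "crime-group": 0.1},
}

posterior = dict(priors)
for evidence, table in likelihoods.items():
    # Multiply in the likelihood of each observed indicator, then renormalize
    posterior = {a: p * table[a] for a, p in posterior.items()}
    z = sum(posterior.values())
    posterior = {a: p / z for a, p in posterior.items()}

print(posterior)  # normalized attribution probabilities
```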

Keywords: reasoning, Bayesian networks, cyber-attack attribution, Kill Chain, threat intelligence

Procedia PDF Downloads 440
23259 A Pre-Assessment Questionnaire to Identify Healthcare Professionals’ Perception on Information Technology Implementation

Authors: Y. Atilgan Şengül

Abstract:

Health information technologies promise higher quality and safer care, and much more, for both patients and professionals. Despite their promise, they are costly to develop and difficult to implement. Moreover, user acceptance and usage determine the success of implemented information technology in healthcare. This study provides a model to understand health professionals' perceptions and expectations of health information technology. An extensive literature review was conducted to determine the main factors to be measured. A questionnaire was designed as the measurement model and submitted to the personnel of an in vitro fertilization clinic. The respondents' degree of agreement on a five-point Likert scale was 72% for convenient access to data and 69.4% for the importance of data security. There was a significant difference in acceptance of electronic data storage among female respondents, and other significant differences between professions were also observed.

Keywords: healthcare, health informatics, medical record system, questionnaire

Procedia PDF Downloads 161
23258 Validation of Electrical Field Effect on Electrostatic Desalter Modeling with Experimental Laboratory Data

Authors: Fatemeh Yazdanmehr, Iulian Nistor

Abstract:

The scope of the current study is the evaluation of the electric field effect on electrostatic desalting mathematical modeling using laboratory data. This research was focused on developing a model for an existing operational desalting unit of one of the Iranian heavy oil fields with a production capacity of 75 MBPD. A high inlet oil temperature to the dehydration unit reduces oil recovery, so mathematical modeling of the desalter operating parameters is very significant. The existing production unit's operating data were used to check the accuracy of the mathematical desalting plant model. The inlet oil temperature to the desalter was decreased from 110 to 80°C, and the desalter electrical field was increased from 0.75 to 2.5 kV/cm. The model results show that the desalter parameter changes meet the water-oil specification, and also that oil production, and consequently annual income, is increased. In addition, changing the desalter operating conditions reduces the environmental footprint because of flare gas reduction. To verify the accuracy of the selected electrostatic desalter electrical field, laboratory data were used. Experimental data are used to confirm the effect of the electrical field change on the desalter; therefore, a lab test was done on a crude oil sample. The results include the dehydration efficiency in the presence of a demulsifier under electrical field (0.75 kV) conditions at various temperatures. Comparing the lab experimental results with the electrostatic desalter mathematical model results shows an acceptable error of 1-3 percent, which confirms the validity of the desalter specification and operating condition changes.

Keywords: desalter, electrical field, demulsification, mathematical modeling, water-oil separation

Procedia PDF Downloads 126
23257 Privacy Preserving Medical Decision Support Structure via C5 Algorithm

Authors: Swati Kishor Zode, Rahul Ambekar

Abstract:

Data mining is the extraction of interesting patterns or information from enormous amounts of data, with decisions made according to the relevant information extracted. Recently, with the explosive growth of the internet and of data storage and processing techniques, privacy preservation has become one of the major concerns in data mining, and various techniques and methods have been developed for privacy-preserving data mining. In the scenario of a Clinical Decision Support System, the decision is to be made on the basis of data extracted from remote servers over the Internet to diagnose the patient. In this paper, the fundamental idea is to increase the accuracy of the Decision Support System for multiple diseases while protecting patient data during communication between the clinician side (client side) and the server side. A privacy-preserving protocol for a clinical decision support network is proposed so that patient information always remains encrypted during the diagnosis process while the accuracy is maintained. To enhance the accuracy of the Decision Support System for various diseases, C5.0 classifiers are used, and to preserve privacy, a homomorphic encryption algorithm, the Paillier cryptosystem, is utilized.
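
The Paillier cryptosystem named at the end is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, which is what lets a server compute on encrypted patient data. A toy sketch with insecure textbook parameters (real deployments use keys of 2048+ bits):

```python
import math
import random

def keygen(p=10007, q=10009):  # toy primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    n2 = n * n
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse of L(g^lam mod n^2)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

pub, priv = keygen()
# Homomorphic addition: ciphertext product decrypts to plaintext sum
c = (encrypt(pub, 20) * encrypt(pub, 22)) % (pub[0] ** 2)
print(decrypt(pub, priv, c))  # 42, computed without ever decrypting the inputs
```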

Keywords: classification, homomorphic encryption, clinical decision support, privacy

Procedia PDF Downloads 324
23256 Framework to Quantify Customer Experience

Authors: Anant Sharma, Ashwin Rajan

Abstract:

Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a key performance indicator (referred to as KPI in the rest of the paper), this approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is reflected not only in metrics like customer satisfaction or Net Promoter Score but also in other measurements like recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view: rather than rolling customers up to a metric, we roll metrics up to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources that contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. This data can be mined systematically to get linkages between different data points like geographies, business groups, products and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom types of reports. We have created a framework that allows us to measure customer experience using the above logic.
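
The "roll metrics up to hierarchies" idea can be sketched as a grouped aggregation; the column names, data, and composite weights below are illustrative assumptions, not the framework's definitions:

```python
import pandas as pd

# Illustrative per-customer metric snapshots across a journey
df = pd.DataFrame({
    "geography": ["EMEA", "EMEA", "APAC", "APAC"],
    "product":   ["suiteA", "suiteB", "suiteA", "suiteB"],
    "csat":      [4.2, 3.8, 4.5, 3.1],
    "nps":       [40, 22, 55, 10],
    "renewed":   [1, 0, 1, 1],
})

# Roll metrics up the geography -> product hierarchy instead of rolling
# customers up to a single KPI
rollup = df.groupby(["geography", "product"]).agg(
    csat=("csat", "mean"), nps=("nps", "mean"), renewal_rate=("renewed", "mean"))

# A naive composite experience score; the weights are assumptions
rollup["experience_score"] = (0.4 * rollup["csat"] / 5
                              + 0.3 * (rollup["nps"] + 100) / 200
                              + 0.3 * rollup["renewal_rate"])
print(rollup)
```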

Keywords: analytics, customers experience, BI, business operations, KPIs, metrics

Procedia PDF Downloads 62
23255 Modelling the Indonesian Government Securities Yield Curve Using Nelson-Siegel-Svensson and Support Vector Regression

Authors: Jamilatuzzahro, Rezzy Eko Caraka

Abstract:

The yield curve is the plot of the yield to maturity of zero-coupon bonds against maturity. In practice, the yield curve is not observed directly but must be extracted from observed bond prices for a set of (usually) incomplete maturities. Many methodologies and theories exist for analyzing the yield curve. We use two approaches, the Nelson-Siegel-Svensson (NSS) method and the support vector regression (SVR) method, to construct and compare our zero-coupon yield curves. The objectives of this research were: (i) to study the adequacy of the NSS model and SVR for Indonesian government bond data, and (ii) to choose the best optimization or estimation method for the NSS model and SVR. To achieve these objectives, the research proceeded through the following steps: data preparation, cleaning or filtering of the data, modeling, and model evaluation.
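
The Nelson-Siegel-Svensson curve itself has a closed form: y(τ) = β₀ + β₁(1-e^(-τ/λ₁))/(τ/λ₁) + β₂[(1-e^(-τ/λ₁))/(τ/λ₁) - e^(-τ/λ₁)] + β₃[(1-e^(-τ/λ₂))/(τ/λ₂) - e^(-τ/λ₂)]. A direct implementation; the parameter values below are illustrative, not fitted to Indonesian data:

```python
import numpy as np

def nss_yield(tau, b0, b1, b2, b3, lam1, lam2):
    """Nelson-Siegel-Svensson zero-coupon yield at maturity tau (years)."""
    x1 = tau / lam1
    x2 = tau / lam2
    slope = (1 - np.exp(-x1)) / x1            # short-end loading
    curv1 = slope - np.exp(-x1)               # first hump
    curv2 = (1 - np.exp(-x2)) / x2 - np.exp(-x2)  # second hump (Svensson term)
    return b0 + b1 * slope + b2 * curv1 + b3 * curv2

maturities = np.array([0.5, 1, 2, 5, 10, 20])
print(nss_yield(maturities, b0=0.07, b1=-0.02, b2=0.01, b3=0.005,
                lam1=1.5, lam2=8.0))
```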

Keywords: support vector regression, Nelson-Siegel-Svensson, yield curve, Indonesian government

Procedia PDF Downloads 233
23254 Influencers of E-Learning Readiness among Palestinian Secondary School Teachers: An Explorative Study

Authors: Fuad A. A. Trayek, Tunku Badariah Tunku Ahmad, Mohamad Sahari Nordin, Mohammed AM Dwikat

Abstract:

This paper reports the results of an exploratory factor analysis procedure applied to e-learning readiness data obtained from a survey of four hundred and seventy-nine (N = 479) teachers from secondary schools in Nablus, Palestine. The data were drawn from a 23-item Likert questionnaire measuring e-learning readiness based on Chapnick's conception of the construct. Principal axis factoring (PAF) with Promax rotation applied to the data extracted four distinct factors supporting four of Chapnick's e-learning readiness dimensions, namely technological readiness, psychological readiness, infrastructure readiness and equipment readiness. Together these four dimensions explained 56% of the variance. These findings provide further support for the construct validity of the items and for the existence of these four factors measuring e-learning readiness.
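
The PAF-with-Promax procedure maps onto the factor_analyzer package; a hedged sketch, assuming that package's API, where the 479 x 23 response matrix is a random placeholder rather than the survey:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder responses: 479 teachers x 23 five-point Likert items
rng = np.random.default_rng(7)
data = pd.DataFrame(rng.integers(1, 6, size=(479, 23)))

# Principal axis factoring with Promax (oblique) rotation, four factors
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa.fit(data)
print(pd.DataFrame(fa.loadings_).round(2))  # item loadings per factor
print(fa.get_factor_variance())             # variance explained per factor
```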

Keywords: e-learning, e-learning readiness, technological readiness, psychological readiness, principal axis factoring

Procedia PDF Downloads 389
23253 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement

Authors: Rhadinia Tayag-Relanes, Felina C. Young

Abstract:

This study analyzed the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the PDCA approach and record review to gather data for the calendar year 2019, from August to October, on the noodle products miki, canton, and misua. Causal-comparative research was used, attempting to establish cause-effect relationships among the variables; descriptive statistics and correlation were used to analyze the data gathered. The study found that miki, canton, and misua production have different cycle times for each production run, different production outputs in every set of the production process, and different amounts of wastage. The company has not yet established an allowable rejection/wastage rate; instead, this paper used a 1% wastage limit. The researchers recommended the following: the machines used for each process of noodle production must be consistently maintained and monitored; all production operators should be assessed by checking their performance statistically based on output and machine performance; a root cause analysis should be conducted to find solutions; and an improvement to the recording system for the inputs and outputs of the noodle production process should be established to eliminate poor recording of data.

Keywords: production, continuous improvement, process, operations, PDCA

Procedia PDF Downloads 52
23252 The Culex Pipiens Niche: Assessment with Climatic and Physiographic Variables via a Geographic Information System

Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, João Casaca

Abstract:

Using a geographic information system (GIS), the relations between a georeferenced data set of Culex pipiens s.l. mosquitoes collected in mainland Portugal over seven years (2006-2012) and meteorological and physiographic parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures), daily total rainfall, altitude, land use/land cover and proximity to water bodies are evaluated. The focus is on the mosquito females; the characterization of their habitat is the key to planning surgical, non-aggressive prophylactic countermeasures that avoid environmental degradation. The GIS allows for the spatial determination of the zones where mean mosquito captures have been above average; using the meteorological values at these coordinates, the limits of each parameter are identified and computed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the thresholds obtained for each parameter. The intersection of the maps obtained for each month shows the evolution of the area favorable to the species through the mosquito season, which runs from May to October at these latitudes. In parallel, mean and above-average captures were related to the physiographic parameters. Three levels of risk could be identified for each parameter, using above-average captures as an index, and the results were applied to the meteorological suitability maps of each month. The Culex pipiens critical niche is thus delimited, reflecting the critical areas and the level of risk for transmission of the pathogens for which they are competent vectors (West Nile virus, iridoviruses, reoviruses and parvoviruses).
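
The map-segmentation step, intersecting monthly rasters thresholded by each parameter's favourable range, reduces to element-wise logical operations. A numpy sketch in which the rasters are random grids and the thresholds are invented, not the study's values:

```python
import numpy as np

# Placeholder monthly rasters (2-D grids interpolated from weather stations)
rng = np.random.default_rng(3)
mean_temp = rng.uniform(5, 35, size=(100, 100))
rel_humidity = rng.uniform(20, 100, size=(100, 100))
rainfall = rng.uniform(0, 30, size=(100, 100))

# Thresholds would come from conditions at above-average capture sites;
# the numbers here are illustrative only
favourable = (
    (mean_temp > 15) & (mean_temp < 30)
    & (rel_humidity > 50)
    & (rainfall < 20)
)
print(f"{favourable.mean():.1%} of the area is favourable this month")
```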

Keywords: Culex pipiens, ecological niche, risk assessment, risk management

Procedia PDF Downloads 529
23251 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications

Authors: Jongbae Lee, Seongsoo Lee

Abstract:

Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for price reduction. The SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, configuration register, tick period generator, CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffer. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator generates tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffer stores transmitted/received data. The designed SENT interface can send or receive digital data at 25~65 kbps with a 3 us tick. Synthesized in a 0.18 um fabrication technology, it requires about 2,500 gates.
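
A sketch of the nibble CRC that the generator/checker block computes. SAE J2716 is commonly described as using a 4-bit CRC with polynomial x⁴+x³+x²+1 and seed 0b0101; treat those parameters and the trailing-zero-nibble augmentation as assumptions here and consult the standard for the exact recommended implementation:

```python
def sent_crc4(nibbles, poly=0b11101, seed=0b0101):
    """Bit-serial CRC-4 over 4-bit data nibbles, legacy-style augmentation
    (one trailing zero nibble); parameters per commonly cited J2716 values."""
    crc = seed
    for nib in list(nibbles) + [0]:
        for bit in (3, 2, 1, 0):       # feed nibble bits MSB-first
            crc = (crc << 1) | ((nib >> bit) & 1)
            if crc & 0x10:             # 5th bit set: reduce by the polynomial
                crc ^= poly
    return crc & 0xF

frame = [0x3, 0xA, 0x5, 0xC, 0x1, 0x7]  # six data nibbles, as in a fast channel
print(hex(sent_crc4(frame)))
```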

Keywords: digital sensor interface, SAE J2716, SENT, Verilog HDL

Procedia PDF Downloads 288
23250 Teaching Translation during Covid-19 Outbreak: Challenges and Discoveries

Authors: Rafat Alwazna

Abstract:

Translation teaching is a particular activity that includes training translators and interpreters either inside or outside institutionalised settings, such as universities. It can also serve as a means of teaching other fields, such as foreign languages. Translation teaching began in the twentieth century. Teachers of translation hold the responsibilities of educating students, developing their translation competence and training them to be professional translators. The activity of translation teaching involves various tasks, including curriculum design, course delivery, material writing as well as application and implementation. The present paper addresses translation teaching during the COVID-19 outbreak, seeking to find out the challenges encountered by translation teachers in online translation teaching and the discoveries/solutions arrived at to resolve them. The paper makes use of a comprehensive questionnaire containing closed-ended and open-ended questions to elicit both quantitative and qualitative data from about sixty translation teachers who taught translation at BA and MA levels during the COVID-19 outbreak. The data show that about 40% of the participants evaluate their online translation teaching experience during the COVID-19 outbreak as enjoyable and exhilarating, while no participant evaluated the experience as not good or as terrible. The data also show that about 23.33% of the participants evaluate their online translation teaching experience as very good, and the same percentage applies to those who evaluate it as good to some extent. Moreover, around 13.33% of the participants evaluate their online translation teaching experience as good. The data also demonstrate that the majority of the participants encountered obstacles in online translation teaching and concurrently proposed solutions to resolve them.

Keywords: online translation teaching, electronic learning platform, COVID-19 outbreak, challenges, solutions

Procedia PDF Downloads 214
23249 Load Forecasting Using Neural Network Integrated with Economic Dispatch Problem

Authors: Mariyam Arif, Ye Liu, Israr Ul Haq, Ahsan Ashfaq

Abstract:

The high cost of fossil fuels and the intensifying installation of alternative energy generation sources are major challenges in power systems, making accurate load forecasting an important and challenging task for optimal energy planning and management on both the distribution and generation sides. There are many techniques to forecast load, but each technique comes with its own limitations and requires data to accurately predict the load. An Artificial Neural Network (ANN) is one such technique to efficiently forecast the load. A comparison between two different ranges of input datasets was applied to a dynamic ANN technique using the MATLAB Neural Network Toolbox. It was observed that the selection of input data for training a network has significant effects on the forecasted results: day-wise input data forecasted the load accurately as compared to year-wise input data. The forecasted load is then distributed among six generators by using linear programming to find the optimal point of generation. The algorithm is then verified by comparing the results for each generator with its respective generation limits.
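
The dispatch step, distributing the forecasted load among six generators by linear programming, can be sketched with scipy; the costs and generator limits below are invented placeholders, and the load would come from the trained ANN:

```python
from scipy.optimize import linprog

forecast_load = 800.0  # MW, e.g. output of the trained ANN forecaster
cost = [20, 22, 25, 28, 30, 35]             # $/MWh per generator (assumed)
limits = [(50, 200), (50, 200), (40, 150),
          (40, 150), (30, 120), (30, 120)]  # (min, max) MW per generator

# Minimize total cost subject to: outputs sum to the forecast load,
# each generator staying within its limits
res = linprog(c=cost, A_eq=[[1] * 6], b_eq=[forecast_load],
              bounds=limits, method="highs")
print(res.x)  # optimal MW output of each generator
```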

Keywords: artificial neural networks, demand-side management, economic dispatch, linear programming, power generation dispatch

Procedia PDF Downloads 177
23248 Problems and Challenges in Social Economic Research after COVID-19: The Case Study of Province Sindh

Authors: Waleed Baloch

Abstract:

This paper investigates the problems and challenges in socio-economic research through a case study of the province of Sindh after the COVID-19 pandemic; the pandemic has significantly impacted various aspects of society and the economy, necessitating a thorough examination of the resulting implications. The study also investigates potential strategies and solutions to mitigate these challenges, ensuring the continuation of robust social and economic research in the region. Through an in-depth analysis of data and interviews with key stakeholders, the study reveals several significant findings. Firstly, researchers encountered difficulties in accessing primary data due to disruptions caused by the pandemic, leading to limitations in the scope and accuracy of their studies. Secondly, the study highlights the challenges faced in conducting fieldwork, such as restrictions on travel and face-to-face interactions, which impacted the ability to gather reliable data. Lastly, the research identifies the need for innovative research methodologies and digital tools to adapt to the new research landscape brought about by the pandemic. The study concludes by proposing recommendations to address these challenges, including utilizing remote data collection methods, leveraging digital technologies for data analysis, and establishing collaborations among researchers to overcome resource constraints. By addressing these issues, researchers in the socio-economic field can effectively navigate the post-COVID-19 research landscape, facilitating a deeper understanding of the socioeconomic impacts and evidence-based policy interventions.

Keywords: social economic, sociology, developing economies, COVID-19

Procedia PDF Downloads 53
23247 Smart Meter Incorporating UWB Technology

Authors: T. A. Khan, A. B. Khan, M. Babar, T. A. Taj, Imran Ijaz Imran

Abstract:

The smart meter is a key element in the evolving concept of the smart grid, playing an important role in the interaction between the consumer and the supplier. In general, a smart meter is an intelligent digital energy meter that measures the consumption of electrical energy and provides additional services compared to conventional energy meters. One of the important elements that makes a meter smart and different is its communication module. Smart meters usually have two-way, real-time communication between the consumer and the supplier, through which they transfer data and information. In this paper, Ultra Wide Band (UWB) is recommended as the communication platform because of its high data rate, and the physical layer, which could easily be incorporated into existing smart meters, is presented. The physical layer is simulated in MATLAB Simulink and the results are provided.

Keywords: Ultra Wide Band (UWB), Smart Meter, MATLAB, transfer data

Procedia PDF Downloads 504
23246 Qualitative Approaches to Mindfulness Meditation Practices in Higher Education

Authors: Patrizia Barroero, Saliha Yagoubi

Abstract:

Mindfulness meditation practices in the context of higher education are becoming more and more common. Some of the reported benefits of meditation interventions and workshops include improved focus, general well-being, diminished stress, and even increased resilience and grit. A series of workshops free to students, faculty, and staff was offered twice a week over two semesters at Hudson County Community College, New Jersey. The results of an exploratory study based on participants' subjective reactions to these workshops will be presented. A qualitative approach was used to collect and analyze the data, and a hermeneutic phenomenological perspective served as a framework for the research design and data collection and analysis. The data collected include three recorded videos of semi-structured interviews and several written surveys submitted by volunteer participants.

Keywords: mindfulness meditation practices, stress reduction, resilience, grit, higher education success, qualitative research

Procedia PDF Downloads 64
23245 Integrated Nested Laplace Approximations for Quantile Regression

Authors: Kajingulu Malandala, Ranganai Edmore

Abstract:

The asymmetric Laplace distribution (ALD) is commonly used as the likelihood function in Bayesian quantile regression, and it offers different families of likelihood methods for quantile regression. Notwithstanding its popularity and practicality, the ALD is not smooth, which makes it difficult to maximize its likelihood. Furthermore, Bayesian inference is time-consuming, and the selection of the likelihood may mislead the inference, as Bayes' theorem does not automatically establish the posterior inference. The ALD also does not account for greater skewness and kurtosis. This paper develops a new quantile regression approach for count data based on the inverse of the cumulative distribution function of the Poisson, binomial and Delaporte distributions, using integrated nested Laplace approximations (INLA). Our results validate the benefit of using integrated nested Laplace approximations and support the approach for count data.
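
The "inverse of the cumulative distribution function of the Poisson" idea can be illustrated directly: given a fitted conditional mean μ(x), the q-th conditional quantile of the count is the smallest k with F(k; μ) ≥ q. A scipy sketch in which the log-linear predictor and coefficients are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np
from scipy.stats import poisson

def poisson_quantile(q, x, beta):
    """q-th conditional quantile of a Poisson count with log-linear mean."""
    mu = np.exp(x @ beta)
    return poisson.ppf(q, mu)  # inverse CDF: smallest k with F(k) >= q

beta = np.array([0.2, 0.6])   # illustrative coefficients
x = np.array([1.0, 1.5])      # intercept + one covariate
for q in (0.25, 0.5, 0.75, 0.9):
    print(q, poisson_quantile(q, x, beta))
```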

Keywords: quantile regression, Delaporte distribution, count data, integrated nested Laplace approximation

Procedia PDF Downloads 153
23244 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health

Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik

Abstract:

Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can promote the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of the body's reactions, and they offer an opportunity to objectively judge the internal structure of the body's adaptive reactions at the level of individual organs and systems. In order to obtain a stable response of the body to the chronic effects of unfavorable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot support the main conclusions of any work. A technique is needed to reduce methodological errors and combine mathematical logic, statistical methods and the medical point of view, which ultimately affects the obtained results and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair: the required interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined by the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the share of influence (R2) of the main factor (argument) per indicator (function), as a percentage, were calculated. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally different results were obtained in the mathematical data processing when the «lag time» was considered: namely, an expressed correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest percentage of the influencing factor. Similar results were observed in relation to the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional one (without considering the «lag time»). The results differ significantly and are more amenable to a logical explanation of the obtained dependencies. The method allows the quantitative and qualitative dependences within the «environment-public health» system to be presented in a different way.
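
The «lag time» is found from the cross-correlation function: correlate the exposure series against the morbidity series shifted by each candidate lag and keep the lag with the largest coefficient. A minimal sketch on synthetic annual series with a built-in 4-year lag (mirroring the dust example; the data are not the study's):

```python
import numpy as np

rng = np.random.default_rng(11)
years = 25
dust = rng.normal(size=years)  # exposure series (the "argument")
# Morbidity responds 4 years later, plus noise; np.roll wraps at the edges,
# which is acceptable for this illustration
morbidity = np.roll(dust, 4) + 0.3 * rng.normal(size=years)

def best_lag(exposure, outcome, max_lag=8):
    n = len(exposure)
    corrs = {}
    for lag in range(max_lag + 1):
        corrs[lag] = np.corrcoef(exposure[: n - lag], outcome[lag:])[0, 1]
    return max(corrs, key=corrs.get), corrs

lag, corrs = best_lag(dust, morbidity)
print(lag, round(corrs[lag], 2))  # expect the built-in 4-year lag to win
```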

Keywords: ecology, morbidity, population, lag time

Procedia PDF Downloads 70
23243 Measuring Flood Risk Concerning the Flood Protection Embankment in Big Flooding Events of the Dhaka Metropolitan Zone

Authors: Marju Ben Sayed, Shigeko Haruyama

Abstract:

Among all kinds of natural disasters, floods are a common feature of the rapidly urbanizing Dhaka city. In this research, the flood risk of the Dhaka metropolitan area has been assessed using an integrated approach of GIS, remote sensing and socio-economic data. The purpose of the study is to measure the flood risk associated with the flood protection embankment in the big flooding events (1988, 1998 and 2004) and the urbanization of the Dhaka metropolitan zone. In this research, we divided Dhaka city into two parts: East Dhaka (outside the flood protection embankment) and West Dhaka (inside the flood protection embankment). Using statistical data, we explored the socio-economic status of the study area population by comparing population density, land price and income level. We drew the cross-section profile of the flood protection embankment at three different points to capture the flood risk in the study area, especially in the big flooding years (1988, 1998 and 2004). According to the physical condition of the study area, the land use/land cover map was classified into five classes. The flood risk was evaluated by comparing each land cover unit with historical weather station data and the socio-economic data. Moreover, we compared DEM data with each land cover unit to find its relationship with flooding. It is expected that this study could contribute to effective flood forecasting, relief and emergency management for future flood events in Dhaka city.

Keywords: land use, land cover change, socio-economic, Dhaka city, GIS, flood

Procedia PDF Downloads 284
23242 Iterative Method for Lung Tumor Localization in 4D CT

Authors: Sarah K. Hagi, Majdi Alnowaimi

Abstract:

In the last decade, there have been immense advancements in medical imaging modalities, which can now scan the whole volume of the lung in high-resolution images within a short time. With this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advancements therefore provide large opportunities to advance all available types of lung cancer treatment and will increase the survival rate. However, lung cancer is still one of the major causes of death, involving around 19% of all cancer patients. Several factors may affect the survival rate; one serious effect is the breathing process, which can affect the accuracy of diagnosis and the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor positions across all respiratory data during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) data, using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using a 12-degrees-of-freedom affine transformation. Two data sets were used in this study: a computer-simulated 4D CT using the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The results and error calculation are presented as the root mean square error (RMSE); the average error across the data sets is 0.94 ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is introduced. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
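
A 12-degrees-of-freedom affine transformation amounts to a 3x3 matrix (9 parameters) plus a 3-vector translation (3 parameters). Applying it to the phase-1 tumor position and scoring against a reference with RMSE looks like the sketch below; the matrix, translation, and "ground truth" are placeholders, since in practice the transform is estimated per phase by registration:

```python
import numpy as np

def apply_affine(points, A, t):
    """12-DOF affine: 3x3 matrix A (9 params) + translation t (3 params)."""
    return points @ A.T + t

def rmse(estimated, truth):
    return np.sqrt(np.mean(np.sum((estimated - truth) ** 2, axis=1)))

tumor_phase1 = np.array([[120.0, 88.0, 45.0]])  # segmented centroid, mm
A = np.eye(3) + 0.01 * np.random.default_rng(5).normal(size=(3, 3))
t = np.array([0.5, -1.2, 3.0])                  # mostly breathing motion in z

predicted = apply_affine(tumor_phase1, A, t)
ground_truth = predicted + 0.5                   # pretend annotated reference
print(rmse(predicted, ground_truth))             # error in mm, cf. 0.94 mm
```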

Keywords: automated algorithm, computed tomography, lung tumor, tumor localization

Procedia PDF Downloads 594
23241 Incorporating Anomaly Detection in a Digital Twin Scenario Using Symbolic Regression

Authors: Manuel Alves, Angelica Reis, Armindo Lobo, Valdemar Leiras

Abstract:

In Industry 4.0, it is common to have a lot of sensor data, and in this deluge of data, hints of possible problems are difficult to spot. The digital twin concept aims to help answer this problem, but it is mainly used as a monitoring tool to handle the visualisation of data. Failure detection is of paramount importance in any industry, and it consumes a lot of resources, so any improvement in this regard is of tangible value to the organisation. The aim of this paper is to add the ability to forecast test failures, curtailing detection times. To achieve this, several anomaly detection algorithms were compared with a symbolic regression approach: Isolation Forest, One-Class SVM and an auto-encoder were explored, and for the symbolic regression the PySR library was used. The first results show that this approach is valid and can be added to the tools available in this context as a low-resource anomaly detection method since, after training, the only requirement is the calculation of a polynomial, a useful feature in the digital twin context.
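
Of the baselines compared, Isolation Forest is the most compact to sketch; scikit-learn's implementation on synthetic sensor data is shown below (the symbolic-regression alternative would instead fit a PySR expression to the healthy signal and flag large residuals). The data and contamination rate are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(9)
normal = rng.normal(0, 1, size=(500, 3))  # healthy sensor readings
faults = rng.normal(5, 1, size=(10, 3))   # injected failures
X = np.vstack([normal, faults])

clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = clf.predict(X)                   # -1 = anomaly, 1 = normal
print((labels[-10:] == -1).sum(), "of 10 injected faults flagged")
```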

Keywords: anomaly detection, digital twin, industry 4.0, symbolic regression

Procedia PDF Downloads 108
23240 Re-Constructing the Research Design: Dealing with Problems and Re-Establishing the Method in User-Centered Research

Authors: Kerem Rızvanoğlu, Serhat Güney, Emre Kızılkaya, Betül Aydoğan, Ayşegül Boyalı, Onurcan Güden

Abstract:

This study addresses the re-construction and implementation process of a methodological framework developed to evaluate how locative media applications accompany the urban experiences of international students coming to Istanbul on exchange programs in 2022. The research design was built on a three-stage model. In the first stage, the research team conducted a qualitative questionnaire to gain exploratory data; these data were then used to form three persona groups representing the sample by applying cluster analysis. In the second phase, a semi-structured digital diary study was carried out on a gamified task list with a sample selected from the persona groups. This stage proved to be the most difficult in terms of obtaining valid data from the participant group. The research team re-evaluated the design of this second phase to reach the participants who would perform the tasks given by the research team while sharing their momentary city experiences, to ensure a daily data flow for two weeks, and to increase the quality of the obtained data. The final stage, which follows to elaborate on the findings, is the “Walk & Talk”, completed with face-to-face, in-depth interviews. The multiple methods used in the research process contribute to the depth and data diversity of research conducted in the context of urban experience and locative technologies. In addition, by adapting the research design to the experiences of the users included in the sample, the differences and similarities between the initial research design and the research as applied are shown.

Keywords: digital diary study, gamification, multi-model research, persona analysis, research design for urban experience, user-centered research, “Walk & Talk”

Procedia PDF Downloads 162