Search results for: exponential interpolation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 535


145 A Computationally Intelligent Framework to Support Youth Mental Health in Australia

Authors: Nathaniel Carpenter

Abstract:

Web-enabled systems for supporting youth mental health management in Australia are pioneering in their field; however, with their success, these systems are experiencing exponential growth in demand which is straining an already stretched service. Supporting youth mental health is critical, as the lack of support is associated with significant and lasting negative consequences. To meet this growing demand, and provide critical support, investigations are needed on evaluating and improving existing online support services. Improvements should focus on developing frameworks capable of augmenting and scaling service provisions. There are few investigations informing best-practice frameworks when implementing e-mental health support systems for youth mental health; there are fewer which implement machine learning or artificially intelligent systems to facilitate the delivery of services. This investigation will use a case study methodology to highlight the design features which are important for systems to enable young people to self-manage their mental health. The investigation will also highlight the current information system challenges, including challenges associated with service quality, provisioning, and scaling. This work will propose methods of meeting these challenges through improved design, service augmentation and automation, service quality, and through artificially intelligent inspired solutions. The results of this study will inform a framework for supporting youth mental health with intelligent and scalable web-enabled technologies to support an ever-growing user base.

Keywords: artificial intelligence, information systems, machine learning, youth mental health

Procedia PDF Downloads 87
144 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative to measure the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In order to assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, using only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half-maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average obtained from the FWHMs of group A shows a behavior that is almost linear; therefore, it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B follows a slight exponential model when the temperature rises between 373 K and 393 K. Results of the Student's t-test show a probability of 95% (0.05) of the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
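
As an illustration of the measurement step described above, the short sketch below computes the FWHM of a single fringe intensity profile by locating the half-maximum crossings; the Gaussian test profile and numerical values are hypothetical stand-ins for data exported from ImageJ.

import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x).

    Finds the half-maximum level and linearly interpolates the crossing
    points on either side of the peak, as one would do on a fringe
    intensity profile exported from ImageJ.
    """
    y = np.asarray(y, dtype=float)
    half = (y.max() + y.min()) / 2.0
    above = np.where(y >= half)[0]
    i_left, i_right = above[0], above[-1]

    def cross(i0, i1):
        # Linear interpolation of the x position where y crosses `half`.
        return x[i0] + (half - y[i0]) * (x[i1] - x[i0]) / (y[i1] - y[i0])

    x_left = cross(i_left - 1, i_left) if i_left > 0 else x[0]
    x_right = cross(i_right + 1, i_right) if i_right < len(y) - 1 else x[-1]
    return x_right - x_left

# Hypothetical fringe profile: a Gaussian of sigma = 2.0, whose FWHM is ~4.71
x = np.linspace(-10, 10, 401)
y = np.exp(-x**2 / (2 * 2.0**2))
print(f"FWHM = {fwhm(x, y):.2f}")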

Keywords: food industry, interferometric, oils, quality control

Procedia PDF Downloads 348
143 Traditionalism and Modernity in Seoul’s Urban Planning for the Disabled

Authors: Helena Park

Abstract:

For the last three decades, Seoul has experienced an exponential increase in population and concomitant rapid urbanization. With such development, Korea adopted a predominantly Western style of architecture but still based the structures on Korea's traditionalism and Confucian precepts of pung su (feng shui). While Korean urban planning is focusing on balancing modernism and traditionalism in its city architecture, particularly in landmark sites like the Seoul N Tower and Gyeongbok Palace, the accessibility and convenience concerns of minority social groups like the disabled are habitually disregarded. With the implementation of ramps and elevators, the welfare of all citizens seemed to improve. According to the dictates of traditional Korean culture, it was crucial for those construed as "disabled" or "underprivileged" to feel natural in the city of Seoul, which is planned and built on the aesthetic theory of being harmonized with nature. It was interesting and also alarming to see the extent to which Korean landmarks were lacking facilities for the disabled throughout the city. Standards set by the Ministry of Health and Welfare and the Seoul Metropolitan City insist that buildings accommodate the needs of the disabled as well as the non-disabled equally, but it was hard to find buildings in Seoul - old or new - that fulfilled all the requirements. Where the requirements were fulfilled, some of the facilities were hard to find or not well maintained. There is thus a serious concern for planning reform in connection with Seoul's 2030 Urban Plan. This paper argues that alternative planning could better integrate Korea's traditionalist architecture and concepts of pung su rather than insist on the necessity of Western-style modernism as the sole modality for achieving accessibility for the disabled in Korea.

Keywords: accessibility, architecture of Seoul, Pung Su (Feng Shui), traditionalism, modernism in Seoul

Procedia PDF Downloads 202
142 Impure Water, a Future Disaster: A Case Study of Lahore Ground Water Quality with GIS Techniques

Authors: Rana Waqar Aslam, Urooj Saeed, Hammad Mehmood, Hameed Ullah, Imtiaz Younas

Abstract:

This research has been conducted to assess the water quality in and around the Lahore metropolitan area on the basis of three different land uses, i.e., residential, commercial, and industrial. For this, 29 sample sites were selected on the basis of a simple random sampling technique. Samples were collected at the source (WASA tube wells). The criterion for selecting sample sites was to have a maximum concentration of population in the selected land uses. The results showed that in residential land use, the proportions of nitrate and turbidity are at their highest levels in the areas of Allama Iqbal Town and Samanabad Town. The commercial land use of Gulberg and Data Gunj Bakhsh Town has the highest proportions of chlorides, calcium, TDS, pH, Mg, total hardness, arsenic, and alkalinity. In the industrial land use of Ravi and Wahga Town, the proportions of arsenic, Mg, nitrate, pH, and turbidity are at their highest levels. The high concentrations of these parameters in these areas are basically due to the old and fractured pipelines that allow bacterial as well as physiochemical contaminants to contaminate the potable water at the sources. Furthermore, it is seen in most areas that waste water from domestic, industrial, as well as municipal sources is easily discharged into open spaces and water bodies, like canals, rivers, and lakes, where it seeps down and becomes a part of the groundwater. In addition, huge dumps located in Lahore are becoming a cause of groundwater contamination: when rain falls, the water seeps into the ground and degrades the groundwater quality. On the basis of the results derived with the help of the geospatial technology ArcGIS 9.3 and Inverse Distance Weighted (IDW) interpolation, it is recommended that water filtration plants be installed with specific parameter control. A separate team has to be formed for proper water quality inspection at the source. Old water pipelines must be replaced with new pipelines, and a safe water depth must be ensured at the source end.
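
For illustration, a minimal sketch of the IDW interpolation step used to map the measured parameters over the study area is given below; the site coordinates and nitrate values are hypothetical placeholders, not the study's data, and ArcGIS performs this calculation internally in its IDW tool.

import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighting interpolation.

    xy_known : (n, 2) array of sample-site coordinates
    values   : (n,)  array of the measured parameter (e.g. nitrate, mg/l)
    xy_query : (m, 2) array of grid points to estimate
    """
    xy_known = np.asarray(xy_known, float)
    xy_query = np.asarray(xy_query, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-12):            # query point coincides with a sample site
            out[i] = values[d.argmin()]
            continue
        w = 1.0 / d**power               # closer sites get exponentially larger weight
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# Hypothetical coordinates and nitrate values for a few of the 29 tube-well
# sites; real values would come from the laboratory results.
sites = [(0.0, 0.0), (1.0, 0.5), (0.3, 1.2), (1.5, 1.5)]
nitrate = [12.0, 25.0, 8.0, 30.0]
grid = [(0.5, 0.5), (1.2, 1.0)]
print(idw(sites, nitrate, grid))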

Keywords: GIS, remote sensing, pH, nitrate, disaster, IDW

Procedia PDF Downloads 198
141 3D Codes for Unsteady Interaction Problems of Continuous Mechanics in Euler Variables

Authors: M. Abuziarov

Abstract:

The designed complex is intended for the numerical simulation of fast dynamic processes of interaction of heterogeneous media susceptible to significant deformation. The main challenges in solving such problems are associated with the construction of the numerical meshes. Currently, there are two basic approaches to this problem. One is the use of a Lagrangian or Lagrangian-Eulerian grid associated with the boundaries of the media; the second is associated with a fixed Eulerian mesh whose boundary cells cut the boundaries of the medium and require the calculation of these cut volumes. Both approaches require complex grid generators and significant time for preparing the code's data for simulation. In these codes, these problems are solved using two grids, a regular fixed one and a mobile local Lagrangian-Eulerian one (the ALE approach) accompanying the contact and free boundaries, the surfaces of shock waves and phase transitions, and other possible features of the solutions, with mutual interpolation of the integrated parameters. For modeling both liquids and gases, and deformable solids, a Godunov scheme of increased accuracy is used in Lagrangian-Eulerian variables, the same for the Euler equations and for the Euler-Cauchy equations describing the deformation of the solid. The increased accuracy of the scheme is achieved by using a 3D space-time dependent solution of the discontinuity problem (a 3D space-time dependent Riemann problem solver). The same solution is used to calculate the interaction at the liquid-solid surface (the fluid-structure interaction problem). The codes do not require complex 3D mesh generators; only the surfaces of the objects to be calculated are given by the user, as STL files created by means of engineering graphics, which greatly simplifies the preparation of the task and makes the codes convenient for direct use by the designer at the design stage. The results of test solutions and applications related to the generation and propagation of detonation and shock waves and the loading of structures are presented.

Keywords: fluid structure interaction, Riemann's solver, Euler variables, 3D codes

Procedia PDF Downloads 413
140 Sesame (Sesamum Indicum L.): Molecular Breeding and Transformation

Authors: Micheale Yifter Weldemichael, Stefaan Werbrouck, Hailay Mehari Gebremedhn

Abstract:

Sesame (Sesamum indicum L.) is among the most important oilseed crops for its high edible oil quality and quantity. Sesame is grown for food, medicinal, pharmaceutical, and industrial uses. Sesame is also cultivated as a main cash crop in Asia and Africa by smallholder farmers. Despite the global exponential increase in sesame cultivation area, its production and productivity remain low, mainly due to biotic and abiotic constraints. Notwithstanding the efforts to solve these problems, a low level of genetic variation and inadequate genomic resources hinder the progress of sesame improvement. The objective of this paper is, therefore, to review recent advances in the areas of molecular breeding and transformation that could help overcome major production constraints and result in enhanced and sustained sesame production. This paper reviews the research conducted to date on molecular breeding and genetic transformation in sesame, focusing on the molecular markers used, the available online database resources, the genes responsible for key agronomic traits, as well as transgenic technology and genome editing. The review concentrates on quantitative and semi-quantitative studies on molecular breeding for key agronomic traits such as improvement of yield components, oil and oil-related traits, disease and insect/pest resistance, and drought, waterlogging and salt tolerance, as well as sesame genetic transformation and genome editing techniques. Pitfalls and limitations of existing studies and methodologies are identified, and priorities for future research directions in sesame genetic improvement are outlined.

Keywords: molecular breeding, oil, sesame, shattering

Procedia PDF Downloads 41
139 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables to determine stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value approach but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradients. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of +/- 10% was met for cases ranging from a steep stress gradient in a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.

Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient

Procedia PDF Downloads 113
138 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience rating technique in actuarial science which can be seen as one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported claims to the insurer), where credibility theory can be used to estimate the claim size amount. In this study, we focused on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and squared error loss, which is a symmetric loss function, with informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's prior belief about the insured's risk level after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium in the case when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations which are not analytically solvable. The paper finds a solution to this problem by deriving this estimator using a numerical approximation (the Lindley approximation), which is one of the suitable approximation methods for solving such problems; it approaches the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error is used to compare the Bayesian premium estimator under the above loss functions.

Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation

Procedia PDF Downloads 298
137 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS

Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar

Abstract:

Simulation of a high-speed inviscid steady ideal air flow around a 2D/axisymmetric body was carried out using the mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric, and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre- and post-processing programs. The pre-processing programs scriptit and mb_prep start with a short script describing the geometry, initial flow state and boundary conditions and produce a discretized version of the initial flow state. The main flow simulation program (or solver, as it is sometimes called) is mb_cns. It takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time and writes the evolved flow data to a set of output files. This output data may consist of the flow state (over the whole domain) at a number of instants in time. After integration in time, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or postscript plots of flow quantities such as pressure, temperature and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain for the current problem (a strake-configuration wing) is discretized by a structured grid, and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers, and explicit time stepping is used to update the conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (Riemann solver, AUSMDV flux splitting and the Equilibrium Flux Method, EFM) are used to calculate inviscid fluxes across cell faces.

Keywords: steady flow simulation, processing programs, simulation code, inviscid flux

Procedia PDF Downloads 402
136 Cars in a Neighborhood: A Case of Sustainable Living in Sector 22 Chandigarh

Authors: Maninder Singh

Abstract:

The city of Chandigarh is under the strain of exponential growth in car density across its neighborhoods. The consumerist nature of today's society is to be blamed for this menace, because everyone wants to own and ride a car. Car manufacturers are busy selling two or more cars per household. The Regional Transport Offices are busy issuing as many licenses to new vehicles as they can in order to generate revenue in the form of road tax. Car traffic in the neighborhoods of Chandigarh has reached a tipping point. There needs to be a more empirical and sustainable model of cars per household, based on specific parameters of livable neighborhoods. Sector 22 in Chandigarh is one of the first residential sectors to be established in the city. There is scope to think, reflect, and work out a method to know how many cars we can sell to citizens before we lose the argument to traffic problems, parking problems, and road rage. This is where the true challenge of a planner or a designer of the city lies. Currently, in Chandigarh, there are no clear visible answers to this problem. The way forward is to look at the spatial mapping, planning, and design of car parking units to address the problem, rather than suggesting extreme measures of banning cars (short term) or promoting plans for citywide transport (very long term). This is a chance to resolve the problem with a pragmatic approach from a citizen's perspective, instead of an orthodox development planner's methodology. Since citizens are at the center of how the problem is to be addressed, acceptable solutions are more likely to emerge from the car and traffic problem as defined by the citizens. Thus, the idea and its implementation would be interesting in comparison to the known academic methodologies. This novel and innovative process would lead to a more acceptable and sustainable approach to the issue of the number of car parks in the neighborhoods of Chandigarh city.

Keywords: cars, Chandigarh, neighborhood, sustainable living, walkability

Procedia PDF Downloads 117
135 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have shown exponential growth globally in recent years, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth of related e-waste deserves sufficient attention regionally and globally, since about 40% of its total weight is made of metals, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different research approaches and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones regardless of country or region and to overcome the previous errors. The logistic model method combined with the STELLA program has been used to carry out this study. The simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, corresponding to 125 tons of waste, in 2014, with e-waste production peaking in 2017. This is expected to reach 4.17 million obsolete phones and 351.97 tons by 2020, with an environmental impact intensity of 21 times that of 2005. Thus, it is concluded through the model testing and validation that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has answered the questions raised by previous studies from the Czech Republic, Iran, and China.
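
As a rough illustration of the logistic growth step underlying such a model, the sketch below fits a logistic curve to a cumulative phone-stock series and derives a first-order obsolete-phone flow by delaying new purchases by an assumed average lifetime; the figures and the four-year lifetime are hypothetical placeholders, and the paper's full STELLA system-dynamics model (with replacement purchases) is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth of cumulative mobile-phone stock: K / (1 + exp(-r*(t - t0)))."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative phone stock in Rwanda (millions), year 0 = 2005;
# the paper fits official subscription statistics instead.
years = np.arange(0, 10)
stock = np.array([0.29, 0.43, 0.64, 0.94, 1.33, 1.8, 2.3, 2.8, 3.2, 3.5])

(K, r, t0), _ = curve_fit(logistic, years, stock, p0=(5.0, 0.5, 5.0))
print(f"carrying capacity K = {K:.2f} million, growth rate r = {r:.2f}/yr")

# First-order estimate: phones bought `lifetime` years ago become obsolete today.
lifetime = 4
new_phones_2013 = logistic(8, K, r, t0) - logistic(7, K, r, t0)
print(f"rough obsolete-phone flow in 2017: {new_phones_2013:.2f} million")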

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 318
134 Critically Sampled Hybrid Trigonometry Generalized Discrete Fourier Transform for Multistandard Receiver Platform

Authors: Temidayo Otunniyi

Abstract:

This paper presents a low-computational channelization algorithm for the multi-standards platform using a polyphase implementation of a critically sampled hybrid trigonometry generalized Discrete Fourier Transform (HGDFT). The HGDFT channelization algorithm exploits the orthogonality of two trigonometric Fourier functions, together with the properties of the Quadrature Mirror Filter Bank (QMFB) and the Exponential Modulated Filter Bank (EMFB), respectively. HGDFT shows improvement in its implementation in terms of high reconfigurability, lower filter length, parallelism, and medium computational activity. Type I and type III polyphase structures are derived for real-valued HGDFT modulation. The design specifications are critically decimated and oversampled for both single- and multi-standard receiver platforms. Evaluating the performance of oversampled single-standard receiver channels, the HGDFT algorithm achieved a 40% complexity reduction, compared to 34% and 38% reductions in the Discrete Fourier Transform (DFT) and tree quadrature mirror filter (TQMF) algorithms. The parallel generalized discrete Fourier transform (PGDFT) and recombined generalized discrete Fourier transform (RGDFT) had a 41% complexity reduction, and HGDFT had a 46% reduction in oversampled multi-standard mode. In the critically sampled multi-standard receiver channels, HGDFT had a complexity reduction of 70%, while both PGDFT and RGDFT had a 34% reduction.

Keywords: software defined radio, channelization, critical sample rate, over-sample rate

Procedia PDF Downloads 95
133 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution, leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed "bathtub"-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competitive distributions based on the goodness-of-fit tests, the log-likelihood, and the information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
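
To illustrate the kind of maximum-likelihood fitting and information-criterion comparison described above, the sketch below fits two of the named sub-models (the standard log-logistic, available in SciPy as fisk, and the Weibull) to simulated survival times and compares their AIC; the data are synthetic, and the authors' generalized log-logistic distribution itself is not implemented here.

import numpy as np
from scipy import stats

# Hypothetical uncensored survival times (months); a real analysis would use
# the study's data set and would also fit the new generalized log-logistic model.
times = stats.fisk.rvs(c=2.2, scale=14.0, size=150, random_state=42)

candidates = {
    "log-logistic (fisk)": stats.fisk,
    "Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)           # MLE with location fixed at 0
    loglik = np.sum(dist.logpdf(times, *params))
    k = len(params) - 1                        # free parameters (loc is fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:20s} log-lik = {loglik:8.2f}  AIC = {aic:8.2f}")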

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 166
132 DNA Damage and Apoptosis Induced in Drosophila melanogaster Exposed to Different Duration of 2400 MHz Radio Frequency-Electromagnetic Fields Radiation

Authors: Neha Singh, Anuj Ranjan, Tanu Jindal

Abstract:

Over the last decade, the exponential growth of mobile communication has been accompanied by a parallel increase in the density of electromagnetic fields (EMF). The continued expansion of mobile phone usage raises important questions, as EMF, especially radio frequency (RF), have long been suspected of having biological effects. In the present experiments, we studied the effects of RF-EMF on cell death (apoptosis) and DNA damage in a well-tested biological model, Drosophila melanogaster, exposed to a 2400 MHz frequency for different durations, i.e., 2 hrs, 4 hrs, 6 hrs, 8 hrs, 10 hrs, and 12 hrs each day for five continuous days under ambient temperature and humidity conditions inside an exposure chamber. The flies were grouped into control, sham-exposed, and exposed groups with 100 flies in each group. In this study, well-known techniques like the Comet Assay and the TUNEL (Terminal deoxynucleotidyl transferase dUTP Nick End Labeling) Assay were used to detect DNA damage and to study apoptosis, respectively. The experimental results showed DNA damage in the brain cells of Drosophila which increases as the duration of exposure increases, observed when we compared the results of the control, sham-exposed, and exposed groups, which indicates that EMF radiation induced stress in the organism that leads to DNA damage and cell death. The processes of apoptosis and mutation follow similar pathways in all eukaryotic cells; therefore, studying apoptosis and genotoxicity in Drosophila has similar relevance for human beings as well.

Keywords: cell death, apoptosis, Comet Assay, DNA damage, Drosophila, electromagnetic fields, EMF, radio frequency, RF, TUNEL assay

Procedia PDF Downloads 132
131 Time-Dependent Reliability Analysis of Corrosion Affected Cast Iron Pipes with Mixed Mode Fracture

Authors: Chun-Qing Li, Guoyang Fu, Wei Yang

Abstract:

A significant portion of current water networks is made of cast iron pipes. Due to aging and deterioration, with corrosion being the most predominant mechanism, the failure rate of cast iron pipes is very high. Although considerable research has been carried out in the past few decades, most of it is on the effect of corrosion on the structural capacity of pipes using strength theory as the failure criterion. This paper presents a reliability-based methodology for the assessment of cracking failures of corrosion-affected cast iron pipes. A nonlinear limit state function taking into account all three fracture modes is proposed for brittle metal pipes with mixed-mode fracture. A stochastic model of the load effect is developed, and a time-dependent reliability method is employed to quantify the probability of failure and predict the remaining service life. A case study is carried out using the proposed methodology, followed by a sensitivity analysis to investigate the effects of the random variables on the probability of failure. It has been found that the larger the inclination angle or the Mode I fracture toughness is, the smaller the probability of pipe failure is. It has also been found that the multiplying and exponential coefficients k and n in the power-law corrosion model and the internal pressure have the most influence on the probability of failure for cast iron pipes. The methodology presented in this paper can assist pipe engineers and asset managers in developing a risk-informed and cost-effective strategy for better management of corrosion-affected pipelines.
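
For orientation only, the sketch below shows the general shape of a time-dependent Monte Carlo reliability calculation with the power-law corrosion model d(t) = k*t^n; it uses a deliberately simplified limit state (corrosion depth exceeding a critical fraction of the wall thickness) and made-up parameter distributions, whereas the paper's actual limit state is the nonlinear mixed-mode fracture criterion, which is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

wall = 12.0                                   # mm, nominal wall thickness (assumed)
k = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n_sim)   # mm/yr^n, multiplying coefficient
n = rng.normal(0.6, 0.05, size=n_sim)         # power-law exponent
critical = 0.7 * wall                         # assumed critical corrosion depth

for t in (25, 50, 75, 100):                   # years in service
    depth = k * t**n                          # corrosion depth at time t for each sample
    pf = np.mean(depth > critical)            # fraction of samples exceeding the limit state
    print(f"t = {t:3d} yr  P(failure) = {pf:.4f}")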

Keywords: corrosion, inclined surface cracks, pressurized cast iron pipes, stress intensity

Procedia PDF Downloads 285
130 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N. Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1, M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j = 1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j = 1, 2) is that either (m+1) µ2 > µ1 > m µ2 or (m+1) µ1 > µ2 > m µ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be installed if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise by equalizing the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts its service rate to μ1 while the system size is strictly less than T = (m+2) (with μ2 = 0), and to μ1 + μ2 (with μ2 > 0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1, M2/2 queues. To show that this investigation has a viable application, the results of M/M1+M2/1 queues have been used for the processing of waiting messages at a single computer node and to measure the power consumption of the node.

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 341
129 Vibration Based Damage Detection and Stiffness Reduction of Bridges: Experimental Study on a Small Scale Concrete Bridge

Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti

Abstract:

Structural systems are often subjected to degradation processes due to different kinds of phenomena, like unexpected loadings, ageing of the materials, and fatigue cycles. This is especially true for bridges, for which safety evaluation is crucial for the purpose of planning maintenance. This paper discusses the experimental evaluation of the stiffness reduction from frequency changes due to a uniform damage scenario. For this purpose, a 1:4 scaled bridge has been built in the laboratory of the University of Bologna. It is made of concrete, and its cross section is composed of a slab linked to four beams. This concrete deck is 6 m long and 3 m wide, and its natural frequencies have been identified dynamically by exciting it with an impact hammer, a dropping weight, or by walking on it randomly. After that, a set of loading cycles has been applied to this bridge in order to produce a uniformly distributed crack pattern. During the loading phase, both the cracking moment and the yielding moment were reached. In order to define the relationship between frequency variation and loss in stiffness, the natural frequencies of the bridge have been identified before and after the occurrence of the damage corresponding to each load step. The behavior of breathing cracks and its effect on the natural frequencies has been taken into account in the analytical calculations. By using an exponential function derived from the study of a large number of experimental tests in the literature, it has been possible to predict the stiffness reduction from the frequency variation measurements. During the load tests, crack opening and mid-span vertical displacement were also monitored.

Keywords: concrete bridge, damage detection, dynamic test, frequency shifts, operational modal analysis

Procedia PDF Downloads 158
128 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

The present research aims at the empirical validation of three option valuation models: the ad hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100, and Russell 2000, that were traded during the year 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution has three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to exhibit overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used the technique of nonlinear curve-fitting through the trust-region-reflective algorithm and cross-section options to estimate the structural parameters of the Kou jump-diffusion model.
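
For intuition about the double-exponential jump structure, the sketch below simulates the Kou jump-diffusion under the risk-neutral measure and prices a European call by Monte Carlo; the parameter values are illustrative, not the paper's estimates. A calibration along the lines of the paper would wrap a pricer like this (or the closed-form Kou formula) in a nonlinear least-squares routine such as scipy.optimize.least_squares with method='trf', i.e. the trust-region-reflective algorithm.

import numpy as np

def kou_mc_call(S0, K, T, r, sigma, lam, p, eta1, eta2, n_paths=50_000, seed=0):
    """Monte Carlo price of a European call under the Kou (2002) jump-diffusion.

    Log-jumps are asymmetric double-exponential: upward with probability p and
    rate eta1 (> 1 required), downward with probability 1-p and rate eta2.
    The drift is risk-neutral, compensated by the mean relative jump size kappa.
    """
    rng = np.random.default_rng(seed)
    kappa = p * eta1 / (eta1 - 1.0) + (1.0 - p) * eta2 / (eta2 + 1.0) - 1.0
    n_jumps = rng.poisson(lam * T, n_paths)
    jump_sum = np.zeros(n_paths)
    for i, n in enumerate(n_jumps):
        if n:
            up = rng.random(n) < p
            j = np.where(up,
                         rng.exponential(1.0 / eta1, n),
                         -rng.exponential(1.0 / eta2, n))
            jump_sum[i] = j.sum()
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - lam * kappa - 0.5 * sigma**2) * T
                     + sigma * np.sqrt(T) * z + jump_sum)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative parameters only (not estimated from the 2007 index-option sample).
price = kou_mc_call(S0=100, K=100, T=0.5, r=0.05,
                    sigma=0.16, lam=1.0, p=0.4, eta1=10.0, eta2=5.0)
print(f"Kou MC call price: {price:.3f}")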

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 403
127 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, and this leads to high inter- and intra-observer variability. Hence automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of whole 3D images. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibia cartilage, femoral bone and tibia bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization, respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of the modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Score Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibia cartilage, with manual segmentation as the reference.
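
A minimal sketch of the dilated-convolution idea is given below (written in PyTorch as an assumed framework; the authors' actual implementation and layer configuration are not specified in the abstract): stacking 3x3 convolutions with dilations 1, 2, 4, 8 grows the receptive field roughly exponentially (3, 7, 15, 31 pixels) while the parameter count grows only linearly, and a final 1x1 convolution maps the features to the 5 segmentation labels.

import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Stack of 3x3 convolutions with exponentially increasing dilation rates."""
    def __init__(self, in_ch=1, mid_ch=32, n_classes=5):
        super().__init__()
        layers, ch = [], in_ch
        for d in (1, 2, 4, 8):
            layers += [nn.Conv2d(ch, mid_ch, kernel_size=3, padding=d, dilation=d),
                       nn.BatchNorm2d(mid_ch),
                       nn.ReLU(inplace=True)]
            ch = mid_ch
        # 1x1 convolution maps features to the 5 labels
        # (background, femoral/tibia cartilage, femoral/tibia bone).
        layers.append(nn.Conv2d(mid_ch, n_classes, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)   # per-pixel class logits, same spatial size as the input

model = DilatedBlock()
slice_batch = torch.randn(2, 1, 256, 256)   # two hypothetical MR slices
print(model(slice_batch).shape)             # torch.Size([2, 5, 256, 256])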

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 231
126 Spectroscopic Study of Tb³⁺ Doped Calcium Aluminozincate Phosphor for Display and Solid-State Lighting Applications

Authors: Sumandeep Kaur, Allam Srinivasa Rao, Mula Jayasimhadri

Abstract:

In recent years, rare earth (RE) ion-doped inorganic luminescent materials have been attracting great attention due to their excellent physical and chemical properties. These materials offer high thermal and chemical stability and exhibit good luminescence properties due to the presence of RE ions. The luminescent properties of these materials are attributed to the intra-configurational f-f transitions of the RE ions. A series of Tb³⁺ doped calcium aluminozincate samples has been synthesized via the sol-gel method. The structural and morphological studies have been carried out by recording X-ray diffraction patterns and an SEM image. The luminescence spectra have been recorded for a comprehensive study of the luminescence properties. The XRD profile reveals a single-phase orthorhombic crystal structure with an average crystallite size of 65 nm, as calculated by using the Debye-Scherrer equation. The SEM image exhibits a completely random, irregular morphology of micron-sized particles of the prepared samples. The optimization of luminescence has been carried out by varying the dopant Tb³⁺ concentration within the range from 0.5 to 2.0 mol%. The as-synthesized phosphors exhibit intense emission at 544 nm when pumped at a 478 nm excitation wavelength. The optimal Tb³⁺ concentration has been found to be 1.0 mol% in the present host lattice. The decay curves show bi-exponential fitting for the as-synthesized phosphor. The colorimetric studies show green emission with CIE coordinates (0.334, 0.647) lying in the green region for the optimized Tb³⁺ concentration. This report reveals the potential utility of Tb³⁺ doped calcium aluminozincate phosphors for display and solid-state lighting devices.
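
As an illustration of the bi-exponential decay analysis mentioned above, the sketch below fits I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) to a synthetic decay trace and computes the intensity-weighted average lifetime; the time scale and parameter values are hypothetical, not the measured ones.

import numpy as np
from scipy.optimize import curve_fit

def bi_exp(t, a1, tau1, a2, tau2):
    """Bi-exponential decay I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Hypothetical decay trace (ms); real data would come from the fluorimeter.
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(1)
signal = bi_exp(t, 0.7, 0.8, 0.3, 2.5) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(bi_exp, t, signal, p0=(0.5, 1.0, 0.5, 3.0))
a1, tau1, a2, tau2 = popt
# Intensity-weighted average lifetime, commonly reported for phosphors.
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau1 = {tau1:.2f} ms, tau2 = {tau2:.2f} ms, <tau> = {tau_avg:.2f} ms")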

Keywords: concentration quenching, phosphor, photoluminescence, XRD

Procedia PDF Downloads 120
125 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of the paper is to estimate the US small wind turbines market potential and forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work, an exponential distribution is used for modeling replacement purchases. The single parameter of this distribution is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis using the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57,080 (confidence interval from 49,294 to 64,866 at P = 0.95) under an average wind turbine lifetime of 15 years, and 62,402 (confidence interval from 54,154 to 70,648 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model. This required a price forecast. To do this, a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42,542 (confidence interval from 32,863 to 52,221 at P = 0.95) under an average lifetime of 15 years, and 47,426 (confidence interval from 36,092 to 58,760 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 95.3%, while in the second it is also 95.3%.
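
A minimal sketch of the nonlinear-regression step for the plain Bass model (no replacement purchases or price effects) is shown below; the cumulative sales series is a hypothetical stand-in for the AWEA statistics, and the generalized Bass and exponential-replacement extensions used in the paper are not reproduced.

import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions N(t) under the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Hypothetical annual cumulative unit sales (year 0 = 2001), in units.
years = np.arange(12)
cum_sales = np.array([1.2, 2.9, 4.9, 7.4, 10.5, 14.3, 18.9, 24.2,
                      30.1, 36.2, 42.0, 47.2]) * 1e3

(m, p, q), _ = curve_fit(bass_cumulative, years, cum_sales,
                         p0=(60e3, 0.01, 0.3), maxfev=10_000)
print(f"market potential m = {m:.0f}, innovation p = {p:.4f}, imitation q = {q:.3f}")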

Keywords: bass model, generalized bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States

Procedia PDF Downloads 324
124 Simultaneous Saccharification and Fermentation for D-Lactic Acid Production from Dried Distillers Grains with Solubles

Authors: Nurul Aqilah Mohd Zaini, Afroditi Chatzifragkou, Dimitris Charalampopoulos

Abstract:

D-lactic acid production is gaining increasing attention due to the thermostable properties of its polymer, polylactic acid (PLA). In this study, D-lactic acid was produced in microbial cultures using Lactobacillus coryniformis subsp. torquens as the D-lactic acid producer and hydrolysates of Dried Distillers Grains with Solubles (DDGS) as the fermentation substrate. Prior to fermentation, DDGS was first alkaline pretreated with 5% (w/v) NaOH for 15 minutes (121°C / ~16 psi). This led to the generation of DDGS solid residues rich in carbohydrates and especially cellulose (~52%). The carbohydrate-rich solids were then subjected to enzymatic hydrolysis with Accellerase® 1500. For Separate Hydrolysis and Fermentation (SHF), enzymatic hydrolysis was carried out at 50°C for 24 hours, followed by fermentation to D-lactic acid at 37°C at a controlled pH of 6. The obtained hydrolysate contained 24 g/l glucose, 5.4 g/l xylose and 0.6 g/l arabinose. In the case of Simultaneous Saccharification and Fermentation (SSF), hydrolysis and fermentation were conducted in a single-step process at 37°C and pH 5. The enzymatic hydrolysis of the pretreated DDGS solids took place mostly during the lag phase of the L. coryniformis fermentation, with only a small amount of glucose consumed during the first 6 h. When the exponential phase started, glucose generation slowed as the microorganism started to consume glucose for D-lactic acid production. Higher concentrations of D-lactic acid were produced when the SSF approach was applied, with 28 g/l D-lactic acid after 24 h of fermentation (84.5% yield). In contrast, 21.2 g/l D-lactic acid was produced when SHF was used. The optical purity of the D-lactic acid produced in both experiments was 99.9%. Besides, approximately 2 g/l acetic acid was also generated due to lactic acid degradation after glucose depletion in SHF. SSF proved efficient for DDGS utilisation and D-lactic acid production, reducing the overall processing time and yielding sufficient D-lactic acid concentrations without the generation of fermentation by-products.

Keywords: DDGS, alkaline pretreatment, SSF, D-lactic acid

Procedia PDF Downloads 310
123 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures

Authors: Michał Lidner, Zbigniew Szcześniak

Abstract:

The ability to estimate blast load overpressure properly plays an important role in the safety design of buildings. The issue of blast loading on structural elements has been explored for many years. However, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time. This introduces some errors when comparing the real and numerical reactions of elements. Nonetheless, it is possible to set a function much closer to the real blast load overpressure versus time. The paper presents a method of numerical analysis of the phenomenon of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer with respect to an adiabatic process rule. A system of three equations (conservation of mass, momentum and energy) describes the flow of a volume of gaseous medium in the area remote from building compartments, which can inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free field explosion, an explosion inside an insusceptible steel tube (the 1D case) and an explosion inside an insusceptible cube (the 3D case). The results of the numerical analysis were compared with literature reports. Values of the impulse, the pressure, and its duration were studied. Finally, an overall good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.

Keywords: adiabatic process, air shock wave, explosive, finite volume method

Procedia PDF Downloads 162
122 Durability Analysis of a Knuckle Arm Using VPG System

Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee

Abstract:

A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A former study suggested the lightweight design of a knuckle arm considering the structural performances and using metamodel-based optimization. Six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method. The finite element method was utilized to predict the structural responses. The suggested knuckle was made of the aluminum alloy Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle, satisfying the design requirements. Then, we investigated its manufacturability by performing forging analysis. The forging was done as a hot process, and the product was made through two-step forging. As a final step of the development process, the durability is investigated by using the flexible dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a carmaker does not share all the information with the part manufacturer. Thus, the part manufacturer has a limit in predicting the durability performance at the full-car level. eta/VPG has libraries of suspensions, tires, and roads, which are commonly used parts. That makes full car modeling possible. First, the full car is modeled by referencing the following information: Overall Length: 3,595 mm, Overall Width: 1,595 mm, CVW (Curb Vehicle Weight): 910 kg, Front Suspension: MacPherson Strut, Rear Suspension: Torsion Beam Axle, Tire: 235/65R17. Second, the road is selected as cobblestone. The road condition of the cobblestone is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm. The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it can be seen that the developed knuckle arm satisfies the durability design requirement at the full-car level. The VPG analysis is successfully performed even though it does not give an exact prediction, since the full car model is a very rough one. Thus, this approach can be used effectively when the details of the full car are not given.

Keywords: knuckle arm, structural optimization, Metamodel, forging, durability, VPG (Virtual Proving Ground)

Procedia PDF Downloads 397
121 The Use of Social Media by Companies Operating on the Polish Market in the Context of the Corporate Reputation Management

Authors: Danuta Szwajca

Abstract:

The exponential growth of the Internet and social media (SM) in recent years has contributed to changing the communication environment, in which stakeholders (customers, investors, business partners, employees), like other users, may post and distribute their opinions about a company and its products. This generates a number of potential threats to the image and reputation of both people and organizations. Social media create new opportunities not only for rapid and interactive communication but also for organizing themselves into strong pressure groups which may effectively affect the decisions of various organized bodies. Companies cannot ignore this fact and should use SM not only as an additional marketing communication channel but in a broader context - as a tool to build and protect their reputation. This article aims to identify the extent, scope, and directions of the use of SM in the activities of companies operating in the Polish market, as well as to identify the threats and opportunities generated by these media in the area of reputation management. The results of the research presented in the article showed that Polish companies recognize the potential of SM and try to apply them in their marketing efforts. However, this activity is limited to maintaining communication with customers through two portals: Facebook and Twitter. In the approach to SM as a communication channel, the traditional way of thinking dominates, in which they are treated as just another promotional tool used by two departments: marketing and PR. This approach is called a "silo" approach and is not integrated. This way of using SM does not allow effective building and protecting of reputation in the Internet environment. To achieve this goal, the following research methods were used: a critical analysis of the literature, an analysis of secondary sources in the form of a report from research conducted by Harvard Business Review Poland together with Capgemini Poland, and a case study.

Keywords: corporate reputation, reputation management, social media, risk reputation

Procedia PDF Downloads 172
120 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company

Authors: Korpapa Srisamai, Pawee Siriruk

Abstract:

The purpose of this study is to forecast product demands and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company has to provide additional storage space for it. Moreover, some items, when stored for a long period of time, deteriorate into dead stock. A case study of a wholesale company of electronic equipment and components, which faces uncertain customer demands, is considered. The actual purchase orders of customers are not equal to the forecasts provided by the customers. In some cases, customers have higher product demands, resulting in the product being insufficient to meet the customer's needs. However, some customers have lower demands for products than estimated, causing insufficient storage space and dead stock. This study aims to reduce the loss of sales opportunities and the number of goods remaining in the warehouse, citing 30 samples of the company's most popular products. The data were collected over the duration of the study from January to October 2022. The forecasting methods used are the simple moving average, weighted moving average, and exponential smoothing methods. The economic order quantity and reorder point are calculated to meet customer needs, and the results are tracked. The research results are very beneficial to the company. The company can reduce the loss of sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce unused products (dead stock) by up to 10%. This enables the company to order products more accurately, increasing profits and storage space.
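
As a concrete illustration of the forecasting and replenishment calculations named above, the sketch below applies simple exponential smoothing to a monthly demand series and then computes the economic order quantity (EOQ = sqrt(2*D*S/H)) and a reorder point; the demand figures, cost parameters, and lead time are hypothetical placeholders, not the company's data.

import numpy as np

def exp_smooth_forecast(demand, alpha=0.3):
    """Simple exponential smoothing; returns the one-period-ahead forecast."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2*D*S/H)."""
    return np.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Reorder point: expected demand over the lead time plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical monthly demand for one SKU (units), January-October.
monthly = np.array([120, 135, 110, 150, 160, 140, 155, 170, 165, 180])
forecast = exp_smooth_forecast(monthly, alpha=0.3)
D = forecast * 12                                   # annualised demand estimate
Q = eoq(D, order_cost=50, holding_cost=4)
rop = reorder_point(daily_demand=D / 365, lead_time_days=7)
print(f"forecast = {forecast:.0f}/month, EOQ = {Q:.0f} units, ROP = {rop:.0f} units")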

Keywords: demand forecast, reorder point, lost sale, dead stock

Procedia PDF Downloads 86
119 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision technology and image processing to determine the soil nutrient composition and the right amount, right time and right place for the application of farm input resources like fertilizers, herbicides and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the areas of image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing, analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented and finally fed into the model. Different techniques have been applied to these processes, from the use of neural networks, support vector machines and fuzzy logic approaches to, most recently, the deep learning approach of convolutional neural networks, which generates excellent results for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded an average accuracy of 99.58%.
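
For concreteness, the sketch below shows a very small convolutional classifier and one training step of the kind such a pipeline relies on (written in PyTorch as an assumed framework); the four soil-nutrient classes, image size, and random tensors are hypothetical placeholders, since the abstract does not specify the actual architecture or dataset.

import torch
import torch.nn as nn

class SoilNet(nn.Module):
    """Tiny CNN mapping remote-sensing image patches to soil-nutrient classes."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SoilNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One dummy training step on random data standing in for labelled image patches.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 4, (8,))
loss = loss_fn(model(images), labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
print(f"training loss: {loss.item():.3f}")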

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 289
118 Multistep Thermal Degradation Kinetics: Pyrolysis of CaSO₄-Complex Obtained by Antiscaling Effect of Maleic-Anhydride Polymer

Authors: Yousef M. Al-Roomi, Kaneez Fatema Hussain

Abstract:

This work evaluates the thermal degradation kinetic parameters of the CaSO₄-complex isolated after the inhibition effect of a maleic-anhydride based polymer (YMR-polymers). Pyrolysis experiments were carried out at four heating rates (5, 10, 15 and 20°C/min). Several analytical model-free methods were used to determine the kinetic parameters, including the Friedman, Coats and Redfern, Kissinger, Flynn-Wall-Ozawa and Kissinger-Akahira-Sunose methods. The Criado model-fitting method, based on the real mechanism followed in the thermal degradation of the complex, has been applied to explain the degradation mechanism of the CaSO₄-complex. In addition, a simple dynamic model was proposed over two temperature ranges for the successive decomposition of the CaSO₄-complex, which is a combination of organic and inorganic parts (adsorbed polymer + CaSO₄.2H₂O scale). The model developed enabled the assessment of the pre-exponential factor (A) and the apparent activation energy (Eₐ) for both stages independently, using a mathematically developed expression based on an integral solution. The unique reaction mechanism approach applied in this study showed that Eₐ₁ = 160.5 kJ/mol for the organic decomposition (adsorbed polymer, stage I) is lower than Eₐ₂ = 388 kJ/mol for the CaSO₄ decomposition (inorganic, stage II). Furthermore, the adsorbed YMR antiscalant not only reduced the decomposition temperature of the CaSO₄-complex compared to the CaSO₄ blank (CaSO₄.2H₂O scale in the absence of YMR-polymer) but also distorted the crystal lattice of the organic complex of CaSO₄ precipitates, destroying their compact and regular crystal structures, as observed from the XRD and SEM studies.
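
As a pointer to how one of the named methods works, the sketch below applies the Kissinger relation ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp): a linear fit of ln(beta/Tp^2) against 1/Tp over the four heating rates gives Ea from the slope and a rough A from the intercept. The peak temperatures used here are hypothetical placeholders, not the measured DTG peaks.

import numpy as np

R = 8.314  # J/(mol K)

# Heating rates beta (K/min) and hypothetical DTG peak temperatures Tp (K).
beta = np.array([5.0, 10.0, 15.0, 20.0])
Tp = np.array([1350.0, 1372.0, 1388.0, 1400.0])

y = np.log(beta / Tp**2)
x = 1.0 / Tp
slope, intercept = np.polyfit(x, y, 1)   # linear Kissinger plot

Ea = -slope * R / 1000.0                           # apparent activation energy, kJ/mol
A = np.exp(intercept) * Ea * 1000.0 / R            # rough pre-exponential factor, 1/min
print(f"Ea = {Ea:.0f} kJ/mol, A = {A:.2e} 1/min")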

Keywords: CaSO₄-complex, maleic-anhydride polymers, thermal degradation kinetics and mechanism, XRD and SEM studies

Procedia PDF Downloads 91
117 Analysis of Factors Influencing the Response Time of an Aspirating Gaseous Agent Concentration Detection Method

Authors: Yu Guan, Song Lu, Wei Yuan, Heping Zhang

Abstract:

Gas fire extinguishing systems are widely used due to their cleanliness and efficiency. Since the spray is affected by many factors such as convection and obstacles in the jetting region, detecting the concentration distribution in the jetting area is indispensable for evaluating effectiveness, and this is commonly achieved by an aspirating concentration detection technique. During the concentration measurement, the response time of the detector is a very important parameter, especially for those fire-extinguishing systems with rapid gas dispersion. A long response time will not only lead to underestimating the concentration but also stretch the measured change of concentration with time. Therefore it is necessary to analyze the factors influencing the response time. In this paper, an aspirating concentration detection method is introduced, which is achieved by using a small critical nozzle and a laminar flowmeter, and because the response time is mainly related to the gas transport process from the sampling site to the sensor, the effects of exhaust pipe size, gas flow rate, and gas concentration on the response time were analyzed. During the research, Bromotrifluoromethane (CBrF₃) was used. The effect of the sampling tube was investigated with different lengths of 1, 2, 3, 4 and 5 m (5 mm pipe diameter) and different pipe diameters of 3, 4, 5, 6 and 8 mm (3 m length). The effect of gas flow rate was analyzed by changing the throat diameter of the critical nozzle between 0.5, 0.682, 0.75, 0.8, 0.84 and 0.88 mm. The effect of gas concentration on the response time was studied over the concentration range of 0-25%. The results showed that the response time increased with the increase of both the length and the diameter of the sampling pipe; the effect of length on the response time was linear, but the effect of diameter was exponential. It was also found that as the throat diameter of the critical nozzle increased, the response time was greatly reduced; in other words, the gas flow rate has a great influence on the response time. For the effect of gas concentration, the response time increased with increasing CBrF₃ concentration, and the slope of the curve decreased.

Keywords: aspirating concentration detection, fire extinguishing, gaseous agent, response time

Procedia PDF Downloads 244
116 Usability Evaluation of a Self-Report Mobile App for COVID-19 Symptoms: Supporting Health Monitoring in the Work Context

Authors: Kevin Montanez, Patricia Garcia

Abstract:

The confinement and restrictions adopted to avoid an exponential spread of COVID-19 have negatively impacted the Peruvian economy. In this context, industries offering essential products could continue operating, but they had to follow safety protocols and implement strategies to ensure employee health. In view of increasing internet access and mobile phone ownership, "Alerta Temprana", a mobile app, was developed to self-report COVID-19 symptoms in the work context. In this study, the usability of the mobile app "Alerta Temprana" was evaluated from the perspective of health monitors and workers. In addition to reporting the metrics related to the usability of the application, the utility of the system was also evaluated from the monitors' perspective. In this descriptive study, the participants used the mobile app for two months. Afterwards, the System Usability Scale (SUS) questionnaire was answered by the workers and monitors. A usefulness questionnaire with open questions was also used for the monitors. The data related to the use of the application were collected over one month. Furthermore, descriptive statistics and bivariate analysis were used. The workers rated the application as good (70.39). In the case of the monitors, usability was excellent (83.0). The most important feature for the monitors was the emails generated by the application. The average interaction per user was 30 seconds, and a total of 6,172 self-reports were sent. Finally, a statistically significant association was found between the acceptability scale and the work area. The results of this study suggest that Alerta Temprana has the potential to be used for surveillance and health monitoring in any face-to-face work context. Participants reported a high degree of ease of use. However, from the perspective of the workers, SUS cannot diagnose usability issues, and we suggest using another standard usability questionnaire to improve "Alerta Temprana" for future use.
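
For reference, the sketch below shows how a single respondent's SUS score is computed from the ten Likert items (odd items score answer - 1, even items score 5 - answer, and the sum is multiplied by 2.5); the example answers are hypothetical, and the 70.39 and 83.0 values reported above are averages of such per-respondent scores.

def sus_score(responses):
    """Compute the System Usability Scale score (0-100) for one respondent.

    `responses` is a list of 10 Likert answers (1-5), in questionnaire order.
    Odd-numbered items are positively worded (score = answer - 1) and
    even-numbered items are negatively worded (score = 5 - answer); the sum
    of the item scores is multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, answer in enumerate(responses, start=1):
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5

# Hypothetical answers from one worker.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5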

Keywords: public health in informatics, mobile app, usability, self-report

Procedia PDF Downloads 71