Search results for: multivariate time series data
36529 A Procedure for Post-Earthquake Damage Estimation Based on Detection of High-Frequency Transients
Authors: Aleksandar Zhelyazkov, Daniele Zonta, Helmut Wenzel, Peter Furtner
Abstract:
In this research, structural health monitoring is considered for addressing the critical issue of post-earthquake damage detection. A non-standard approach to damage detection via acoustic emission is presented: acoustic emissions are monitored in the low-frequency range (up to 120 Hz), and such emissions are termed high-frequency transients. Further, a damage indicator defined as the Time-Ratio Damage Indicator is introduced. The indicator relies on time-instance measurements of damage initiation and deformation peaks, and based on these measurements, a procedure for estimating the maximum drift ratio is proposed. Monitoring data are taken from a shaking-table test of a full-scale reinforced concrete bridge pier. Damage to the experimental column is successfully detected, and the proposed damage indicator is calculated.

Keywords: acoustic emission, damage detection, shaking table test, structural health monitoring
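The abstract does not give the formula for the Time-Ratio Damage Indicator; a minimal sketch, assuming the indicator is simply the ratio of the damage-initiation time instance to the deformation-peak time instance (the function name and this definition are hypothetical, not the authors'):

```python
def time_ratio_damage_indicator(t_initiation, t_peak):
    """Hypothetical Time-Ratio Damage Indicator: ratio of the time instance
    of damage initiation (first detected transient) to the time instance of
    the deformation peak. The exact definition is an assumption here."""
    if t_peak <= 0:
        raise ValueError("peak time must be positive")
    return t_initiation / t_peak

# Example: transient detected 1.2 s into the record, drift peak at 3.0 s
tr = time_ratio_damage_indicator(1.2, 3.0)  # 0.4
```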
Procedia PDF Downloads 231

36528 Regional Changes under Extreme Meteorological Events
Authors: Renalda El Samra, Elie Bou-Zeid, Hamza Kunhu Bangalath, Georgiy Stenchikov, Mutasem El Fadel
Abstract:
The regional-scale impact of climate change over complex terrain was examined through high-resolution dynamic downscaling with the Weather Research and Forecasting (WRF) model, using initial and boundary conditions from a High-Resolution Atmospheric Model (HiRAM). The analysis covered the eastern Mediterranean, with a focus on Lebanon, whose challenging complex topography magnifies the effect of orographic precipitation. Four year-long WRF simulations, selected based on HiRAM time series, were performed to generate future climate projections of extreme temperature and precipitation over the study area under Representative Concentration Pathway (RCP) 4.5. One past WRF simulation year, 2008, was selected as a baseline to capture dry extremes of the system. The results indicate that the study area might be exposed to a temperature increase between 1.0 and 3.0 °C in summer mean values by 2050, in comparison to 2008. For extreme years, the decrease in average annual precipitation may exceed 50% at certain locations in comparison to 2008.

Keywords: HiRAM, regional climate modeling, WRF, Representative Concentration Pathway (RCP)
Procedia PDF Downloads 397

36527 Data-driven Decision-Making in Digital Entrepreneurship
Authors: Abeba Nigussie Turi, Xiangming Samuel Li
Abstract:
Data-driven business models are more typical of established businesses than of early-stage startups striving to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers such as poor data access and technical and financial constraints. The startup DDDM framework proposed in this paper is novel in encompassing startup data analytics enablers and metrics aligned with startups' business models, ranging from customer-centric product development to servitization, the future of modern digital entrepreneurship.

Keywords: startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship
Procedia PDF Downloads 329

36526 Efficacy of TiO₂ in the Removal of an Acid Dye by Photo Catalytic Degradation
Authors: Laila Mahtout, Kerami Ahmed, Rabhi Souhila
Abstract:
The objective of this work is to reduce the environmental impact of an acid dye, Eriochrome Black T (NET), using photocatalytic degradation in the presence of a previously characterized semiconductor powder (TiO₂). A series of tests was carried out to demonstrate the influence of certain parameters on the degree of dye degradation by titanium dioxide under UV irradiation, such as contact time, powder mass, and solution pH. X-ray diffraction analysis of the powder showed that the anatase structure is predominant, with the rutile phase represented by peaks of low intensity. The bands corresponding to the anatase and rutile forms, along with other chemical functions, were detected by Fourier Transform Infrared spectroscopy. Photodegradation of NET by TiO₂ gives encouraging results. The study of photodegradation at different dye concentrations showed that lower concentrations give better removal rates. The degree of dye degradation increases with increasing pH, reaching its maximum at pH = 9. The TiO₂ dose giving the highest removal rate is 1.2 g/L. Thermal treatment of TiO₂ with the addition of CuO at contents of 5%, 10%, and 15% gives better degradation of the NET dye, with the highest removal observed at a CuO content of 15%.

Keywords: acid dye, ultraviolet rays, degradation, photocatalysis
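The removal rates discussed above are conventionally computed as the percentage drop from the initial dye concentration; a minimal sketch (the concentration values are illustrative, not from the study):

```python
def removal_efficiency(c0, c_t):
    """Percent dye removal from initial (c0) and residual (c_t) concentration."""
    return 100.0 * (c0 - c_t) / c0

# e.g. 50 mg/L of dye reduced to 7.5 mg/L after irradiation
eff = removal_efficiency(50.0, 7.5)  # 85.0 %
```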
Procedia PDF Downloads 194

36525 Chat-Based Online Counseling for Enhancing Wellness of Undergraduates with Emotional Crisis Tendency
Authors: Arunya Tuicomepee
Abstract:
During the past two decades, there have been increasing numbers of studies on online counseling, especially among adolescents who are familiar with the online world. This channel enables easier access to young people who may not be ready for face-to-face service, possibly because they are uneasy revealing personal problems to a stranger, feel their problems are shameful, or need to protect their image. In particular, teenagers prone to suicide or despair, who tend to keep things to themselves or isolate themselves from society, usually prefer services that require no face-to-face encounter and preserve their anonymity, such as online services. This study examined the effectiveness of chat-based online counseling for enhancing the wellness of undergraduates with emotional crisis tendency. An experimental pretest-posttest control-group design was employed. Participants were 47 undergraduates (10 males and 37 females) with high emotional crisis tendency, randomly assigned to an experimental group (24 students) and a control group (23 students). Participants in the experimental group received four 60-minute sessions of individual chat-based online counseling led by a counselor; those in the control group received no counseling sessions. Instruments were the Emotional Crisis Scale and the Wellness Scales. A two-way mixed-design multivariate analysis of variance was used for data analysis. Findings revealed that posttest wellness scores in the experimental group were higher than those in the control group, while posttest emotional crisis tendency scores in the experimental group were lower. Hence, this study suggests that chat-based online counseling can become a helping source that more adolescents will recognize and turn to in the future, and one that deserves greater attention.

Keywords: chat-based online counseling, emotional crisis, undergraduate student, wellness
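The study's analysis is a two-way mixed-design MANOVA; as a far simpler stand-in, a single posttest group contrast can be summarized with a standardized mean difference (Cohen's d). The scores below are hypothetical, not the study's data:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Illustrative posttest wellness scores (hypothetical numbers)
experimental = [72, 75, 78, 74, 76]
control = [65, 68, 66, 70, 67]
d = cohens_d(experimental, control)   # positive: experimental group scored higher
```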
Procedia PDF Downloads 242

36524 Geoelectric Survey for Groundwater Potential in Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria
Authors: Ibrahim Mohammed, Suleiman Taofiq, Muhammad Naziru Yahya
Abstract:
Geoelectrical measurements using the Schlumberger Vertical Electrical Sounding (VES) method were carried out in Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria, with the aim of determining the groundwater potential of the area. Twelve (12) VES datasets were collected using a Terrameter (ABEM SAS 300c) and analyzed with computer software (IPI2win), which gives an automatic interpretation of the apparent resistivity. The interpreted VES data were used to characterize three to five geoelectric layers, from which the aquifer units were delineated. Data analysis indicated that the water-bearing formation exists in the third and fourth layers, with resistivity ranges of 312 to 767 Ωm and 9.51 to 681 Ωm, respectively. The thickness of the formation ranges from 14.7 to 41.8 m, and its depth from 8.22 to 53.7 m. Based on these results, five (5) VES stations, A4, A5, A6, B1, and B2, were recommended as the most viable locations for groundwater exploration in the study area. Across the entire area, the water-bearing formation occurred at a maximum depth of 53.7 m at the time of this survey.

Keywords: aquifer, depth, groundwater, resistivity, Schlumberger
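Apparent resistivity for a Schlumberger array is obtained by scaling the measured voltage-to-current ratio by the array's geometric factor; a sketch using the standard formula (the spacings and readings are illustrative, not survey values):

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn_half, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array.
    ab_half: half current-electrode spacing AB/2 (m)
    mn_half: half potential-electrode spacing MN/2 (m)
    delta_v: measured potential difference (V); current: injected current (A)"""
    # geometric factor K = pi * ((AB/2)^2 - (MN/2)^2) / MN
    k = math.pi * (ab_half ** 2 - mn_half ** 2) / (2.0 * mn_half)
    return k * delta_v / current

# AB/2 = 100 m, MN/2 = 10 m, 50 mV at 0.5 A
rho_a = schlumberger_apparent_resistivity(100.0, 10.0, 0.05, 0.5)  # ~155 ohm-m
```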
Procedia PDF Downloads 166

36523 Assessment of the Impact of Traffic Safety Policy in Barcelona, 2010-2019
Authors: Lluís Bermúdez, Isabel Morillo
Abstract:
Road safety requires a determined and explicit policy to reduce accidents. In the city of Barcelona, the Local Road Safety Plan 2013-2018, in line with the framework established at the European and state levels, specifies a series of preventive, corrective, and technical measures with the priority objective of reducing the number of serious injuries and fatalities. In this work, based on data from the accidents managed by the local police during the period 2010-2019, an analysis is carried out to verify whether, and to what extent, the measures established in the Plan reduced the accident rate. The analysis focuses on the type of accident and the types of vehicles involved. Different count regression models were fitted, from which it can be deduced that the number of serious and fatal casualties in accidents in the city of Barcelona has been reduced as a result of the measures approved by the authorities.

Keywords: accident reduction, count regression models, road safety, urban traffic
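A minimal illustration of count modelling in this spirit: Poisson rates fitted to yearly casualty counts before and after a plan, compared with a likelihood-ratio statistic. The counts are invented for illustration, and the study's actual count regressions are richer (covariates by accident and vehicle type):

```python
import math

def poisson_loglik(counts, lam):
    """Poisson log-likelihood of observed counts at rate lam."""
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

# Hypothetical yearly counts of serious/fatal casualties before and after a plan
before = [30, 28, 33, 31]
after = [24, 22, 25, 21, 23]
lam_before = sum(before) / len(before)                  # MLE rate: 30.5
lam_after = sum(after) / len(after)                     # MLE rate: 23.0
lam_pooled = sum(before + after) / len(before + after)  # single-rate MLE
# Likelihood-ratio statistic for "one common rate" vs "two rates" (~chi-square, 1 df)
lrt = 2.0 * (poisson_loglik(before, lam_before) + poisson_loglik(after, lam_after)
             - poisson_loglik(before + after, lam_pooled))
```

Here the statistic exceeds the 3.84 chi-square cutoff at the 5% level, suggesting a genuine drop in the rate.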
Procedia PDF Downloads 133

36522 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
Time-domain electromagnetic (TDEM), or transient electromagnetic (TEM), soundings are a common geophysical technique for mapping subsurface geoelectrical structures, for extensive hydrogeological research, and for engineering and environmental geophysics applications. A large loop TEM system consists of a large transmitter loop for energizing the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, data can be acquired with a large loop source in one of several configurations: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of their EM field expressions compared with the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping contaminated groundwater caused by hazardous waste, and for determining the thickness of the permafrost layer. Because no proper analytical expression exists for the TEM response over a layered earth model for the large loop TEM system, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed to the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain; accordingly, the forward-calculation scheme in NLSTCI was modified with EMLCLLER to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine transform
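The frequency-to-time conversion described above can be sketched as a truncated numerical Fourier cosine transform, checked against a known transform pair; this toy kernel stands in for the actual layered-earth frequency response:

```python
import math

def cosine_transform(f_omega, t, omega_max=200.0, n=200_000):
    """Numerically evaluate g(t) = integral_0^inf f(w) * cos(w*t) dw by the
    trapezoidal rule, truncating the integral at omega_max."""
    h = omega_max / n
    total = 0.5 * (f_omega(0.0) + f_omega(omega_max) * math.cos(omega_max * t))
    for i in range(1, n):
        w = i * h
        total += f_omega(w) * math.cos(w * t)
    return total * h

# Check against a known pair: f(w) = 1/(1+w^2)  ->  g(t) = (pi/2) * exp(-t)
g1 = cosine_transform(lambda w: 1.0 / (1.0 + w * w), t=1.0)
exact = 0.5 * math.pi * math.exp(-1.0)
```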
Procedia PDF Downloads 92

36521 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism
Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape
Abstract:
Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modeling was applied using machine learning algorithms to analyze motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15%, compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders
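Gait variability of the kind reported above is often quantified as the coefficient of variation of stride times; a sketch with hypothetical stride-time samples (not the study's data):

```python
import math

def gait_variability(stride_times):
    """Coefficient of variation (%) of stride times, a common gait-variability index."""
    n = len(stride_times)
    mean = sum(stride_times) / n
    sd = math.sqrt(sum((t - mean) ** 2 for t in stride_times) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical stride times (s) extracted from wearable inertial data
control = [1.00, 1.02, 0.99, 1.01, 1.00]
asd = [1.20, 1.35, 1.10, 1.40, 1.25]   # longer and more variable strides
```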
Procedia PDF Downloads 24

36520 Concept, Design and Implementation of Power System Component Simulator Based on Thyristor Controlled Transformer and Power Converter
Authors: B. Kędra, R. Małkowski
Abstract:
This paper presents the Power System Component Simulator, a device designed for the LINTE^2 laboratory of Gdansk University of Technology in Poland. We first provide introductory information on the simulator and its capabilities, then present the concept of the unit. Requirements for the unit are described, and the proposed and implemented functions are listed. Implementation details are given: the hardware structure is presented and described, along with the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features employed. A list and description of all measurements is provided, and the potential for modifying the laboratory setup is evaluated. Lastly, the results of experiments performed using the Power System Component Simulator are presented, including simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, and time characteristics of a group of different load units in a chosen area.

Keywords: power converter, Simulink Real-Time, Matlab, load, tap controller
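Under-frequency load shedding of the kind simulated above is typically staged by frequency thresholds; a sketch with illustrative thresholds and shed fractions (not the LINTE^2 settings):

```python
def ufls_shed_fraction(frequency_hz,
                       stages=((49.0, 0.10), (48.7, 0.10), (48.4, 0.15))):
    """Cumulative fraction of load to shed at a measured frequency.
    Each stage is (threshold_hz, fraction); values here are illustrative."""
    return sum(frac for thresh, frac in stages if frequency_hz <= thresh)

# Nominal 50 Hz system: no shedding at 50 Hz, first stage trips below 49 Hz
nominal = ufls_shed_fraction(50.0)   # 0.0
stage1 = ufls_shed_fraction(48.9)    # 0.10
deep_dip = ufls_shed_fraction(48.3)  # all three stages, 0.35
```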
Procedia PDF Downloads 242

36519 Considerations for the Use of High Intensity Interval Training in Secondary Physical Education
Authors: Amy Stringer, Resa Chandler
Abstract:
High Intensity Interval Training (HIIT) involves a 3-10-minute circuit of various exercises and is a viable alternative to a traditional cardiovascular and strength training regimen. Research suggests that measures of health-related fitness can be maintained or even improved with this training method. After conducting a 6-week HIIT research study with 10- to 14-year-old children, considerations for using a daily HIIT workout are presented. Is the use of HIIT with children a reasonable option for physical education programs? The benefits and challenges of this type of intervention are identified. This study is significant in that achieving fitness gains in a small amount of daily class time is an attractive concept, especially for physical education teachers, who often lack the class time necessary to accomplish all of their curricular goals. The basic methodology had students participate in a circuit of exercises for 7-10 minutes at 80-95% of maximum heart rate, as measured by heart rate monitors. Pre- and post-fitness-test data were collected for cardiovascular endurance, muscular endurance, and body composition. Research notes as well as commentary by the participating teachers and researchers contributed to the cost-benefit analysis. The major findings are that HIIT has limited effectiveness but is a good choice for limited class times. Students' confidence in their ability to complete the exercises and visible heart rate data were significant factors in the success of the study. The effective use of technology promoting a positive audience effect during the display of heart rate data was more important at the beginning of the study than at the end. Student and teacher 'buy-in' and motivation, the variety of activities in the circuit, and students' fitness levels at the beginning of the study also influenced the fitness outcomes. Concluding statement: High intensity interval training can be used effectively in a secondary physical education program. It is not a 'magic bullet' that produces health-related fitness outcomes in every student, but it is an effective tool for enhancing student fitness in limited time and contributing to the goals of the program.

Keywords: cardiovascular fitness, children, high intensity interval training, physical education
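The 80-95% heart-rate band used in the study can be estimated with the common 220-minus-age rule for maximum heart rate; a sketch (the age-based estimate is an assumption here, since the study used heart rate monitors rather than a formula):

```python
def hiit_target_zone(age, low=0.80, high=0.95):
    """Target heart-rate band (bpm) for HIIT using the 220-minus-age estimate
    of maximum heart rate. The formula is a common rule of thumb, not exact."""
    hr_max = 220 - age
    return round(hr_max * low), round(hr_max * high)

zone = hiit_target_zone(12)  # (166, 198) bpm for a 12-year-old
```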
Procedia PDF Downloads 114

36518 Volatility Switching between Two Regimes
Authors: Josip Visković, Josip Arnerić, Ante Rozga
Abstract:
Because volatility in high-frequency data is time-varying and periods of high volatility tend to cluster, the most successful and popular models for time-varying volatility are GARCH-type models. When financial returns exhibit sudden jumps due to structural breaks, standard GARCH models show high volatility persistence, i.e., integrated behaviour of the conditional variance. In such situations, models in which the parameters are allowed to change over time are more appropriate. This paper compares different GARCH models in terms of their ability to describe structural changes in returns caused by the financial crisis at the stock markets of six selected central and east European countries. The empirical analysis demonstrates that the Markov regime-switching GARCH model resolves the problem of excessive persistence and outperforms uni-regime GARCH models in forecasting volatility when sudden switching occurs in response to the financial crisis.

Keywords: central and east European countries, financial crisis, Markov switching GARCH model, transition probabilities
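The volatility clustering described above is what the GARCH(1,1) conditional-variance recursion captures; a single-regime sketch (parameter values are illustrative), where the persistence the abstract refers to is governed by alpha + beta:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    started at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# A large return shock (-3%) raises the next conditional variance
path = garch11_variance([0.01, -0.03, 0.02, 0.0], omega=1e-6, alpha=0.1, beta=0.85)
```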
Procedia PDF Downloads 226

36517 Experimental Investigation of Natural Frequency and Forced Vibration of Euler-Bernoulli Beam under Displacement of Concentrated Mass and Load
Authors: Aref Aasi, Sadegh Mehdi Aghaei, Balaji Panchapakesan
Abstract:
This work evaluates the free and forced vibration of a beam with two end joints subjected to a concentrated moving mass and a load, using the Euler-Bernoulli method. The natural frequency is calculated for different locations of the concentrated mass and load on the beam, and the analytical results are verified against experimental data. The variation of natural frequency with the location of the mass, the effect of the forcing frequency on the vibrational amplitude, and the displacement amplitude versus time are investigated. It is found that as the concentrated mass moves toward the center of the beam, both the natural frequency of the beam and the relative error between experimental and analytical data decrease. There is close agreement between the analytical results and the experimental observations.

Keywords: Euler-Bernoulli beam, natural frequency, forced vibration, experimental setup
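For a simply supported (pinned-pinned) Euler-Bernoulli beam without the added mass, the natural frequencies follow the standard closed form; a sketch with illustrative steel-beam properties (not the experimental specimen's):

```python
import math

def natural_frequencies(E, I, rho, A, L, modes=3):
    """Natural frequencies (Hz) of a simply supported Euler-Bernoulli beam:
    omega_n = (n*pi/L)^2 * sqrt(E*I / (rho*A)), f_n = omega_n / (2*pi)."""
    return [((n * math.pi / L) ** 2) * math.sqrt(E * I / (rho * A)) / (2 * math.pi)
            for n in range(1, modes + 1)]

# Illustrative steel beam: E=210 GPa, I=8.5e-6 m^4, rho=7850 kg/m^3,
# A=3.0e-3 m^2, L=2 m; mode frequencies scale as n^2
freqs = natural_frequencies(210e9, 8.5e-6, 7850.0, 3.0e-3, 2.0)
```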
Procedia PDF Downloads 274

36516 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To gain insight into the complex dynamics of this disease and study its drivers, a comprehensive model is needed that is capable of both robust forecasting and insightful inference of drivers while capturing the co-circulation of several virus strains. However, existing studies mostly focus on one aspect at a time and do not carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and do not integrate all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, together with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions of Sri Lanka at the weekly time scale. Ablation studies determined the lag effects of time-varying climate factors, allowing delays of up to 12 weeks. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about drivers while adjusting for immunity dynamics and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicates sensitivity to the vegetation index, both maximum and average. Predictions on test data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that combine biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential both for inference of drivers in complex disease dynamics and for robust forecasting models.

Keywords: compartmental model, climate, dengue, machine learning, social-economic
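The human-side SEIR dynamics can be sketched as a forward-Euler update of compartment fractions; the parameter values are illustrative, and the study's model (vector SEI coupling, covariates, multiple strains) is far richer:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One Euler step of an SEIR compartment model on population fractions.
    beta: transmission rate, sigma: 1/incubation period, gamma: recovery rate."""
    n = s + e + i + r
    new_inf = beta * s * i / n
    ds = -new_inf
    de = new_inf - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# Illustrative run: 5-day incubation, 7-day infectious period, daily steps
state = (0.99, 0.005, 0.005, 0.0)
for _ in range(10):
    state = seir_step(*state, beta=0.4, sigma=1 / 5, gamma=1 / 7)
```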
Procedia PDF Downloads 84

36515 Mapping of Geological Structures Using Aerial Photography
Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash
Abstract:
Rapid growth in data acquisition technologies through drones has led to advances in, and interest in, collecting high-resolution images of geological fields. While such platforms are advantageous in capturing large volumes of data in short flights, a number of challenges have to be overcome for efficient analysis of this data, especially during acquisition, image interpretation, and processing. We introduce a method that allows effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. Satellite images are not used because of their inadequate resolution, outdated acquisition (images may be a year old), limited availability, difficulty of capturing the exact scene, and poor night-time imaging. The method combines advanced automated image interpretation with human data interaction to model structures. First, geological structures are detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified from a digital elevation model; dip and dip direction can be calculated from this information. The structural map is generated by a specified methodology: choosing the appropriate camera and camera mounting system, designing the UAV (based on the area and application), addressing challenges of airborne systems such as errors in image orientation and payload limits, mosaicing, georeferencing and registering the different images, and applying the DEM. The paper shows the potential of this method for accurate and efficient modeling of geological structures, particularly at remote, inaccessible, and hazardous sites.

Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures
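Dip and dip direction of the kind mentioned above can be computed from the normal of a plane through three DEM-derived (east, north, elevation) points; a minimal sketch:

```python
import math

def dip_from_points(p1, p2, p3):
    """Dip angle (deg) and dip direction (deg clockwise from north) of the plane
    through three (east, north, elevation) points, e.g. picked from a DEM."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    nx = u[1] * v[2] - u[2] * v[1]   # normal = u x v
    ny = u[2] * v[0] - u[0] * v[2]
    nz = u[0] * v[1] - u[1] * v[0]
    if nz < 0:                        # orient the normal upward
        nx, ny, nz = -nx, -ny, -nz
    dip = math.degrees(math.atan2(math.hypot(nx, ny), nz))
    # horizontal projection of the upward normal points down-dip
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip, dip_dir

# Plane losing 1 m of elevation per metre eastward: dips 45 degrees due east
dip_deg, dip_az = dip_from_points((0, 0, 0), (1, 0, -1), (0, 1, 0))
```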
Procedia PDF Downloads 686

36514 Development of Automatic Laser Scanning Measurement Instrument
Authors: Chien-Hung Liu, Yu-Fen Chen
Abstract:
This study used a triangular laser probe and a three-axis mobile platform for surface measurement, programmed the system, and applied it to real-time analysis and statistics of the measured data. A system integration program was designed around this structure: the triangular laser probe performs scattering- or reflection-based non-contact measurement, the captured signals are transferred to the computer through RS-232, and RS-485 controls the three-axis platform for wide-range measurement. The data captured by the laser probe are formed into a 3D surface. An optical measurement application program was constructed using a visual programming language: signals are transmitted to the computer through RS-232/RS-485 and then stored and recorded in the graphic interface in real time. The program analyzes the various messages, performs appropriate graphing and data processing, provides users with friendly graphic interfaces and monitoring of the data-processing state, and graphically indicates whether the current data are normal. The major functions of the measurement system developed in this study are thickness measurement, statistical process control (SPC), surface smoothness analysis, and analytical calculation of trend lines; a results report can be generated and printed promptly. The study successfully measured different heights and surfaces, performed on-line data analysis and processing effectively, and developed a man-machine interface for users to operate.

Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW
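The SPC function mentioned above can be sketched as Shewhart individuals-chart limits (mean plus or minus three standard deviations) on probe readings; the thickness values are illustrative:

```python
import math

def spc_limits(measurements):
    """Shewhart individuals-chart limits: (LCL, center, UCL) = mean -/+ 3 SD."""
    n = len(measurements)
    mean = sum(measurements) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
    return mean - 3 * sd, mean, mean + 3 * sd

# Illustrative thickness readings (mm) from the laser probe
readings = [5.01, 4.99, 5.02, 5.00, 4.98, 5.00]
lcl, center, ucl = spc_limits(readings)
# A later 5.12 mm reading falls outside the limits and should be flagged
out_of_control = [x for x in [5.01, 5.12] if not lcl <= x <= ucl]
```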
Procedia PDF Downloads 360

36513 Need for Privacy in the Technological Era: An Analysis in the Indian Perspective
Authors: Amrashaa Singh
Abstract:
In the digital age and the vast cyberspace, data protection and privacy have become major issues of this technological era. There was a time when social media and online shopping websites were treated as a blessing. But now the tables have turned, and people have started to look at them with suspicion: they are becoming aware of the privacy implications and do not feel as safe as they once did. When Edward Snowden informed the world about the snooping carried out by United States security agencies, the picture became clear. After the Cambridge Analytica case, where the data of Facebook users was harvested without their consent, doubts arose in people's minds about how safe they actually are. In India, the Pegasus spyware case also raised many concerns: the spyware was used to snoop on human rights activists and lawyers, and the company that developed it claims it sells only to governments. This paper deals with privacy concerns from the Indian perspective using an analytical methodology. The Supreme Court of India recently declared the right to privacy a Fundamental Right under Article 21 of the Constitution of India, and the Government is working on the Data Protection Bill. Notably, India is still a developing country, and with the bill, the government aims at data localization; yet many doubt whether the Government would itself snoop on individuals' data, and the bill looks more like an attempt to curb dissenters 'lawfully'. The focus of the paper is on these issues in India in light of the European Union (EU) General Data Protection Regulation (GDPR), on which the Indian Data Protection Bill is said to be loosely based. How helpful these laws would actually be is another concern, since the economic and social conditions in the two jurisdictions are very different. The paper discusses these concerns, the intentions behind the bill, and how nations can act together to draft common regulations so that there is some uniformity in the laws and their application.

Keywords: Article 21, data protection, dissent, fundamental right, India, privacy
Procedia PDF Downloads 114

36512 Finite Element Approach to Evaluate Time Dependent Shear Behavior of Connections in Hybrid Steel-PC Girder under Sustained Loading
Authors: Mohammad Najmol Haque, Takeshi Maki, Jun Sasaki
Abstract:
Headed stud shear connections are widely used in the junction or embedded zone of hybrid girders to achieve full composite action with continuity, sustaining steel-concrete interfacial tensile and shear forces. In Japan, hybrid girders are designed to Japan Road Association (JRA) specifications, which assume a much lower stud capacity than the American Institute of Steel Construction (AISC) specifications, the Japan Society of Civil Engineers (JSCE) specifications, or the Eurocode. Because such a low design shear strength is used for the connections, the time-dependent shear behavior under sustained external loading is not considered in design and has not been fully studied. In this study, a finite element approach was used to evaluate the time-dependent shear behavior of the headed studs used as connections at the junction. The study clarified how sustained loading distinctly changed the interfacial shear of the connections over time, which was sensitive to the loading history, the positions of the flanges, neighboring studs, the positions of the prestress bar and reinforcing bar, the concrete strength, and other factors; it also identified a shear influence area. Stud strength was confirmed through push-out tests. The outcomes may provide an important basis and reference data for designing connections of hybrid girders with enhanced stud capacity, with due consideration of their long-term shear behavior.

Keywords: finite element, hybrid girder, shear connections, sustained loading, time dependent behavior
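For context on the stud capacities being compared, the AISC-style nominal stud shear strength takes the textbook form Qn = 0.5·Asc·√(f'c·Ec), capped by the stud's tensile capacity; a sketch neglecting group/position factors, with illustrative stud and material values:

```python
import math

def stud_shear_capacity(asc_mm2, fc_mpa, ec_mpa, fu_mpa):
    """Nominal shear strength of one headed stud (N), AISC-style textbook form:
    Qn = 0.5 * Asc * sqrt(f'c * Ec), capped at Asc * Fu.
    Group/position reduction factors are neglected in this sketch."""
    qn = 0.5 * asc_mm2 * math.sqrt(fc_mpa * ec_mpa)
    return min(qn, asc_mm2 * fu_mpa)

# 19 mm stud (Asc ~ 284 mm^2), f'c = 30 MPa, Ec ~ 25 GPa, Fu = 450 MPa
qn = stud_shear_capacity(284.0, 30.0, 25000.0, 450.0)  # ~123 kN
```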
Procedia PDF Downloads 135

36511 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies
Authors: Pragati Sirohi, Vivek Singh Rana
Abstract:
A brand is a name, term, sign, symbol, design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low-margin business in which volumes hold the key to success; the industry therefore places strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing them. Brand loyalty is fickle; companies know this, which is why they work relentlessly at brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It is hypothesized that after liberalization, Indian companies took up the challenge of globalization and that some of them now give stiff competition to MNCs; that MNCs have a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996-2014 for the 8 selected companies, chosen on the basis of their large market share, brand equity, and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand values of the selected FMCG companies are estimated as the excess of a company's market capitalization over its net worth, and brand value indices are calculated.
Correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies compete strongly: ITC is the outstanding leader in terms of market capitalization and brand value, and Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditures are highest for HUL, followed by ITC, Colgate, and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors affect it. Moreover, brand values of FMCG companies in India are decreasing over the years, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they must put consistent, proactive, and relentless effort into their brand-building process. Brands need focus and consistency; brand longevity without innovation earns brand respect but does not create brand value.
Keywords: brand value, FMCG, market capitalization, net worth
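The brand-value proxy used in the study (market capitalization minus net worth) and the semi-log trend-growth regression can be sketched as follows; the figures are purely illustrative, not the CMIE Prowess data.

```python
import math

# Illustrative figures (arbitrary currency units), NOT the CMIE data.
years = [2010, 2011, 2012, 2013, 2014]
market_cap = [1200.0, 1350.0, 1500.0, 1620.0, 1800.0]
net_worth = [400.0, 430.0, 460.0, 500.0, 540.0]

# Brand value proxy: excess of market capitalization over net worth.
brand_value = [mc - nw for mc, nw in zip(market_cap, net_worth)]

def trend_growth_rate(values):
    """Trend growth rate: slope of ln(value) regressed on time (semi-log OLS)."""
    n = len(values)
    t = list(range(n))
    y = [math.log(v) for v in values]
    t_bar, y_bar = sum(t) / n, sum(y) / n
    return sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) / \
           sum((ti - t_bar) ** 2 for ti in t)

print(f"Brand values: {brand_value}")
print(f"Trend growth rate: {trend_growth_rate(brand_value):.3f} per year")
```

A correlation of these brand values with advertising spend (as in the study) would then be a second regression on the same series.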
Procedia PDF Downloads 356

36510 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and replaced by the assumption that the law of the data-generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can therefore be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only independence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness.
Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching across a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and the modeling of limited understanding and learning behavior in economics.
Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
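The convergence/representativeness tradeoff can be illustrated with a deliberately stylized error model; this is not the paper's semimartingale construction, and the quadratic drift penalty below is an assumption made purely for illustration.

```python
# Stylized tradeoff: more data points n shrink the estimation variance
# (~ sigma^2 / n) but pull in older, less representative observations
# (penalty assumed ~ (drift * n)^2). Neither term is from the paper.
def total_error(n, sigma2=1.0, drift=0.01):
    return sigma2 / n + (drift * n) ** 2

# Pick the lookback window size that minimizes the combined error.
candidates = range(1, 201)
n_star = min(candidates, key=total_error)
print(f"optimal window: {n_star} observations")
```

The optimum moves with the drift: a faster-changing law shrinks the optimal window, which is the qualitative message of the paper.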
Procedia PDF Downloads 148

36509 Power Ultrasound Application on Convective Drying of Banana (Musa paradisiaca), Mango (Mangifera indica L.) and Guava (Psidium guajava L.)
Authors: Erika K. Méndez, Carlos E. Orrego, Diana L. Manrique, Juan D. Gonzalez, Doménica Vallejo
Abstract:
High moisture content in fruits generates post-harvest problems such as mechanical, biochemical, microbial, and physical losses. Dehydration, which reduces the water activity of the fruit, is a common option for overcoming such losses. However, regular hot-air drying can negatively affect the quality properties of the fruit due to the long residence time at high temperature. Power ultrasound (US) applied during convective drying has been used as a novel method to enhance the drying rate and, consequently, to decrease drying time. In the present study, a new approach was tested to evaluate the effect of US on the drying time, final antioxidant activity (AA), and total polyphenol content (TPC) of banana slices (BS), mango slices (MS), and guava slices (GS). Drying kinetics were also studied with nine different models, from which water effective diffusivities (Deff) were calculated, with or without shrinkage corrections. Compared with the corresponding control tests, US-assisted drying reduced drying time by 16.23-30.19%, 11.34-32.73%, and 19.25-47.51% for the MS, BS, and GS, respectively. Considering shrinkage effects, calculated Deff values ranged from 1.67×10⁻¹⁰ to 3.18×10⁻¹⁰ m²/s, from 3.96×10⁻¹⁰ to 5.57×10⁻¹⁰ m²/s, and from 4.61×10⁻¹⁰ to 8.16×10⁻¹⁰ m²/s for the BS, MS, and GS samples, respectively. Reductions of TPC and AA (as DPPH) relative to the content of the fresh fruit were observed in all drying assays.
Keywords: banana, drying, effective diffusivity, guava, mango, ultrasound
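Effective-diffusivity estimates of this kind are commonly obtained from the slope of ln(MR) versus time under the first-term slab solution of Fick's second law; the sketch below uses synthetic data (the slice half-thickness and the true Deff are assumed values, not the paper's measurements).

```python
import math

# First-term slab solution of Fick's 2nd law for the moisture ratio:
#   MR = (8/pi^2) * exp(-pi^2 * Deff * t / (4 * L^2))
L = 2.5e-3           # half-thickness of the slice (m), assumed
Deff_true = 5.0e-10  # m^2/s, within the paper's reported range

times = [k * 600.0 for k in range(1, 11)]  # every 10 min, in seconds
mr = [(8 / math.pi**2) * math.exp(-math.pi**2 * Deff_true * t / (4 * L**2))
      for t in times]

# Linear regression of ln(MR) on t; slope = -pi^2 * Deff / (4 * L^2)
n = len(times)
y = [math.log(m) for m in mr]
t_bar, y_bar = sum(times) / n, sum(y) / n
slope = sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y)) / \
        sum((t - t_bar) ** 2 for t in times)
Deff_est = -slope * 4 * L**2 / math.pi**2
print(f"Estimated Deff = {Deff_est:.2e} m^2/s")
```

With real drying curves the regression is applied to the measured moisture ratios, and a shrinkage correction replaces L with a time-dependent thickness.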
Procedia PDF Downloads 535

36508 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, a person's heart condition can be identified in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for distinguishing normal signals from abnormal ones. The data cover both genders, recording times vary from several seconds to several minutes, and all records are labeled normal or abnormal. Because of the limited accuracy and duration of the ECG signal, and because the signal in some diseases resembles the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and R-wave extraction by the Pan and Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. A new idea was then presented: in addition to the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, were applied to the distinctive features to classify normal signals from abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP and SVM classifiers was 0.893 and 0.947, respectively. The results also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but its accuracy over time is limited and some of its information is hidden from the viewpoint of physicians; the intelligent system proposed in this paper can therefore help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
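A minimal sketch of the kind of time-domain and return-map (Poincaré) features that can be computed from R-R intervals; the intervals below are synthetic, and the full pipeline described above (Kalman filtering, Pan-Tompkins R-peak detection, MLP/SVM classification) is not reproduced.

```python
import math

# Synthetic R-R intervals in seconds (NOT PhysioNet data).
rr = [0.80, 0.82, 0.79, 0.85, 0.81, 0.78, 0.83, 0.80]

mean_rr = sum(rr) / len(rr)
# SDNN: standard deviation of R-R intervals (overall variability).
sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
# RMSSD: root mean square of successive differences (short-term variability).
diffs = [b - a for a, b in zip(rr, rr[1:])]
rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
# SD1: spread perpendicular to the identity line of the return map
# rr[i+1] vs rr[i]; equals RMSSD / sqrt(2).
sd1 = math.sqrt(sum(d ** 2 for d in diffs) / (2 * len(diffs)))

print(f"SDNN={sdnn*1000:.1f} ms, RMSSD={rmssd*1000:.1f} ms, SD1={sd1*1000:.1f} ms")
```

In a classifier, features like these (plus nonlinear descriptors) would form the input vector for each labeled recording.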
Procedia PDF Downloads 262

36507 Climate Change Scenario Phenomenon in Malaysia: A Case Study in MADA Area
Authors: Shaidatul Azdawiyah Abdul Talib, Wan Mohd Razi Idris, Liew Ju Neng, Tukimat Lihan, Muhammad Zamir Abdul Rasid
Abstract:
Climate change has received great attention worldwide due to weather impacts that cause extreme events. Rainfall and temperature are crucial weather components associated with climate change. In Malaysia, increasing temperatures and changes in rainfall distribution patterns lead to drought and flood events involving agricultural areas, especially rice fields. The Muda Agricultural Development Authority (MADA) is the largest rice-growing area among the 10 granary areas in Malaysia and has faced floods and droughts in the past due to the changing climate. Changes in rainfall and temperature patterns affect rice yield; trend analysis is therefore important for identifying these changes, as it gives an initial overview for further analysis. Six locations across the MADA area were selected based on the availability of meteorological station (MetMalaysia) data. Historical data (1991 to 2020) collected from MetMalaysia and future climate projections from a multi-model ensemble of CMIP5 climate models (CNRM-CM5, GFDL-CM3, MRI-CGCM3, NorESM1-M and IPSL-CM5A-LR) were analyzed using the Mann-Kendall test to detect time series trends, together with the standardized precipitation anomaly, rainfall anomaly index, precipitation concentration index, and temperature anomaly. Future projection data were analyzed for three periods: early century (2020-2046), middle century (2047-2073), and late century (2074-2099). Results indicate that the MADA area does encounter extremely wet and dry conditions, which led to drought and flood events in the past. The Mann-Kendall (MK) trend test discovered a significant increasing trend (p < 0.05) in annual rainfall (z = 0.40; s = 15.12) and temperature (z = 0.61; s = 0.04) during the historical period.
Similarly, for both the RCP 4.5 and RCP 8.5 scenarios, a significant increasing trend (p < 0.05) was found for rainfall (RCP 4.5: z = 0.15, s = 2.55; RCP 8.5: z = 0.41, s = 8.05) and temperature (RCP 4.5: z = 0.84, s = 0.02; RCP 8.5: z = 0.94, s = 0.05). Under the RCP 4.5 scenario, the average temperature is projected to increase by up to 1.6 °C in the early century, 2.0 °C in the middle century, and 2.4 °C in the late century. In contrast, under the RCP 8.5 scenario, the average temperature is projected to increase by up to 1.8 °C in the early century, 3.1 °C in the middle century, and 4.3 °C in the late century. Drought is projected to occur in 2038 and 2043 (early century); 2052 and 2069 (middle century); and 2095 and 2097 to 2099 (late century) under the RCP 4.5 scenario. Under the RCP 8.5 scenario, drought is projected to occur in 2021, 2031, and 2034 (early century) and in 2069 (middle century); no drought is projected in the late century. This information can be used to analyze the impact of climate change scenarios on rice growth and yield, besides other crops found in the MADA area. Additionally, this study would be helpful for researchers and decision-makers in developing applicable adaptation and mitigation strategies to reduce the impact of climate change.
Keywords: climate projection, drought, flood, rainfall, RCP 4.5, RCP 8.5, temperature
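A minimal Mann-Kendall trend test of the kind applied here can be sketched as follows (no tie-correction term in the variance; the temperature series is invented for illustration, not MADA station data).

```python
import math

def mann_kendall_z(series):
    """Mann-Kendall Z statistic (no tie correction) for a monotonic trend."""
    n = len(series)
    # S: sum of signs of all pairwise forward differences.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# Invented annual mean temperatures (°C), NOT MADA data.
annual_temp = [26.1, 26.0, 26.3, 26.2, 26.4, 26.5, 26.4, 26.7, 26.8, 26.9]
z = mann_kendall_z(annual_temp)
print(f"Z = {z:.2f}  (|Z| > 1.96 implies a significant trend at p < 0.05)")
```

The same routine applied to annual rainfall totals gives the rainfall trend; the Sen's-slope-style `s` values reported above quantify the trend magnitude separately.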
Procedia PDF Downloads 77

36506 Intelligent Electric Vehicle Charging System (IEVCS)
Authors: Prateek Saxena, Sanjeev Singh, Julius Roy
Abstract:
The security of the power distribution grid remains paramount for utility professionals, even as they work to make it more efficient. One of the most serious threats to the system is maintaining transformers, as the load ever increases with the addition of elements like electric vehicles. In this paper, intelligent transformer monitoring and grid management are proposed. The system is engineered to use evolving smart meter data for grid analytics and diagnostics for preventive maintenance. A two-tier architecture integrating hardware and software forms a robust system for the smart grid. The proposal also presents interoperable meter standards for easy integration. Distribution transformer analytics based on real-time data benefit utilities by preventing outages, protecting against revenue loss, improving return on assets, and reducing overall maintenance costs through predictive monitoring.
Keywords: electric vehicle charging, transformer monitoring, data analytics, intelligent grid
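The kind of transformer analytics described can be sketched as aggregating downstream smart-meter readings per transformer and flagging loading near the rating; all identifiers, ratings, and thresholds below are assumptions for illustration, not from the paper.

```python
from collections import defaultdict

# Assumed transformer ratings (kVA) and an assumed 90% loading alert threshold.
RATED_KVA = {"T1": 100.0, "T2": 150.0}
OVERLOAD_FACTOR = 0.9

# (transformer_id, interval demand in kW) from downstream smart meters.
readings = [
    ("T1", 40.0), ("T1", 55.0),                 # T1 total: 95 kW
    ("T2", 60.0), ("T2", 50.0), ("T2", 20.0),   # T2 total: 130 kW
]

# Aggregate concurrent demand per transformer.
load = defaultdict(float)
for tid, kw in readings:
    load[tid] += kw

# Flag transformers loaded above the alert threshold.
alerts = [tid for tid, kw in load.items()
          if kw > OVERLOAD_FACTOR * RATED_KVA[tid]]
print(alerts)
```

In a deployed system this aggregation would run per metering interval, with alerts feeding the preventive-maintenance workflow.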
Procedia PDF Downloads 791

36505 Vehicle Routing Problem with Mixed Fleet of Conventional and Heterogenous Electric Vehicles and Time Dependent Charging Costs
Authors: Ons Sassi, Wahiba Ramdane Cherif-Khettaf, Ammar Oulamara
Abstract:
In this paper, we consider a new real-life Heterogeneous Electric Vehicle Routing Problem with Time-Dependent Charging Costs and a Mixed Fleet (HEVRP-TDMF), in which a set of geographically scattered customers must be served by a mixed fleet composed of a heterogeneous fleet of Electric Vehicles (EVs), with different battery capacities and operating costs, and Conventional Vehicles (CVs). We include the possibility of charging EVs at the available charging stations during the routes in order to serve all customers. Each charging station offers charging with a known charger technology and time-dependent charging costs, and is subject to operating time window constraints. EVs are not necessarily compatible with all available charging technologies, and partial charging is allowed. Intermittent charging at the depot is also allowed, provided that constraints related to the electricity grid are satisfied. The objective is first to minimize the number of employed vehicles and then to minimize the total travel and charging costs. We present a Mixed Integer Programming model and develop a Charging Routing Heuristic and a Local Search Heuristic based on the Inject-Eject routine with three different insertion strategies. All heuristics are tested on real data instances.
Keywords: charging problem, electric vehicle, heuristics, local search, optimization, routing problem
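The flavor of an insertion strategy can be illustrated with a toy cheapest-insertion routine; battery, charging-station, and time-window constraints are omitted, and the coordinates are invented, so this is only a sketch of the routing core, not the paper's heuristics.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

depot = (0.0, 0.0)
customers = [(2.0, 1.0), (5.0, 0.0), (1.0, 4.0)]  # invented locations

route = [depot, depot]  # the route starts and ends at the depot
for c in customers:
    # Cheapest insertion: place each customer where it adds the least distance.
    best_pos = min(
        range(1, len(route)),
        key=lambda i: dist(route[i - 1], c) + dist(c, route[i])
                      - dist(route[i - 1], route[i]),
    )
    route.insert(best_pos, c)

total = sum(dist(a, b) for a, b in zip(route, route[1:]))
print(route, round(total, 2))
```

In the full problem, each candidate insertion would additionally be checked for battery feasibility (possibly injecting a charging-station visit) and time windows before being accepted.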
Procedia PDF Downloads 463

36504 Cryptographic Protocol for Secure Cloud Storage
Authors: Luvisa Kusuma, Panji Yudha Prakasa
Abstract:
Cloud storage, a subservice of infrastructure as a service (IaaS) in cloud computing, is the model of networked storage in which data can be stored on servers. In this paper, we propose a secure cloud storage system consisting of two main components: the client, a user of the cloud storage service, and the server, which provides the cloud storage service. For this system, we propose protocol schemes that guard against security attacks on data transmission. The protocols are a login protocol, an upload data protocol, a download protocol, and a push data protocol, which implement a hybrid cryptographic mechanism based on encrypting data before it is sent to the cloud, so that the cloud storage provider neither knows nor can analyze the user's data, because there is no correspondence between data and user.
Keywords: cloud storage, security, cryptographic protocol, artificial intelligence
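The "encrypt before upload" idea can be sketched as follows. This is a conceptual stand-in only: the SHA-256 keystream below substitutes for a real authenticated cipher (e.g., AES-GCM from a vetted library) purely so the sketch runs on the standard library, none of the names come from the paper, and it must not be used in production.

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (stand-in for a real cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce, ct, tag

def decrypt(key, nonce, ct, tag):
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time comparison
        raise ValueError("ciphertext tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

client_key = os.urandom(32)  # stays on the client; the provider never sees it
nonce, ct, tag = encrypt(client_key, b"quarterly-report.pdf contents")
# Only (nonce, ct, tag) are uploaded, so the provider stores opaque bytes.
assert decrypt(client_key, nonce, ct, tag) == b"quarterly-report.pdf contents"
print("round trip ok")
```

Because the key never leaves the client, the server-side protocols (login, upload, download, push) only ever handle ciphertext, which is the property the paper's design aims for.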
Procedia PDF Downloads 357

36503 Mapping the Pain Trajectory of Breast Cancer Survivors: Results from a Retrospective Chart Review
Authors: Wilfred Elliam
Abstract:
Background: Pain is a prevalent and debilitating symptom among breast cancer patients, impacting their quality of life and overall well-being. The experience of pain in this population is multifaceted, influenced by a combination of disease-related factors, treatment side effects, and individual characteristics. Despite advancements in cancer treatment and pain management, many breast cancer patients continue to suffer from chronic pain, which can persist long after the completion of treatment. Understanding the progression of pain in breast cancer patients over time and identifying its correlates is crucial for effective pain management and supportive care strategies. The purpose of this research is to understand the patterns and progression of pain experienced by breast cancer survivors over time. Methods: Data were collected from breast cancer patients at Hartford Hospital at four time points: baseline and 3, 6, and 12 weeks. Key variables measured include pain, body mass index (BMI), fatigue, musculoskeletal pain, sleep disturbance, and demographic variables (age, employment status, cancer stage, and ethnicity). Binomial generalized linear mixed models were used to examine changes in pain and symptoms over time. Results: A total of 100 breast cancer patients aged 18 years or older were included in the analysis. The effects of time on pain (p = 0.024), musculoskeletal pain (p < 0.001), fatigue (p < 0.001), and sleep disturbance (p = 0.013) were statistically significant, indicating pain progression in breast cancer patients. Patients using aromatase inhibitors had worse fatigue (p < 0.05) and musculoskeletal pain (p < 0.001) than patients on tamoxifen. Patients who were obese (p < 0.001) or overweight (p < 0.001) were more likely to report pain than patients of normal weight. Conclusion: This study revealed the complex interplay among factors such as time, pain, and sleep disturbance in breast cancer patients.
Specifically, pain, musculoskeletal pain, sleep disturbance, and fatigue exhibited significant changes across the measured time points, indicating dynamic pain progression in these patients. The findings provide a foundation for future research and targeted interventions aimed at improving pain outcomes in breast cancer patients.
Keywords: breast cancer, chronic pain, pain management, quality of life
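The direction of such a time effect can be illustrated with a minimal logistic fit of "reported pain" against week. Note that the study used a binomial generalized linear mixed model with per-patient random effects, which this fixed-effects toy omits, and the counts below are invented, not the Hartford Hospital data.

```python
import math

# Invented grouped data: (week, patients reporting pain, patients assessed).
groups = [(0, 40, 50), (3, 35, 50), (6, 28, 50), (12, 18, 50)]

# One-covariate logistic regression fit by gradient ascent on the
# binomial log-likelihood (a fixed-effects stand-in for the GLMM).
b0, b1, lr = 0.0, 0.0, 0.05
total_n = sum(n for _, _, n in groups)
for _ in range(20000):
    g0 = g1 = 0.0
    for t, k, n in groups:
        p = 1 / (1 + math.exp(-(b0 + b1 * t)))
        g0 += k - n * p          # gradient w.r.t. intercept
        g1 += (k - n * p) * t    # gradient w.r.t. slope
    b0 += lr * g0 / total_n
    b1 += lr * g1 / total_n

print(f"baseline log-odds = {b0:.2f}, per-week change = {b1:.3f}")
```

A negative per-week coefficient corresponds to declining odds of reporting pain over follow-up; the mixed model additionally absorbs between-patient variation in the baseline.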
Procedia PDF Downloads 31

36502 Purity Monitor Studies in Medium Liquid Argon TPC
Authors: I. Badhrees
Abstract:
This paper describes results found over the course of a study in the field of particle physics. The study consists of two parts: one on the measurement of the cross section of the decay of the Z particle into two electrons, and the other on the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber. The first part concerns results based on the analysis of a data sample containing 8120 ee candidates used to reconstruct the mass of the Z particle, where each event has an ee pair with pT(e) > 20 GeV and |η(e)| < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale, and the distribution of the reconstructed Z mass in the data was compared to these templates; the total cross section is calculated to be 1432 pb. The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC): the results of the interaction of a UV laser (Nd:YAG, λ = 266 nm) with LAr, through the study of the multi-photon ionization process, as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10(stat) ± 0.30(sys)) × 10⁻⁵⁶ cm⁴.
Keywords: ATLAS, CERN, KACST, LArTPC, particle physics
Procedia PDF Downloads 346

36501 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In wireless sensor networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental readings and aggregates the data to a designated destination called a sink node. The important issues concerning data aggregation are time efficiency and energy consumption, given the nodes' limited energy; therefore, the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute a minimum-latency schedule, that is, a schedule with the minimum number of timeslots such that the sink node can receive the aggregated data from all other nodes without any collision or interference. For this problem, two interference models have been adopted: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), combined with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional). In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in various models, using latency as the performance measure.
Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional
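The scheduling objective under the graph interference model can be illustrated with a toy greedy scheduler on a small aggregation tree; the topology is invented, and the surveyed approximation algorithms are considerably more sophisticated than this sketch.

```python
# Toy greedy aggregation scheduling under the graph interference model:
# each timeslot packs child->parent transmissions so that no receiver hears
# a second concurrent sender. Node 0 is the sink; the 6-node tree is invented.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

adj = {n: set() for n in range(6)}   # interference = tree adjacency here
for c, p in parent.items():
    adj[c].add(p)
    adj[p].add(c)

children = {}
for c, p in parent.items():
    children.setdefault(p, set()).add(c)

pending = set(parent)                # every non-sink node sends exactly once
schedule = []
while pending:
    slot = []
    busy_receivers = set()
    for node in sorted(pending):
        # Aggregation constraint: send only after all children have sent.
        if children.get(node, set()) & pending:
            continue
        r = parent[node]
        # Collision checks: receiver free, no other scheduled sender adjacent
        # to r, and this sender not adjacent to any other scheduled receiver.
        if r in busy_receivers:
            continue
        if any(s in adj[r] for s, _ in slot) or any(node in adj[r2] for _, r2 in slot):
            continue
        slot.append((node, r))
        busy_receivers.add(r)
    pending -= {s for s, _ in slot}
    schedule.append(slot)

print(schedule)   # latency = number of timeslots used
```

Here the greedy schedule needs three timeslots; MLAS asks for the minimum such number, which is what the approximation algorithms bound.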
Procedia PDF Downloads 229

36500 Crowdsensing Project in the Brazilian Municipality of Florianópolis for the Number of Visitors Measurement
Authors: Carlos Roberto De Rolt, Julio da Silva Dias, Rafael Tezza, Luca Foschini, Matteo Mura
Abstract:
Seasonal population fluctuation presents a challenge to tourist cities, since the number of inhabitants can double according to the season. The aim of this work is to develop a model that correlates the waste collected with the population of the city and also allows cooperation between the inhabitants and the local government. The model lets public managers evaluate the impact of seasonal population fluctuation on waste generation and improve resource-utilization planning throughout the year. The study uses data from the company that collects the garbage in Florianópolis, a Brazilian city that attracts tourists with its numerous beaches and warm weather. The fluctuations are caused by the number of people who come to the city throughout the year for holidays, summer vacations, or business events. Crowdsensing is accomplished through smartphones with access to a data-collection app, with voluntary participation by the population; participants can access the information collected in each wave through a portal. Crowdsensing represents an innovative and participatory approach that involves the population in gathering information to improve the quality of life. The management of crowdsensing solutions plays an essential role, given the complexity of fostering collaboration, establishing available sensors, and collecting and processing the data. Practical implications of the tool described in this paper include, for example, the management of seasonal tourism in a large municipality whose public services are affected by the floating population. Crowdsensing and big data support managers in predicting the arrival, permanence, and movement of people in a given urban area.
Also, by linking crowdsourced data to databases from other public service providers - e.g., water, garbage collection, electricity, public transport, telecommunications - it is possible to estimate the floating population of an urban area affected by seasonal tourism. This approach supports the municipality in increasing the effectiveness of resource allocation while, at the same time, increasing the quality of service as perceived by citizens and tourists.
Keywords: big data, dashboards, floating population, smart city, urban management solutions
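The core idea of correlating collected waste with population can be sketched as a simple regression that is then inverted to estimate the floating population; the figures below are invented, not Florianópolis data.

```python
# Invented calibration data: (known population, tonnes of waste that month).
known = [
    (420_000, 9_800), (430_000, 10_100), (600_000, 14_200),
    (650_000, 15_300), (440_000, 10_300),
]

# Ordinary least squares fit: waste = a + b * population
n = len(known)
px = sum(p for p, _ in known) / n
wy = sum(w for _, w in known) / n
b = sum((p - px) * (w - wy) for p, w in known) / \
    sum((p - px) ** 2 for p, _ in known)
a = wy - b * px

def population_from_waste(w):
    """Invert the fit: estimate population from an observed waste figure."""
    return (w - a) / b

est = population_from_waste(16_000)
print(f"Estimated population for 16,000 t of waste: {est:,.0f}")
```

In the paper's setting, the fit would be calibrated against months with known populations, and crowdsensed signals would refine the estimate between calibration points.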
Procedia PDF Downloads 287