Search results for: ArcGIS data analysis
40816 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques
Authors: Songul Cinaroglu
Abstract:
Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, dividing them into two clusters by similarity of input and output indicators. The numbers of beds, physicians, and nurses were used as input variables, and the numbers of outpatients, inpatients, and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs with higher population, demand, and service density than the others, and the difference between clusters was statistically significant for all study variables (p < 0.001). After clustering, DEA was performed overall and for the two clusters separately. It was found that 11% of PHUs were efficient overall, while 21% and 17% were efficient within the first and second clusters, respectively. PHUs representing the urban parts of the country, with higher population and service density, are thus more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinant of PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
Keywords: public hospital unions, efficiency, data envelopment analysis, random forest
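A minimal sketch of the first stage of the workflow above: grouping units by similarity of input/output indicators before scoring efficiency. The PHU records are hypothetical, the 2-means routine is plain Python, and `naive_efficiency` is a crude outputs/inputs ratio standing in for the linear program (CCR model) that real DEA solves per unit:

```python
import random

def kmeans_2(points, iters=20, seed=0):
    """Plain 2-means clustering: a stand-in for the study's clustering step."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(points, 2)]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

def naive_efficiency(phu):
    """Crude total-outputs / total-inputs ratio; real DEA instead solves a
    linear program per unit to find each unit's best-case weights."""
    inputs, outputs = phu[:3], phu[3:]
    return sum(outputs) / sum(inputs)

# Hypothetical PHU records: (beds, physicians, nurses, outpatients, inpatients, operations)
phus = [
    (100, 20, 60, 5000, 800, 300),
    (110, 22, 65, 5200, 850, 320),
    (900, 200, 600, 60000, 9000, 4000),
    (950, 210, 620, 62000, 9500, 4200),
]
clusters = kmeans_2(phus)
```

On data this clearly separated, the two clusters recover the low-density and high-density groups, mirroring the study's split.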
Procedia PDF Downloads 126
40815 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants
Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann
Abstract:
Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. The continuous collection of refrigerator data required for the input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce the expected material contents of individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values for the contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
Keywords: automation, data collection, performance monitoring, recycling, refrigerators
Procedia PDF Downloads 162
40814 Behavioral Analysis of Stock Using Selective Indicators from Fundamental and Technical Analysis
Authors: Vish Putcha, Chandrasekhar Putcha, Siva Hari
Abstract:
In the current digital era of free trading and pandemic-driven remote work culture, markets worldwide gained momentum as retail investors could easily trade from anywhere. The share of retail traders rose to 24% of the market from 15% at the pre-pandemic level. Most are young retail traders with a higher risk tolerance than the previous generation of retail traders. This trend boosted the growth of subscription-based market predictors and market data vendors. Young traders are betting on these predictors, assuming one of them is correct; however, 90% of retail traders are on the losing end. This paper presents multiple indicators and attempts to derive behavioral patterns from the underlying stocks. The two major families of indicators that traders and investors follow are technical and fundamental. The famous investor Warren Buffett adheres to the "Value Investing" method, which is based on a stock's fundamental analysis. In this paper, we present multiple indicators from various methods to understand the behavior patterns of stocks. For this research, we picked five stocks with a market capitalization of more than $200M, listed on the exchange for more than 20 years, and from different industry sectors. To study their behavioral pattern over time, a total of 8 indicators were chosen from fundamental, technical, and financial indicators, namely Price to Earnings (P/E), Price to Book Value (P/B), Debt to Equity (D/E), Beta, Volatility, Relative Strength Index (RSI), Moving Averages, and Dividend Yield, followed by detailed mathematical analysis. This is an interdisciplinary paper spanning Engineering, Accounting, and Finance, and it takes a new approach to identifying clear indicators affecting stocks. Statistical analysis of the data will be performed by fitting a probabilistic distribution and then determining the probability of the stock price exceeding a specific target value.
The chi-square test will be used to determine the validity of the assumed distribution. Preliminary results indicate that this approach is working well. The complete results, when presented in the final paper, will be beneficial to the community.
Keywords: stock pattern, stock market analysis, stock predictions, trading, investing, fundamental analysis, technical analysis, quantitative trading, financial analysis, behavioral analysis
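The two statistical steps described above can be sketched in a few lines: the probability of the price exceeding a target under a fitted distribution, and the Pearson chi-square statistic used to check that distribution. The bin counts, mean, sigma, and target below are hypothetical, and a normal distribution is assumed purely for illustration:

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def prob_above_target(mu, sigma, target):
    """P(price > target) under an assumed normal distribution."""
    return 1.0 - normal_cdf(target, mu, sigma)

def chi_square_statistic(observed, expected):
    """Pearson goodness-of-fit statistic over binned counts; compare against
    a tabulated critical value at (bins - 1 - fitted parameters) df."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical: 80 closing prices binned into four ranges vs. the counts
# implied by the fitted distribution
observed = [12, 28, 31, 9]
expected = [10, 30, 30, 10]
stat = chi_square_statistic(observed, expected)
p_up = prob_above_target(mu=150.0, sigma=12.0, target=165.0)
```

A stats library would supply the critical value (or p-value) for `stat`; a small statistic, as here, means the assumed distribution is not rejected.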
Procedia PDF Downloads 85
40813 Sentiment Analysis on the East Timor Accession Process to the ASEAN
Authors: Marcelino Caetano Noronha, Vosco Pereira, Jose Soares Pinto, Ferdinando Da C. Saores
Abstract:
One particularly popular social media platform is YouTube, a video-sharing platform where users can submit videos and other users can like, dislike, or comment on them. In this study, we conduct a binary classification task on YouTube video comments and reviews from users regarding the accession process of Timor Leste to become the eleventh member of the Association of South East Asian Nations (ASEAN). We scraped the data directly from public YouTube videos and applied several pre-processing and weighting techniques. Before conducting the classification, we categorized the data into two classes, positive and negative. For classification, we applied the Support Vector Machine (SVM) algorithm. Compared with the Naïve Bayes algorithm, the experiment showed that SVM simultaneously achieved 84.1% accuracy, 94.5% precision, and 73.8% recall.
Keywords: classification, YouTube, sentiment analysis, support vector machine
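The three figures reported above are computed from a classifier's predictions in the same way regardless of the model. This sketch covers only the evaluation metrics on toy binary labels; the SVM and Naïve Bayes training themselves would be left to a library:

```python
def binary_metrics(y_true, y_pred, positive="positive"):
    """Accuracy, precision, and recall for one binary classification run."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Toy comment labels (hypothetical, not the study's YouTube data)
y_true = ["positive", "positive", "negative", "negative", "positive"]
y_pred = ["positive", "negative", "negative", "negative", "positive"]
acc, prec, rec = binary_metrics(y_true, y_pred)
```

Here the classifier misses one positive comment, so precision stays at 1.0 while recall drops to 2/3, the same precision-above-recall pattern the abstract reports for SVM.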
Procedia PDF Downloads 107
40812 Investigating Breakdowns in Human Robot Interaction: A Conversation Analysis Guided Single Case Study of a Human-Robot Communication in a Museum Environment
Authors: B. Arend, P. Sunnen, P. Caire
Abstract:
In a single case study, we show how a conversation analysis (CA) approach can shed light on the sequential unfolding of human-robot interaction. Relying on video data, we show that CA allows us to investigate the respective turn-taking systems of humans and a NAO robot in their dialogical dynamics, thus pointing out relevant differences. Our fine-grained video analysis identifies breakdowns and how they are overcome when humans and a NAO robot engage in multimodally uttered multi-party communication during a sports guessing game. Our findings suggest that interdisciplinary work opens up the opportunity to gain new insights into the challenging issues of human-robot communication and to provide resources for developing mechanisms that enable complex human-robot interaction (HRI).
Keywords: human robot interaction, conversation analysis, dialogism, breakdown, museum
Procedia PDF Downloads 303
40811 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories
Authors: Berna Çalışkan
Abstract:
Integrated water management is a collaborative approach to planning that brings together the institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning to other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as interventions such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, the class of the proposed highway, the state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from a digital elevation model (DEM). A case study was carried out in the study area using the Arc Hydro extension within ArcGIS, which provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the input surface raster. These processes integrate hydrologic and engineering research with environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach.
This research helps transport project planning and construction phases by analyzing surficial water flow, high-level streams, and wetland sites to support transportation infrastructure planning, implementation, maintenance, monitoring, and long-term evaluation, so as to better face the challenges associated with managing low, medium, and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional, and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on sufficient water resources management. The results of our research provide a workflow to build a stream network and classify a suitability map according to stream levels. Transportation projects can be established, developed, and delivered effectively by selecting the best locations to reduce construction and maintenance costs and to provide cost-effective solutions for drainage, landslide, and flood control. According to the model findings, field study should be done to fill gaps and check for errors. In future research, this study can be extended to determining and preventing possible damage to sensitive areas and vulnerable zones, supported by field investigations.
Keywords: water resources management, hydro tool, water protection, transportation
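The flow-routing and accumulation steps that Arc Hydro derives from a DEM can be illustrated with a toy D8-style routine. This is a deliberately simplified sketch on a made-up 3x3 grid: real tools also resolve sinks, flat areas, edge handling, and tie-breaking, which this version ignores:

```python
# Minimal D8-style flow routing on a toy DEM grid (elevations, arbitrary units).
DEM = [
    [9, 8, 7],
    [8, 6, 5],
    [7, 5, 3],
]

def downhill_neighbor(dem, r, c):
    """Steepest-descent neighbor of cell (r, c), or None at a pit/outlet."""
    best, best_drop = None, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
                drop = dem[r][c] - dem[rr][cc]
                if drop > best_drop:
                    best, best_drop = (rr, cc), drop
    return best

def flow_accumulation(dem):
    """Count, for each cell, how many cells drain through it (itself included)."""
    rows, cols = len(dem), len(dem[0])
    acc = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            cell = (r, c)
            while cell is not None:       # follow the flow path to the outlet
                acc[cell[0]][cell[1]] += 1
                cell = downhill_neighbor(dem, *cell)
    return acc

acc = flow_accumulation(DEM)
```

Thresholding the accumulation grid (cells with many upstream contributors) is what yields the stream network and stream levels used in the suitability classification.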
Procedia PDF Downloads 56
40810 Association Rules Mining and NOSQL Oriented Document in Big Data
Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub
Abstract:
Big Data refers to recent technologies for manipulating voluminous, unstructured data sets drawn from multiple sources, and NOSQL has emerged to handle the problem of unstructured data. Association rules mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases, and the algorithm for finding association dependencies is well suited to MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce together with a document-oriented NOSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
Keywords: Apriori, association rules mining, Big Data, data mining, Hadoop, MapReduce, MongoDB, NoSQL
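A single-machine Apriori sketch shows the frequent-itemset generation whose support-counting step the paper distributes with MapReduce. The transactions are toy data, and subset-based candidate pruning is omitted for brevity:

```python
def apriori(transactions, min_support):
    """Plain single-machine Apriori (no subset pruning); the study's version
    distributes the support counting over MapReduce instead."""
    items = sorted({i for t in transactions for i in t})
    frequent, k = {}, 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: n for c, n in counts.items()
                 if n / len(transactions) >= min_support}
        frequent.update(level)
        k += 1
        prev = list(level)
        # join step: merge frequent (k-1)-itemsets into k-itemset candidates
        candidates = sorted({a | b for a in prev for b in prev
                             if len(a | b) == k}, key=sorted)
    return frequent

transactions = [frozenset(t) for t in
                (["bread", "milk"], ["bread", "butter"],
                 ["bread", "milk", "butter"], ["milk"])]
freq = apriori(transactions, min_support=0.5)
```

In a MapReduce formulation, the mapper emits (candidate, 1) pairs per transaction and the reducer sums them, replacing the `counts` dictionary above.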
Procedia PDF Downloads 158
40809 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence
Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti
Abstract:
In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system for all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out its multiple functions, the DMO can leverage a collective intelligence that comes from the ability to pool the information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, many of them available at low cost, which need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy.
The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata annotation of relevant sources (reconnaissance of official sources, administrative archives, and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information, and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities, and online newspapers); and (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of six thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of available indicators. The framework offers a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources.
Application to the case study led to the creation of a decision support system that allows:
• integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for ingestion of social and web data;
• reading and interpretation of data and metadata through guided navigation paths in the key of digital storytelling;
• implementation of complex analysis capabilities through the use of data mining algorithms, such as for the prediction of tourist flows.
Keywords: collective intelligence, data framework, destination management, smart tourism
Procedia PDF Downloads 120
40808 Percentile Norms of Heart Rate Variability (HRV) of Indian Sportspersons Withdrawn from Competitive Games and Sports
Authors: Pawan Kumar, Dhananjoy Shaw
Abstract:
Heart rate variability (HRV) is the physiological phenomenon of variation in the time interval between heartbeats; it changes with fitness, age, and various medical conditions, including withdrawal/retirement from games/sports. The objectives of the study were to develop (a) percentile norms of HRV variables derived from time-domain analysis of Indian sportspersons withdrawn from competitive games/sports, pertaining to sympathetic and parasympathetic activity, and (b) percentile norms of HRV variables derived from frequency-domain analysis of the same population. The study was conducted on 430 males aged 30 to 35 years of the same socio-economic status. Data were collected using ECG polygraphs and then processed and extracted using frequency-domain and time-domain analysis. Percentiles from one to one hundred were computed. For the time-domain variables, the percentile ranges were: NN50 count, 1 to 189; pNN50, 0.24 to 60.80; SDNN, 17.34 to 167.29; SDSD, 11.14 to 120.46; RMSSD, 11.19 to 120.24; and SDANN, 4.02 to 88.75. For the frequency-domain variables, the percentile ranges were: low frequency (normalized power), 20.68 to 90.49; high frequency (normalized power), 14.37 to 81.60; LF/HF ratio, 0.26 to 9.52; LF (absolute power), 146.79 to 5669.33; HF (absolute power), 102.85 to 10735.71; and total power (absolute power), 471.45 to 25879.23. Conclusion: the analysis documented percentile norms for time-domain and frequency-domain analysis for versatile use and evaluation.
Keywords: RMSSD, Percentile, SDANN, HF, LF
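The time-domain measures named above are direct functions of the RR-interval series. The series below is hypothetical, and the percentile helper uses the nearest-rank convention, one of several in common use:

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain HRV measures from successive RR intervals (milliseconds)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    nn50 = sum(abs(d) > 50 for d in diffs)     # successive differences > 50 ms
    pnn50 = 100.0 * nn50 / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "NN50": nn50, "pNN50": pnn50}

def percentile(sorted_vals, p):
    """Nearest-rank percentile over an already-sorted sample."""
    k = max(1, math.ceil(p / 100 * len(sorted_vals)))
    return sorted_vals[k - 1]

# Hypothetical RR series (ms), not the study's data
rr = [800, 810, 790, 860, 805, 795]
measures = hrv_time_domain(rr)
```

Computing such measures per subject and then running `percentile` over the 430 subjects' values is, in outline, how a norm table like the one above is built.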
Procedia PDF Downloads 419
40807 Immunization-Data-Quality in Public Health Facilities in the Pastoralist Communities: A Comparative Study Evidence from Afar and Somali Regional States, Ethiopia
Authors: Melaku Tsehay
Abstract:
The Consortium of Christian Relief and Development Associations (CCRDA) and the CORE Group Polio Partners (CGPP) Secretariat have been working with the Global Alliance for Vaccines and Immunization (GAVI) to improve immunization data quality in the Afar and Somali Regional States. The main aim of this study was to compare the quality of immunization data before and after these interventions in health facilities in pastoralist communities in Ethiopia. To this end, a comparative cross-sectional study was conducted on 51 health facilities. Baseline data were collected in May 2019 and endline data in August 2021. The WHO data quality self-assessment tool (DQS) was used to collect data. A significant improvement was seen in the accuracy of pentavalent vaccine (PT)1 data (p = 0.012) at the health posts (HPs), and of PT3 (p = 0.010) and measles (p = 0.020) data at the health centers (HCs). In addition, a highly significant improvement was observed in the accuracy of tetanus toxoid (TT)2 data at HPs (p < 0.001). The level of over- or under-reporting was found to be < 8% at the HPs and < 10% at the HCs for PT3. Data completeness also increased from 72.09% to 88.89% at the HCs. Nearly 74% of the health facilities reported their immunization data on time, much better than at baseline (7.1%) (p < 0.001). These findings may provide some hints for policies and programs targeting improved immunization data quality in pastoralist communities.
Keywords: data quality, immunization, verification factor, pastoralist region
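The accuracy figures above reduce to simple ratios between recounted and reported doses. The conventions below (verification factor as recount over report, discrepancy relative to the recount) are one common reading of WHO DQS-style checks; actual audits define these slightly differently, and the facility tallies are hypothetical:

```python
def verification_factor(recounted, reported):
    """Recounted doses as a percentage of reported doses (DQS-style accuracy)."""
    return 100.0 * recounted / reported

def reporting_discrepancy(recounted, reported):
    """Percent over-reporting (positive) or under-reporting (negative),
    expressed relative to the recounted tally."""
    return 100.0 * (reported - recounted) / recounted

def completeness(received_reports, expected_reports):
    """Share of expected monthly facility reports actually received."""
    return 100.0 * received_reports / expected_reports

# Hypothetical facility tallies for one antigen over one period
vf = verification_factor(recounted=95, reported=100)
over = reporting_discrepancy(recounted=95, reported=100)  # > 0: over-reporting
comp = completeness(received_reports=16, expected_reports=18)
```

A discrepancy within the study's thresholds (under 8% at HPs, under 10% at HCs) would count the facility's reporting as acceptably accurate.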
Procedia PDF Downloads 119
40806 Tourism Satellite Account: Approach and Information System Development
Authors: Pappas Theodoros, Mihail Diakomihalis
Abstract:
Measuring the economic impact of tourism in a benchmark economy is a global concern; previous measurements have been partial and not fully integrated. Tourism is a phenomenon driven by the individual consumption of visitors, which should be observed and measured to reveal the overall contribution of tourism to an economy. The Tourism Satellite Account (TSA) is a critical tool for assessing the annual growth of tourism, providing reliable measurements. This article introduces a TSA information system that encompasses all TSA operations, including input, storage, management, and analysis of data, as well as additional future functions, and enhances the efficiency of tourism data management and the utility of TSA compilation. The methodology and results presented offer insights into the development and implementation of the TSA.
Keywords: tourism satellite account, information system, data-based tourist account, relation database
Procedia PDF Downloads 81
40805 An Analysis of the Need of Training for Indian Textile Manufacturing Sector
Authors: Shipra Sharma, Jagat Jerath
Abstract:
Human resource training is an essential element of talent management in the current era of global competitiveness and dynamic trade in the manufacturing industry. Globally, India is second only to China as the largest textile manufacturer. The major challenges faced by the Indian textile manufacturing industry are low technology levels, growing skill gaps, an unorganized structure, and lower efficiencies, indicating the need for constant talent up-gradation. Assessing training needs from a strategic perspective is an essential step in formulating effective training. The paper establishes the significance of training in the Indian textile industry and determines training needs on the various parameters presented. The respondents for the study were 40 HR personnel working in textile and apparel companies based in the industrial region of Punjab, India. The research tool was a structured questionnaire on a five-point Likert scale. Statistical analysis through descriptive statistics and the chi-square test indicated an increased need for training whenever there were technical changes in the organizations. As per the data presented in this study, most of the HR personnel agreed that the variables associated with organizational analysis, task analysis, and individual analysis have a statistically significant role to play in determining the need for training in an organization.
Keywords: Indian textile manufacturing industry, significance of training, training needs analysis, parameters for training needs assessment
Procedia PDF Downloads 161
40804 An Epidemiological Analysis of the Occurrence of Bovine Brucellosis and Adopted Control Measures in South Africa during the Period 2014 to 2019
Authors: Emily Simango, T. Chitura
Abstract:
Background: Bovine brucellosis is among the most neglected zoonotic diseases in developing countries, where it is endemic and a growing challenge to public health. The development of cost-effective control measures for the disease depends on knowledge of the disease's epidemiology and the ability to define its risk profiles. The aim of the study was to document the trend of bovine brucellosis and the control measures adopted following reported cases during the period 2014 to 2019 in South Africa. Methods: Data on confirmed cases of bovine brucellosis were retrieved from the website of the World Organisation for Animal Health (WOAH). Data were analysed using the Statistical Package for the Social Sciences (IBM SPSS, 2022) version 29.0. Descriptive analysis (frequencies and percentages) and analysis of variance (ANOVA) were utilized, with statistical significance set at p < 0.05. Results: The data retrieved in our study revealed an overall average bovine brucellosis prevalence of 8.48. There were statistically significant differences in bovine brucellosis prevalence across the provinces for the years 2016 and 2019 (p < 0.05), with the Eastern Cape Province having the highest prevalence in both instances. Documented control measures for the disease were limited to the killing and disposal of cases and the vaccination of susceptible animals. Conclusion: Bovine brucellosis persists in South Africa, with risk profiles differing across the provinces. Information on brucellosis control measures in South Africa, as reported to the WOAH, is not comprehensive.
Keywords: zoonotic, endemic, Eastern Cape province, vaccination
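The ANOVA used for the provincial comparison reduces to a one-way F statistic. The prevalence readings below are hypothetical, not the WOAH data, and the F value would normally be referred to an F distribution (via SPSS or a stats library) to obtain the p-value:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across groups of observations."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: observations vs. their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical prevalence readings for three provinces
provinces = [[8.0, 9.0, 10.0], [4.0, 5.0, 6.0], [12.0, 13.0, 14.0]]
f_stat = one_way_anova_f(provinces)
```

A large F, as here, indicates that between-province variation dwarfs within-province variation, the pattern behind the significant 2016 and 2019 results.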
Procedia PDF Downloads 65
40803 CoP-Networks: Virtual Spaces for New Faculty’s Professional Development in the 21st Higher Education
Authors: Eman AbuKhousa, Marwan Z. Bataineh
Abstract:
Twenty-first-century higher education and globalization challenge new faculty members to build effective professional networks and partnerships with industry in order to accelerate their growth and success. This creates the need for community-of-practice (CoP)-oriented development approaches that focus on cognitive apprenticeship while considering individual predisposition and future career needs. This work adopts data mining, clustering analysis, and social networking technologies to present the CoP-Network, a virtual space that connects individuals with similar career aspirations who are socially influenced to join and engage in a process of domain-related knowledge and practice acquisition. The CoP-Network model can be integrated into higher education to extend traditional graduate and professional development programs.
Keywords: clustering analysis, community of practice, data mining, higher education, new faculty challenges, social network, social influence, professional development
Procedia PDF Downloads 182
40802 Application of Stochastic Models to Annual Extreme Streamflow Data
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. An Auto-Regressive Integrated Moving Average (ARIMA) model was used to simulate these series and forecast them into the future. For the analysis, annual extreme streamflow data from the Jelogir Majin station (above the Karkheh dam reservoir) for the years 1958–2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of annual extreme streamflow was removed using first-order differencing (d = 1) prior to development of the ARIMA model. The ARIMA(4,1,1) model was found to be most suitable for simulating annual extreme streamflow for the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and assists decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) packages were used to determine the best model for this series.
Keywords: stochastic models, ARIMA, extreme streamflow, Karkheh river
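The differencing and autocorrelation steps that precede ARIMA identification can be sketched directly. The streamflow series below is a toy trending sequence, not the Jelogir Majin data:

```python
def difference(series, d=1):
    """Apply first-order differencing d times (the 'I' in ARIMA(p, d, q))."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def acf(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# A trending toy series; one round of differencing removes the trend
flows = [10.0, 12.0, 15.0, 19.0, 24.0, 30.0]
diffed = difference(flows)
```

Inspecting how quickly `acf` decays on the differenced series (together with the PACF) is what suggests the AR and MA orders, here ARIMA(4,1,1) in the study.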
Procedia PDF Downloads 146
40801 Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the existing crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship were to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset; a daily incidence-type aggregation was also performed to produce a multivariate dataset. Each solution's forecast period lasted seven days.
The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to the other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime; hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
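The GRU at the heart of the best-performing model can be illustrated with a single scalar cell. The weights and the toy daily-count inputs below are made up, biases are omitted, and a real model learns vector-valued gates by backpropagation rather than using fixed numbers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One scalar GRU step. Weights w = (wz, uz, wr, ur, wh, uh); no biases."""
    wz, uz, wr, ur, wh, uh = w
    z = sigmoid(wz * x + uz * h)                 # update gate
    r = sigmoid(wr * x + ur * h)                 # reset gate
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde             # blend old state and candidate

# Run a toy (hypothetical) daily-count sequence through the cell
w = (0.5, 0.4, 0.3, 0.2, 0.6, 0.1)
h = 0.0
for x in [1.0, 2.0, 0.5]:
    h = gru_cell(x, h, w)
```

The update gate `z` lets the cell carry information across many days, which is why GRUs handle longer temporal dependencies better than a plain recurrent unit.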
Procedia PDF Downloads 164
40800 Social Norms around Adolescent Girls’ Marriage Practices in Ethiopia: A Qualitative Exploration
Authors: Dagmawit Tewahido
Abstract:
Purpose: This qualitative study was conducted to explore social norms around adolescent girls’ marriage practices in West Hararghe, Ethiopia, where early marriage is prohibited by law. Methods: Twenty focus group discussions were conducted with married and unmarried adolescent girls, adolescent boys, and parents of girls, using locally developed vignettes. A total of 32 in-depth interviews were conducted with married and unmarried adolescent girls, husbands of adolescent girls, and mothers-in-law, and key informant interviews were conducted with 36 district officials. Data analysis was assisted by the Open Code computer software. The Social Norms Analysis Plot (SNAP) framework developed by CARE guided the development and analysis of the vignettes, and a thematic data analysis approach was used to summarize the data. Results: Early marriage is seen as a positive phenomenon in our study context, and girls who are not married by the perceived ideal age of 15 are socially sanctioned. Girls are particularly influenced by their peers to marry; marrying early is considered a chance given by God and a symbol of good luck. The two common types of marriage are: 1) marriage decided by the adolescent girl and boy themselves without seeking parental permission (’Jalaa-deemaa’, meaning ‘to go along’), and 2) marriage concluded by just informing the girl’s parents (‘Cabsaa’, meaning ‘to break the culture’). Relatives and marriage brokers also arrange early marriages. Girls usually accept the first marriage proposal regardless of their age, and parents generally tend not to oppose marriage arrangements chosen by their daughters. Conclusions: In the study context, social norms encourage early marriage despite the existence of a law prohibiting marriage before the age of eighteen years, and early marriage commonly happens through consensual arrangements between adolescent girls and boys.
Interventions to reduce early marriage need to consider the influence of reference groups on the decision makers for marriages, especially girls’ own peers.
Keywords: adolescent girls, social norms, early marriage, Ethiopia
Procedia PDF Downloads 138
40799 A Psycholinguistic Analysis of John Nash’s Hallucinations as Represented in the Film “A Beautiful Mind”
Authors: Rizkia Shafarini
Abstract:
This study explores hallucination as depicted in the film A Beautiful Mind. A Beautiful Mind tells the story of John Nash, a university student who dislikes studying in class and prefers to study alone. Throughout his life, John Nash has experienced hallucinations, a symptom of schizophrenia, as depicted in the film. The goal of this study was to identify what his hallucinations were, what caused them, and how John Nash managed them. More generally, this study examines the link between language and mind, as reflected in the speech of John Nash's character and evidenced by his conduct. The study takes a psycholinguistic approach to data analysis, employing qualitative methods; data sources include dialogues and scenes from the film. First, John Nash's hallucinations in the film take the forms of hearing, seeing, and feeling. Second, the sources of his hallucinations are dreams, aspirations, and sickness. Third, John Nash manages his hallucinations by seeing a doctor, without medical or distracting assistance.
Keywords: A Beautiful Mind, hallucination, psycholinguistic, John Nash
Procedia PDF Downloads 165
40798 A Data-Driven Monitoring Technique Using Combined Anomaly Detectors
Authors: Fouzi Harrou, Ying Sun, Sofiane Khadraoui
Abstract:
Anomaly detection based on Principal Component Analysis (PCA) has been studied intensively and widely applied to multivariate processes with highly cross-correlated process variables. Monitoring metrics such as Hotelling's T2 and the Q statistic are usually used in PCA-based monitoring to elucidate pattern variations in the principal and residual subspaces, respectively. However, these metrics are ill-suited to detecting small faults. In this paper, Exponentially Weighted Moving Average (EWMA) schemes based on the T2 and Q statistics, T2-EWMA and Q-EWMA, were developed for detecting faults in the process mean. The performance of the proposed methods was compared with that of the conventional PCA-based fault detection method using synthetic data. The results clearly show the benefit and effectiveness of the proposed methods over the conventional PCA method, especially for detecting small faults in highly correlated multivariate data.
Keywords: data-driven method, process control, anomaly detection, dimensionality reduction
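The EWMA idea above can be pictured with a minimal sketch: the chart smooths a sequence of monitoring-statistic values and raises an alarm when the smoothed value leaves its time-varying control limits, which is what lets it catch small, persistent shifts that a raw statistic would miss. The statistic values, smoothing weight, and limit width below are made-up assumptions, not the authors' settings.

```python
def ewma_chart(stats, lam=0.2, target=0.0, sigma=1.0, width=3.0):
    """EWMA chart over a sequence of monitoring statistics.

    Returns (ewma_value, alarm) pairs; an alarm fires when the EWMA
    leaves the time-varying control limits around the in-control target.
    """
    z = target
    out = []
    for t, s in enumerate(stats, start=1):
        z = lam * s + (1 - lam) * z
        # exact (time-varying) control limit of the EWMA statistic
        limit = width * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        out.append((z, abs(z - target) > limit))
    return out

# a small mean shift of two standard deviations appears halfway through
alarms = [alarm for _, alarm in
          ewma_chart([0.0] * 5 + [1.0] * 5, lam=0.3, target=0.0, sigma=0.5)]
```

With these toy numbers the shift is flagged from the eighth observation onward, even though each shifted value is well inside a conventional 3-sigma limit (1.5 here), illustrating why EWMA-type metrics suit small faults.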
Procedia PDF Downloads 298
40797 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium
Authors: Janne Engblom, Elias Oikarinen
Abstract:
The understanding of housing price dynamics is of importance to a great number of agents: portfolio investors, banks, real estate brokers, and construction companies, as well as policy makers and households. A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is endogeneity bias in the estimates; several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy; in the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock, and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide and compare estimates between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by a model containing interaction terms between the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, indicating a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of a housing market evolve over time.
Keywords: dynamic model, panel data, cross-sectional dependence, interaction model
Procedia PDF Downloads 251
40796 Use of Artificial Intelligence in Teaching Practices: A Meta-Analysis
Authors: Azmat Farooq Ahmad Khurram, Sadaf Aslam
Abstract:
This meta-analysis systematically examines the use of artificial intelligence (AI) in instructional methods across diverse educational settings through a thorough analysis of empirical research encompassing various disciplines, educational levels, and regions. The study aims to assess the effects of AI integration on teaching methodologies, classroom dynamics, teachers' roles, and student engagement. Various research methods were used to gather data, including literature reviews, surveys, interviews, and focus group discussions. The findings indicate paradigm shifts in teaching and education, identify emerging trends and practices in the application of artificial intelligence to learning, and provide educators, policymakers, and stakeholders with guidelines and recommendations for effectively integrating AI in educational contexts. The study concludes by suggesting future research directions and practical considerations for maximizing AI's positive influence on pedagogical practices.
Keywords: artificial intelligence, teaching practices, meta-analysis, teaching-learning
Procedia PDF Downloads 74
40795 FLEX: A Backdoor Detection and Elimination Method in Federated Scenario
Authors: Shuqi Zhang
Abstract:
Federated learning allows users to participate in collaborative model training without sending data to third-party servers, reducing the risk of user data privacy leakage, and is widely used in smart finance and smart healthcare. However, the distributed architecture of federated learning and the existence of secure aggregation protocols make it inherently vulnerable to backdoor attacks. To solve this problem, FLEX, a federated learning backdoor defense framework based on group aggregation, cluster analysis, and neuron pruning, is proposed, and compatibility with secure aggregation protocols is achieved. The good performance of FLEX is verified experimentally by building a horizontal federated learning framework on the CIFAR-10 dataset: it achieves a 98% backdoor detection success rate and reduces the success rate of backdoor tasks to 0%-10%.
Keywords: federated learning, secure aggregation, backdoor attack, cluster analysis, neuron pruning
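The cluster-analysis step can be pictured with a toy sketch: client updates that are dissimilar from the majority are flagged as suspect before aggregation, since a backdoored client's update typically points in a different direction from the benign consensus. This is an illustrative sketch, not the FLEX algorithm itself; the update vectors, the mean-similarity rule, and the threshold are all fabricated assumptions.

```python
def cosine(u, v):
    """Cosine similarity between two update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def flag_suspect_clients(updates, threshold=0.5):
    """Flag clients whose mean cosine similarity to the others is low."""
    suspects = []
    for i, u in enumerate(updates):
        sims = [cosine(u, v) for j, v in enumerate(updates) if j != i]
        if sum(sims) / len(sims) < threshold:
            suspects.append(i)
    return suspects

# five benign clients send similar updates; one sends an opposing update
updates = [[1.0, 1.0, 0.0], [0.9, 1.1, 0.0], [1.1, 0.9, 0.1],
           [1.0, 0.9, 0.0], [0.95, 1.05, 0.0], [-1.0, -1.0, 0.0]]
suspects = flag_suspect_clients(updates)
```

In this toy run only the last client is flagged; a real defense would cluster full model-weight deltas and combine this signal with neuron pruning, as the abstract describes.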
Procedia PDF Downloads 92
40794 The Effect of Group Counseling Program on 9th Grade Students' Assertiveness Levels
Authors: Ismail Seçer, Kerime Meryem Dereli̇oğlu
Abstract:
This study was conducted to determine the effects of a group counseling program on secondary school 9th grade students' assertiveness skills. The study group was formed of 100 students receiving education at Erzurum Kültür Elementary School in the 2015-2016 school year. The RAE (Rathus Assertiveness Schedule), developed by Voltan Acar, was applied to this group to gather data. 40 students who received lower scores on the inventory were divided randomly into experimental and control groups of 20 students each. A group counseling program was carried out with the experimental group for 8 weeks to improve the students' assertiveness skills. One-way and two-way analysis of covariance (ANCOVA) were used in the analysis of the data, which was performed with SPSS 19.0. The results of the study show that the assertiveness skills of the students who participated in the group counseling program increased meaningfully compared to the control group and to the pre-test. Moreover, the change observed in the experimental group occurred independently of the age and socio-economic level variables, and a follow-up test applied four months later showed that the effect persisted. Accordingly, it can be said that the applied group counseling program is an effective means of improving the assertiveness skills of secondary school students.
Keywords: high school, assertiveness, assertiveness inventory, assertiveness education
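The ANCOVA logic used above can be sketched in a few lines: the post-test score is compared between groups while controlling for the pre-test score. By the Frisch-Waugh-Lovell result, the covariate-adjusted group effect equals the slope from regressing post-test residuals on group residuals, after both have been regressed on the pre-test. This is a minimal illustration of the idea, not the authors' SPSS analysis, and all scores below are fabricated.

```python
def simple_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals from the simple regression of y on x."""
    b = simple_slope(x, y)
    n = len(x)
    a = sum(y) / n - b * sum(x) / n
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def ancova_group_effect(pre, post, group):
    """Covariate-adjusted group difference (group coded 0/1)."""
    post_res = residuals(pre, post)
    group_res = residuals(pre, [float(g) for g in group])
    return simple_slope(group_res, post_res)

# fabricated data: the treated group scores 5 points higher at any pre-test level
pre = [10.0, 12.0, 14.0, 16.0, 10.0, 12.0, 14.0, 16.0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
post = [10.0, 12.0, 14.0, 16.0, 15.0, 17.0, 19.0, 21.0]
effect = ancova_group_effect(pre, post, group)
```

The adjusted effect here recovers the built-in 5-point difference exactly, which is what a one-way ANCOVA with the pre-test as covariate estimates.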
Procedia PDF Downloads 245
40793 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector
Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar
Abstract:
Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector. However, as the usage of these distributed systems increases, many issues arise; the main focus of this paper is security, particularly information security. Personal data are especially sensitive in the healthcare industry: if important information is leaked (e.g., identity card number, credit card number, address), a person's identity, financial status, and safety may be compromised. The responsible organization then loses a lot of money compensating the affected people, and even more resources are expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, a blockchain network is used to store the encryption key of the patient's data, while the data itself is encrypted and the resulting ciphertext is stored on a cloud storage platform. Some issues remain to be tackled in future work, such as proposing a multi-user scheme, addressing authentication, and migrating the backend processes into the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and its read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient's data.
Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability
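The tamper-evidence property the framework relies on can be illustrated with a toy hash chain: each patient record's ciphertext lives off-chain (e.g., in cloud storage), while a hash of it is chained on-chain, so modifying any stored ciphertext breaks verification. This is a minimal sketch of the idea under fabricated record contents; a real deployment would use an actual blockchain network, proper encryption, and access control rather than these placeholder byte strings.

```python
import hashlib

def block_hash(prev_hash, ciphertext):
    """Hash linking a ciphertext to the previous block's hash."""
    return hashlib.sha256(prev_hash.encode() + ciphertext).hexdigest()

def build_chain(ciphertexts):
    """Return the list of chained hashes anchoring each ciphertext."""
    hashes, prev = [], "genesis"
    for ct in ciphertexts:
        prev = block_hash(prev, ct)
        hashes.append(prev)
    return hashes

def verify_chain(ciphertexts, hashes):
    """True only if every stored ciphertext still matches its on-chain hash."""
    prev = "genesis"
    for ct, h in zip(ciphertexts, hashes):
        prev = block_hash(prev, ct)
        if prev != h:
            return False
    return True

# placeholder ciphertexts standing in for encrypted patient records
ciphertexts = [b"enc(patient-1)", b"enc(patient-2)"]
anchors = build_chain(ciphertexts)
```

Verification succeeds against the stored ciphertexts and fails if any one of them is altered, which is the immutability guarantee the abstract appeals to.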
Procedia PDF Downloads 183
40792 Identifying Critical Success Factors for Data Quality Management through a Delphi Study
Authors: Maria Paula Santos, Ana Lucas
Abstract:
Organizations support their operations and decision making with the data they have at their disposal, so the quality of these data is critically important. Data Quality (DQ) is currently a relevant issue, and the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts, who ordered them according to their degree of importance using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSF for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on the quality of the data, and obtaining top management commitment and support.
Keywords: critical success factors, data quality, data quality management, Delphi, Q-sort
Procedia PDF Downloads 216
40791 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories
Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos
Abstract:
Due to the high reliability reached by DNA tests since the 1980s, this kind of test has allowed the identification of perpetrators in a growing number of criminal cases, including old unsolved cases that now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is the typical method for increasing the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. It is therefore essential to develop methodologies, supported by software tools, capable of organizing the workflow and minimizing the time spent on both biological sample processing and the analysis of genetic profiles. The present work aims to develop a software system for forensic genetics laboratories that provides sample, criminal case, and local database management, minimizing the time spent in the workflow and helping to compare genetic profiles. For the development of this system, all data related to the storage and processing of samples, the workflows, and the requirements the system incorporates have been considered. The system is built on Web technologies (HTML, CSS, and JavaScript), with the Node.js platform as the server, which offers great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to an increased resolution of crimes. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.
Keywords: database, forensic genetics, genetic analysis, sample management, software solution
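The sample-and-case management described above can be sketched as a minimal relational schema: each biological sample belongs to a criminal case and carries a genetic profile once processed, so pending work can be listed per case. The actual system uses Node.js and MySQL; SQLite is used here only so the sketch is self-contained, and the table and column names are illustrative assumptions, not the authors' schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE criminal_case (
        id INTEGER PRIMARY KEY,
        case_number TEXT UNIQUE NOT NULL
    );
    CREATE TABLE sample (
        id INTEGER PRIMARY KEY,
        case_id INTEGER NOT NULL REFERENCES criminal_case(id),
        sample_type TEXT NOT NULL,        -- e.g. blood, saliva
        genetic_profile TEXT              -- filled in after processing
    );
""")
conn.execute("INSERT INTO criminal_case (id, case_number) VALUES (1, 'BR-2018-001')")
conn.execute(
    "INSERT INTO sample (case_id, sample_type, genetic_profile) "
    "VALUES (1, 'blood', NULL)"
)
# samples awaiting a genetic profile, listed with their case number
pending = conn.execute(
    "SELECT c.case_number, s.sample_type FROM sample s "
    "JOIN criminal_case c ON c.id = s.case_id "
    "WHERE s.genetic_profile IS NULL"
).fetchall()
```

Keeping the case-to-sample link in the schema is what lets the system report workflow state (pending, processed, inserted into the national database) without manual tracking.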
Procedia PDF Downloads 369
40790 Stress Concentration Trend for Combined Loading Conditions
Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo
Abstract:
Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component, and they create larger stresses within the part. This maximum stress is difficult to determine experimentally, as it occurs directly at the point of minimum area, and strain gauges capable of analyzing stresses over such minute areas have yet to be developed. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part; the factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in determining the maximum stress a part can withstand; these graphs were developed from historical experimental data. This project seeks to verify a stress concentration graph for combined loading conditions. The graph in question was developed using CATIA finite element analysis software, and its results will be validated through further testing: the 3D-modeled parts will be subjected to further finite element analysis using Patran-Nastran software, and the finite element models will then be verified by testing physical specimens on a tensile testing machine. Once the data are validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
Keywords: stress concentration, finite element analysis, finite element models, combined loading
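The relation the abstract describes can be shown with a small worked example: the maximum stress at a discontinuity is the stress concentration factor times the nominal stress over the reduced cross-section. The plate dimensions, load, and factor value below are textbook-style assumptions for illustration, not data from this study.

```python
def nominal_stress(force_n, width_m, thickness_m, hole_diameter_m):
    """Average stress over the net (minimum) cross-section of a plate with a hole."""
    net_area = (width_m - hole_diameter_m) * thickness_m
    return force_n / net_area

def max_stress(kt, sigma_nominal):
    """Peak stress at the discontinuity: Kt times the nominal stress."""
    return kt * sigma_nominal

# assumed example: 10 kN tension on a 50 x 5 mm plate with a 10 mm hole
sigma_nom = nominal_stress(10_000.0, 0.05, 0.005, 0.01)  # 50 MPa
sigma_max = max_stress(3.0, sigma_nom)                   # 150 MPa with Kt = 3
```

The Kt value itself is what the study's graphs provide for given geometries and (combined) loading conditions; the multiplication step is the same regardless of where Kt comes from.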
Procedia PDF Downloads 441
40789 Modified InVEST for WhatsApp Messages Forensic Triage and Search through Visualization
Authors: Agria Rhamdhan
Abstract:
WhatsApp, the most popular mobile messaging app, has been used as evidence in many criminal cases. As mobile messaging generates large amounts of data, forensic investigation faces a large-data problem. The hardest part of finding important evidence is that current practice relies on tools and techniques that require manual analysis to check all messages, so analyzing large sets of mobile messaging data takes a lot of time and effort. Our work offers a methodology based on forensic triage that reduces large datasets to manageable sets, making detailed reviews easier, and then shows the results through interactive visualization, surfacing important terms, entities, and relationships through intelligent ranking using Term Frequency-Inverse Document Frequency (TF-IDF) and the Latent Dirichlet Allocation (LDA) model. By implementing this methodology, investigators can improve investigation processing time and the accuracy of results.
Keywords: forensics, triage, visualization, WhatsApp
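The TF-IDF ranking step can be sketched in pure Python: a term scores highly in a message when it is frequent there but rare across the corpus, which is what surfaces distinctive terms for triage. The toy messages are fabricated; a real pipeline would operate on extracted WhatsApp chat logs and typically use a library implementation.

```python
import math

def tf_idf(messages):
    """Return a {term: tf-idf score} dict for each message."""
    docs = [m.lower().split() for m in messages]
    n = len(docs)
    df = {}                       # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    scores = []
    for doc in docs:
        tf = {}                   # raw term counts within this message
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        scores.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

def top_term(score_dict):
    """Highest-ranked term of one message."""
    return max(score_dict, key=score_dict.get)

msgs = ["dock tonight dock", "meeting tomorrow", "tonight tomorrow"]
ranked = tf_idf(msgs)
```

Here "dock" ranks first in the first message because it appears nowhere else in the corpus; terms shared across messages are discounted by the IDF factor.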
Procedia PDF Downloads 166
40788 Land Use Dynamics of Ikere Forest Reserve, Nigeria Using Geographic Information System
Authors: Akintunde Alo
Abstract:
The incessant encroachment into the forest ecosystem by farmers and local contractors constitutes a major threat to the conservation of genetic resources and biodiversity in Nigeria. To propose a viable monitoring system, this study employed Geographic Information System (GIS) technology to assess the changes that occurred over a period of five years (between 2011 and 2016) in the Ikere forest reserve. Landsat imagery of the forest reserve was obtained and geo-referenced using ground-truth coordinates of benchmark places within the reserve. Supervised classification, image processing, vectorization, and map production were carried out with ArcGIS. The various land use systems within the forest ecosystem were digitized into polygons of different types and colours for 2011 and 2016, and roads were represented with lines of different thickness and colours. Of the six land uses delineated, grassland increased from 26.50% of the total land area in 2011 to 45.53% in 2016, a percentage change of 71.81%. Plantations of Gmelina arborea and Tectona grandis, on the other hand, decreased from 62.16% in 2011 to 27.41% in 2016. Farmland and degraded land recorded percentage changes of about 176.80% and 8.70%, respectively, from 2011 to 2016. Overall, the rate of deforestation in the study area is increasing and becoming severe: about 72.59% of the total land area has been converted to non-forestry uses, while the remaining 27.41% is occupied by plantations of Gmelina arborea and Tectona grandis. Notably, over 55% of the 2011 plantation area had changed to grassland or been converted to farmland and degraded land by 2016, a rate of change of about 9.79% annually. Based on these results, rapid action to prevail on the encroachers to stop deforestation, and to encourage re-afforestation in the study area, is recommended.
Keywords: land use change, forest reserve, satellite imagery, geographical information system
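The percentage-change figures reported above follow from the class shares: the change is the difference between the 2016 and 2011 shares relative to the 2011 share. A small check, using the grassland and plantation figures from the abstract:

```python
def percent_change(share_2011, share_2016):
    """Relative change of a land-use class share between the two dates."""
    return (share_2016 - share_2011) / share_2011 * 100.0

grassland_change = percent_change(26.50, 45.53)   # ~+71.8 %, as reported
plantation_change = percent_change(62.16, 27.41)  # ~-55.9 % decline
```

The grassland result reproduces the reported 71.81%; the plantation decline of roughly 56% is consistent with the statement that over 55% of the 2011 plantation area was lost by 2016.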
Procedia PDF Downloads 355
40787 Performance Evaluation of Al Jame’s Roundabout Using SIDRA
Authors: D. Muley, H. S. Al-Mandhari
Abstract:
This paper evaluates the performance of a multi-lane, four-legged modern roundabout operating in Muscat using the SIDRA model. The performance measures include Degree of Saturation (DOS), average delay, and queue lengths. Geometric and traffic data were used for model preparation, and the gap-acceptance parameters, critical gap and follow-up headway, were used for calibration of the SIDRA model. The results of the analysis showed that the roundabout currently experiences delays of up to 610 seconds, with a DOS of 1.67, during the peak hour. Further, a sensitivity analysis of general and roundabout parameters, among them lane width, cruise speed, inscribed diameter, entry radius, and entry angle, showed that the inscribed diameter is the factor most strongly affecting delay and DOS. Upgrading the roundabout to a fully signalized junction was found to be the suitable solution; it will serve future years with LOS C in the design year, at a DOS of 0.9 and an average control delay of 51.9 seconds per vehicle.
Keywords: performance analysis, roundabout, sensitivity analysis, SIDRA
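The Degree of Saturation used above is simply demand flow divided by capacity, so a DOS above 1.0 means the approach is oversaturated and queues grow without bound. The flows and capacity below are made-up assumptions chosen to reproduce the reported DOS values, not SIDRA outputs for this roundabout.

```python
def degree_of_saturation(demand_veh_h, capacity_veh_h):
    """Demand-to-capacity ratio of an approach (DOS, or v/c ratio)."""
    return demand_veh_h / capacity_veh_h

# illustrative peak-hour demand against an assumed approach capacity
peak_dos = degree_of_saturation(2505.0, 1500.0)      # 1.67: oversaturated
upgraded_dos = degree_of_saturation(1350.0, 1500.0)  # 0.90: within capacity
```

Framing it this way makes the paper's conclusion concrete: signalization helps by raising the capacity in the denominator, pulling the DOS back below 1.0 for the design year.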
Procedia PDF Downloads 381