Search results for: data envelopment analysis (DEA)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 42282

41142 A Narrative of Nationalism in Mainstream Media: The US, China, and COVID-19

Authors: Rachel Williams, Shiqi Yang

Abstract:

Our research explores the influence nationalism has had on US media coverage of the COVID-19 pandemic as it relates to China, through a qualitative analysis of two US news networks, Fox News and CNN. In total, the transcripts of sixteen videos uploaded on YouTube, each with more than 100,000 views, were gathered for data processing. Co-occurrence networks generated by KH Coder illuminate the themes and narratives underpinning the reports from Fox News and CNN. The results of in-depth keyword-based content analysis suggest that the pandemic has been framed in an ethnopopulist nationalist manner, although to varying degrees between networks. Specifically, the authors found that Fox News is more likely to report hypotheses or statements as fact, whereas CNN is more likely to quote data and statements from official institutions. Future research using more systematic and quantitative methods could examine how nationalist narratives have developed in China and in other US news coverage to expand on these findings.

Keywords: nationalism, media studies, US and China, COVID-19, social media, communication studies

Procedia PDF Downloads 59
41141 Association Rules Mining and NOSQL Oriented Document in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to the recent technology for manipulating voluminous and unstructured data sets drawn from multiple sources, and NoSQL databases have emerged to handle the problem of unstructured data. Association rules mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases, and the algorithm for finding association dependencies maps well onto MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NoSQL database. A comparative study evaluates the performance of our algorithm against the classical Apriori algorithm.
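
As a rough illustration of the approach described above, the sketch below counts frequent itemsets with an explicit map phase and reduce phase in plain Python; the toy transactions, the minimum support of 2, and the in-process chunking are assumptions, and a production setup would run these phases on Hadoop over a MongoDB document collection instead.

```python
from collections import Counter
from itertools import combinations

# Toy transactions standing in for documents in a NoSQL store (illustrative only).
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]
MIN_SUPPORT = 2  # assumed threshold

def map_phase(chunk, k):
    """Emit partial counts for every k-itemset found in one chunk of transactions."""
    counts = Counter()
    for basket in chunk:
        for itemset in combinations(sorted(basket), k):
            counts[itemset] += 1
    return counts

def reduce_phase(partial_counts):
    """Sum the partial counts from all mappers and keep only frequent itemsets."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return {itemset: c for itemset, c in total.items() if c >= MIN_SUPPORT}

# Split the data into two "mapper" chunks and combine their outputs.
chunks = [transactions[:2], transactions[2:]]
frequent_pairs = reduce_phase(map_phase(chunk, 2) for chunk in chunks)
print(frequent_pairs)
```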

Keywords: Apriori, Association rules mining, Big Data, Data Mining, Hadoop, MapReduce, MongoDB, NoSQL

Procedia PDF Downloads 163
41140 Sentiment Analysis on the East Timor Accession Process to the ASEAN

Authors: Marcelino Caetano Noronha, Vosco Pereira, Jose Soares Pinto, Ferdinando Da C. Saores

Abstract:

One particularly popular social media platform is YouTube, a video-sharing platform where users can submit videos and other users can like, dislike, or comment on them. In this study, we conduct a binary classification task on YouTube video comments and reviews from users regarding the accession process of Timor Leste to become the eleventh member of the Association of South East Asian Nations (ASEAN). We scrape the data directly from public YouTube videos and apply several pre-processing and weighting techniques. Before conducting the classification, we categorize the data into two classes, positive and negative. For classification, we apply the Support Vector Machine (SVM) algorithm. Compared with the Naïve Bayes algorithm, SVM achieved 84.1% accuracy, 94.5% precision, and 73.8% recall.
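
A minimal scikit-learn sketch of the kind of pipeline the abstract describes; the toy comments, the TF-IDF weighting, and the linear SVM kernel are assumptions, since the abstract does not state the exact pre-processing or kernel used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Illustrative comments and labels (1 = positive, 0 = negative).
comments = ["welcome to ASEAN", "great news for Timor Leste",
            "this is a bad idea", "not ready yet"]
labels = [1, 1, 0, 0]

X_tr, X_te, y_tr, y_te = train_test_split(
    comments, labels, test_size=0.5, random_state=0, stratify=labels)

for name, clf in [("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())]:
    model = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF weighting, then classifier
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), zero_division=0))
```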

Keywords: classification, YouTube, sentiment analysis, support vector machine

Procedia PDF Downloads 110
41139 Immunization Data Quality in Public Health Facilities in the Pastoralist Communities: A Comparative Study Evidence from Afar and Somali Regional States, Ethiopia

Authors: Melaku Tsehay

Abstract:

The Consortium of Christian Relief and Development Associations (CCRDA) and the CORE Group Polio Partners (CGPP) Secretariat have been working with the Global Alliance for Vaccines and Immunization (GAVI) to improve immunization data quality in the Afar and Somali Regional States. The main aim of this study was to compare the quality of immunization data before and after these interventions in health facilities in pastoralist communities in Ethiopia. To this end, a comparative cross-sectional study was conducted on 51 health facilities. The baseline data were collected in May 2019 and the endline data in August 2021. The WHO data quality self-assessment tool (DQS) was used to collect data. A significant improvement was seen in the accuracy of pentavalent vaccine (PT)1 data (p = 0.012) at the health posts (HP), and of PT3 (p = 0.010) and measles (p = 0.020) data at the health centers (HC). In addition, a highly significant improvement was observed in the accuracy of tetanus toxoid (TT)2 data at the HP (p < 0.001). The level of over- or under-reporting was found to be < 8% at the HP and < 10% at the HC for PT3. Data completeness also increased from 72.09% to 88.89% at the HC. Nearly 74% of the health facilities reported their immunization data on time, much better than the baseline (7.1%) (p < 0.001). These findings may provide hints for policies and programs targeting improvement of immunization data quality in pastoralist communities.
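
For reference, a DQS-style accuracy check is commonly summarized by a verification factor (recounted doses divided by reported doses) and an over/under-reporting percentage; the sketch below uses hypothetical figures, not the study's data.

```python
def verification_factor(recounted, reported):
    """DQS-style accuracy ratio: doses recounted from tally sheets / doses reported upward."""
    return recounted / reported

def over_under_reporting_pct(recounted, reported):
    """Positive = over-reporting, negative = under-reporting, relative to the recount."""
    return (reported - recounted) / recounted * 100

# Hypothetical PT3 figures for one health centre (illustrative only).
recounted, reported = 480, 520
print(f"verification factor: {verification_factor(recounted, reported):.2f}")
print(f"over/under-reporting: {over_under_reporting_pct(recounted, reported):+.1f}%")
```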

Keywords: data quality, immunization, verification factor, pastoralist region

Procedia PDF Downloads 125
41138 Behavioral Analysis of Stock Using Selective Indicators from Fundamental and Technical Analysis

Authors: Vish Putcha, Chandrasekhar Putcha, Siva Hari

Abstract:

In the current digital era of free trading and a pandemic-driven remote work culture, markets worldwide gained momentum as retail investors could trade easily from anywhere. The share of retail traders rose to 24% of the market from 15% at the pre-pandemic level. Most of them are young retail traders with a higher risk tolerance than the previous generation of retail traders. This trend boosted the growth of subscription-based market predictors and market data vendors. Young traders bet on these predictors, assuming one of them is correct; however, 90% of retail traders are on the losing end. This paper presents multiple indicators and attempts to derive behavioral patterns from the underlying stocks. The two major categories of indicators that traders and investors follow are technical and fundamental. The famous investor Warren Buffett adheres to the “Value Investing” method, which is based on a stock’s fundamental analysis. In this paper, we present multiple indicators from various methods to understand the behavior patterns of stocks. For this research, we picked five stocks with a market capitalization of more than $200M, listed on the exchange for more than 20 years, and from different industry sectors. To study the behavioral pattern over time for these five stocks, a total of 8 indicators are chosen from fundamental, technical, and financial indicators, such as price to earnings (P/E), price to book value (P/B), debt to equity (D/E), beta, volatility, relative strength index (RSI), moving averages, and dividend yield, followed by detailed mathematical analysis. This is an interdisciplinary paper spanning engineering, accounting, and finance. The research takes a new approach to identify clear indicators affecting stocks. Statistical analysis of the data will be performed in terms of the probabilistic distribution and then used to determine the probability of the stock price going over a specific target value. The chi-square test will be used to determine the validity of the assumed distribution. Preliminary results indicate that this approach is working well. When the complete results are presented in the final paper, they will be beneficial to the community.
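
A brief sketch of two of the listed indicators and the planned chi-square goodness-of-fit step, on synthetic prices; the 14-period RSI window, the 50-day moving average, the 10 histogram bins, and the normal reference distribution are assumptions, not the paper's settings.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(0, 1, 300).cumsum())        # synthetic daily closes

# 14-period RSI (the window length is an assumption; the paper does not state one).
delta = prices.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

sma_50 = prices.rolling(50).mean()                              # 50-day simple moving average

# Chi-square goodness of fit of daily returns against a fitted normal distribution.
returns = prices.pct_change().dropna()
observed, edges = np.histogram(returns, bins=10)
cdf = stats.norm.cdf(edges, loc=returns.mean(), scale=returns.std(ddof=1))
expected = np.diff(cdf) * observed.sum()
expected *= observed.sum() / expected.sum()                     # rescale so the totals match
chi2, p = stats.chisquare(observed, expected)
print(f"RSI={rsi.iloc[-1]:.1f}  SMA50={sma_50.iloc[-1]:.1f}  chi2={chi2:.2f}  p={p:.3f}")
```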

Keywords: stock pattern, stock market analysis, stock predictions, trading, investing, fundamental analysis, technical analysis, quantitative trading, financial analysis, behavioral analysis

Procedia PDF Downloads 87
41137 Investigating Breakdowns in Human Robot Interaction: A Conversation Analysis Guided Single Case Study of a Human-Robot Communication in a Museum Environment

Authors: B. Arend, P. Sunnen, P. Caire

Abstract:

In a single case study, we show how a conversation analysis (CA) approach can shed light on the sequential unfolding of human-robot interaction. Relying on video data, we show that CA allows us to investigate the respective turn-taking systems of humans and a NAO robot in their dialogical dynamics, thus pointing out relevant differences. Our fine-grained video analysis identifies breakdowns, and how they are overcome, when humans and a NAO robot engage in a multimodally uttered multi-party communication during a sports guessing game. Our findings suggest that interdisciplinary work opens up the opportunity to gain new insights into the challenging issues of human-robot communication in order to provide resources for developing mechanisms that enable complex human-robot interaction (HRI).

Keywords: human robot interaction, conversation analysis, dialogism, breakdown, museum

Procedia PDF Downloads 306
41136 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, much of it at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadatation of relevant sources (reconnaissance of official sources, administrative archives, and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information, and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities, and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of 6 thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: (i) integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for data ingestion of social and web information; (ii) reading and interpretation of data and metadata through guided navigation paths in the style of digital storytelling; and (iii) implementation of complex analysis capabilities through the use of data mining algorithms, such as those for the prediction of tourist flows.
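
A minimal sketch of the kind of automated web ingestion step mentioned in phases (3)-(4); the URL, the CSS selector, and the indicator are hypothetical, and real crawling of sources such as Booking.com or Tripadvisor must respect their terms of service.

```python
import requests
from bs4 import BeautifulSoup

def fetch_accommodation_names(url: str) -> list[str]:
    """Download a listing page and extract accommodation names (selector is hypothetical)."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select(".property-name")]

# Example indicator: number of accommodation facilities listed on a (hypothetical) page.
# names = fetch_accommodation_names("https://example.com/destination/listings")
# print("accommodation supply indicator:", len(names))
```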

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 122
41135 Tourism Satellite Account: Approach and Information System Development

Authors: Pappas Theodoros, Mihail Diakomihalis

Abstract:

Measuring the economic impact of tourism in a benchmark economy is a global concern, with previous measurements being partial and not fully integrated. Tourism is a phenomenon driven by the individual consumption of visitors, which should be observed and measured to reveal the overall contribution of tourism to an economy. The Tourism Satellite Account (TSA) is a critical tool for assessing the annual growth of tourism, providing reliable measurements. This article introduces a TSA information system that encompasses all TSA tasks, including data input, storage, management, and analysis, as well as additional future functions, and that enhances the efficiency of tourism data management and the utility of TSA compilation. The methodology and results presented offer insights into the development and implementation of the TSA.

Keywords: tourism satellite account, information system, data-based tourist account, relation database

Procedia PDF Downloads 88
41134 An Epidemiological Analysis of the Occurrence of Bovine Brucellosis and Adopted Control Measures in South Africa during the Period 2014 to 2019

Authors: Emily Simango, T. Chitura

Abstract:

Background: Bovine brucellosis is among the most neglected zoonotic diseases in developing countries, where it is endemic and a growing challenge to public health. The development of cost-effective control measures for the disease can only be affirmed by knowledge of the disease epidemiology and the ability to define its risk profiles. The aim of the study was to document the trend of bovine brucellosis and the control measures adopted following reported cases during the period 2014 to 2019 in South Africa. Methods: Data on confirmed cases of bovine brucellosis were retrieved from the website of the World Organisation for Animal Health (WOAH). Data were analysed using the Statistical Package for Social Sciences (IBM SPSS, 2022) version 29.0. Descriptive analysis (frequencies and percentages) and analysis of variance (ANOVA) were utilized, with statistical significance set at p < 0.05. Results: The data retrieved in our study revealed an overall average bovine brucellosis prevalence of 8.48. There were statistically significant differences in bovine brucellosis prevalence across the provinces for the years 2016 and 2019 (p < 0.05), with the Eastern Cape Province having the highest prevalence in both instances. Documented control measures for the disease were limited to the killing and disposal of diseased animals as well as vaccination of susceptible animals. Conclusion: Bovine brucellosis remains a real problem in South Africa, with risk profiles differing across the provinces. Information on brucellosis control measures in South Africa, as reported to the WOAH, is not comprehensive.

Keywords: zoonotic, endemic, Eastern Cape province, vaccination

Procedia PDF Downloads 68
41133 Percentile Norms of Heart Rate Variability (HRV) of Indian Sportspersons Withdrawn from Competitive Games and Sports

Authors: Pawan Kumar, Dhananjoy Shaw

Abstract:

Heart rate variability (HRV) is the physiological phenomenon of variation in the time interval between heartbeats and is alterable with fitness, age, and different medical conditions, including withdrawal/retirement from games/sports. The objectives of the study were to develop (a) percentile norms of HRV variables derived from time-domain analysis of Indian sportspersons withdrawn from competitive games/sports, pertaining to sympathetic and parasympathetic activity, and (b) percentile norms of HRV variables derived from frequency-domain analysis of the same population. The study was conducted on 430 males, aged 30 to 35 years and of the same socio-economic status. Data were collected using ECG polygraphs and processed using frequency-domain and time-domain analysis. Percentiles from one to one hundred were computed for the collected data. The findings showed the following percentile ranges for the time-domain HRV variables: NN50 count, 1 to 189; pNN50, 0.24 to 60.80; SDNN, 17.34 to 167.29; SDSD, 11.14 to 120.46; RMSSD, 11.19 to 120.24; and SDANN, 4.02 to 88.75. For the frequency-domain HRV variables, the percentile ranges were: low frequency (normalized power), 20.68 to 90.49; high frequency (normalized power), 14.37 to 81.60; LF/HF ratio, 0.26 to 9.52; LF (absolute power), 146.79 to 5669.33; HF (absolute power), 102.85 to 10735.71; and total power (absolute power), 471.45 to 25879.23. Conclusion: The analysis documented percentile norms for time-domain and frequency-domain HRV measures for versatile use and evaluation.
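
For reference, the time-domain metrics listed above can be computed from successive RR intervals as in the sketch below; the synthetic RR series is only an illustration, not the study's data.

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Time-domain HRV metrics from successive RR intervals (milliseconds)."""
    diffs = np.diff(rr_ms)
    nn50 = int(np.sum(np.abs(diffs) > 50))
    return {
        "SDNN": float(np.std(rr_ms, ddof=1)),            # overall variability
        "SDSD": float(np.std(diffs, ddof=1)),            # variability of successive differences
        "RMSSD": float(np.sqrt(np.mean(diffs ** 2))),    # short-term variability
        "NN50": nn50,
        "pNN50": 100.0 * nn50 / len(diffs),
    }

# Synthetic RR intervals (illustrative only); with one metrics dict per subject,
# percentile norms follow directly from np.percentile over the cohort.
rng = np.random.default_rng(0)
cohort = [time_domain_hrv(800 + 40 * rng.standard_normal(500)) for _ in range(430)]
sdnn = [m["SDNN"] for m in cohort]
print(np.percentile(sdnn, [1, 50, 100]).round(2))
```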

Keywords: RMSSD, Percentile, SDANN, HF, LF

Procedia PDF Downloads 420
41132 CoP-Networks: Virtual Spaces for New Faculty’s Professional Development in 21st Century Higher Education

Authors: Eman AbuKhousa, Marwan Z. Bataineh

Abstract:

The 21st century higher education landscape and globalization challenge new faculty members to build effective professional networks and partnerships with industry in order to accelerate their growth and success. This creates the need for community of practice (CoP)-oriented development approaches that focus on cognitive apprenticeship while considering individual predisposition and future career needs. This work adopts data mining, clustering analysis, and social networking technologies to present the CoP-Network as a virtual space that connects individuals with similar career aspirations, who are socially influenced to join and engage in a process of domain-related knowledge and practice acquisition. The CoP-Network model can be integrated into higher education to extend traditional graduate and professional development programs.

Keywords: clustering analysis, community of practice, data mining, higher education, new faculty challenges, social network, social influence, professional development

Procedia PDF Downloads 184
41131 A Data-Driven Monitoring Technique Using Combined Anomaly Detectors

Authors: Fouzi Harrou, Ying Sun, Sofiane Khadraoui

Abstract:

Anomaly detection based on Principal Component Analysis (PCA) has been studied intensively and largely applied to multivariate processes with highly cross-correlated process variables. Monitoring metrics such as Hotelling's T2 and the Q statistic are usually used in PCA-based monitoring to elucidate pattern variations in the principal and residual subspaces, respectively. However, these metrics are ill-suited to detecting small faults. In this paper, Exponentially Weighted Moving Average (EWMA) schemes based on the Q and T2 statistics, T2-EWMA and Q-EWMA, were developed for detecting faults in the process mean. The performance of the proposed methods was compared with that of the conventional PCA-based fault detection method using synthetic data. The results clearly show the benefit and effectiveness of the proposed methods over the conventional PCA method, especially for detecting small faults in highly correlated multivariate data.
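
A condensed sketch of the Q-EWMA idea on synthetic data: the Q (squared prediction error) statistic is computed in the PCA residual subspace and smoothed with an EWMA before being compared with a control limit. The smoothing factor of 0.2, the two retained components, and the empirical 99th-percentile limit are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
scores = rng.normal(size=(500, 2))
loadings = rng.normal(size=(2, 5))
X_train = scores @ loadings + 0.1 * rng.normal(size=(500, 5))   # highly correlated variables
X_test = X_train[:100].copy()
X_test[50:] += 0.3                                              # small mean shift (simulated fault)

pca = PCA(n_components=2).fit(X_train)

def q_statistic(model, X):
    """Squared prediction error (Q / SPE) in the PCA residual subspace."""
    residual = X - model.inverse_transform(model.transform(X))
    return np.sum(residual ** 2, axis=1)

def ewma(x, lam=0.2):
    """Exponentially weighted moving average; lam = 0.2 is an assumed smoothing factor."""
    z = np.empty_like(x, dtype=float)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    return z

limit = np.percentile(ewma(q_statistic(pca, X_train)), 99)      # empirical control limit
q_ewma = ewma(q_statistic(pca, X_test))
print("alarms at test samples:", np.where(q_ewma > limit)[0])
```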

Keywords: data-driven method, process control, anomaly detection, dimensionality reduction

Procedia PDF Downloads 299
41130 An Analysis of the Need of Training for Indian Textile Manufacturing Sector

Authors: Shipra Sharma, Jagat Jerath

Abstract:

Human resource training is an essential element of talent management in the current era of global competitiveness and dynamic trade in the manufacturing industry. Globally, India is behind only China as the largest textile manufacturer. The major challenges faced by the Indian textile manufacturing industry are low technology levels, growing skill gaps, an unorganized structure, lower efficiencies, etc., indicating the need for constant talent up-gradation. Assessment of training needs from a strategic perspective is an essential step in the formulation of effective training. The paper establishes the significance of training in the Indian textile industry and determines the training needs on the various parameters presented. The respondents were 40 HR personnel working in textile and apparel companies based in the industrial region of Punjab, India. The research tool used was a structured questionnaire on a five-point Likert scale. Statistical analysis through descriptive statistics and the chi-square test indicated an increased need for training whenever there were technical changes in the organizations. As per the data presented in this study, most of the HR personnel agreed that the variables associated with organizational analysis, task analysis, and individual analysis have a statistically significant role to play in determining the need for training in an organization.

Keywords: Indian textile manufacturing industry, significance of training, training needs analysis, parameters for training needs assessment

Procedia PDF Downloads 166
41129 Application of Stochastic Models to Annual Extreme Streamflow Data

Authors: Karim Hamidi Machekposhti, Hossein Sedghi

Abstract:

This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. The autoregressive integrated moving average (ARIMA) model was used to simulate these series and forecast them into the future. For the analysis, annual extreme streamflow data of the Jelogir Majin station (above the Karkheh dam reservoir) for the years 1958–2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the autocorrelation function (ACF) and partial autocorrelation function (PACF) plots of annual extreme streamflow was removed using first-order differencing (d = 1) prior to the development of the ARIMA model. The ARIMA(4,1,1) model developed was found to be the most suitable for simulating annual extreme streamflow for the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and for assisting decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
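
A minimal statsmodels sketch of fitting the ARIMA(4,1,1) order reported above and producing a ten-step forecast; the synthetic annual peak series stands in for the Jelogir Majin record, and the study itself used SAS and SPSS rather than Python.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic annual peak flows for 1958-2005 (48 values), standing in for the real record.
peaks = 500 + 0.1 * rng.normal(0, 80, 48).cumsum() + rng.normal(0, 60, 48)

model = ARIMA(peaks, order=(4, 1, 1)).fit()   # order taken from the abstract
forecast = model.forecast(steps=10)           # ten years ahead, as in the study
print(f"AIC = {model.aic:.1f}")
print(forecast.round(1))
```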

Keywords: stochastic models, ARIMA, extreme streamflow, Karkheh river

Procedia PDF Downloads 148
41128 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute issues of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crime against, and victimization of, both individuals and groups. A massive number of crimes occur every day, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the existing crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. Unlike Central Asia or the Asia-Pacific region, South Asia lacks a regional coordination mechanism to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism; for example, the Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship are to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a seven-year archive of crime statistics that was aggregated daily to produce a univariate dataset; a daily incidence-type aggregation was also performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models; a comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to other models, gated recurrent units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could give benefit to the police in predicting crime; hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
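
A minimal Keras sketch of a GRU forecaster for daily counts with a seven-day horizon, matching the forecast period described above; the synthetic Poisson series, the 28-day lookback window, the layer sizes, and the TensorFlow/Keras framework are assumptions.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
daily_counts = rng.poisson(20, 2000).astype("float32")   # synthetic daily crime counts

LOOKBACK, HORIZON = 28, 7   # assumed window; the study forecasts seven days ahead
X = np.stack([daily_counts[i:i + LOOKBACK]
              for i in range(len(daily_counts) - LOOKBACK - HORIZON)])
y = np.stack([daily_counts[i + LOOKBACK:i + LOOKBACK + HORIZON]
              for i in range(len(X))])
X = X[..., None]   # (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.Input(shape=(LOOKBACK, 1)),
    keras.layers.GRU(32),           # gated recurrent unit layer
    keras.layers.Dense(HORIZON),    # one output per forecast day
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0).round(1))
```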

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 166
41127 Identifying Critical Success Factors for Data Quality Management through a Delphi Study

Authors: Maria Paula Santos, Ana Lucas

Abstract:

Organizations support their operations and decision making on the data they have at their disposal, so the quality of these data is remarkably important, and data quality (DQ) is currently a relevant issue; the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts who ordered them according to their degree of importance, using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSF for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.

Keywords: critical success factors, data quality, data quality management, Delphi, Q-Sort

Procedia PDF Downloads 218
41126 A Psycholinguistic Analysis of John Nash’s Hallucinations as Represented in the Film “A Beautiful Mind”

Authors: Rizkia Shafarini

Abstract:

This study explores hallucination in the film A Beautiful Mind. A Beautiful Mind depicts the tale of John Nash, a university student who dislikes studying in class and prefers to study alone. Throughout his life, John Nash experiences hallucinations associated with schizophrenia, as depicted in the film. The goal of this study was to establish what the hallucinations were, what caused them, and how John Nash managed them. In general, this study examines the link between language and mind, that is, the linguistic relationship portrayed in John Nash's speech as evidenced by his conduct. This study takes a psycholinguistic approach to data analysis by employing qualitative methodologies. Data sources include dialogues and scenes from the film A Beautiful Mind. First, John Nash's hallucinations in the film take the form of hearing, seeing, and feeling. Second, dreams, aspirations, and sickness are the sources of his hallucinations. Third, John Nash manages his hallucinations by seeing a doctor, without medical or distracting assistance.

Keywords: A Beautiful Mind, hallucination, psycholinguistic, John Nash

Procedia PDF Downloads 175
41125 Social Norms around Adolescent Girls’ Marriage Practices in Ethiopia: A Qualitative Exploration

Authors: Dagmawit Tewahido

Abstract:

Purpose: This qualitative study was conducted to explore social norms around adolescent girls' marriage practices in West Hararghe, Ethiopia, where early marriage is prohibited by law. Methods: Twenty focus group discussions were conducted with married and unmarried adolescent girls, adolescent boys, and parents of girls, using locally developed vignettes. A total of 32 in-depth interviews were conducted with married and unmarried adolescent girls, husbands of adolescent girls, and mothers-in-law. Key informant interviews were conducted with 36 district officials. Data analysis was assisted by Open Code computer software. The Social Norms Analysis Plot (SNAP) framework developed by CARE guided the development and analysis of the vignettes. A thematic data analysis approach was utilized to summarize the data. Results: Early marriage is seen as a positive phenomenon in our study context, and girls who are not married by the perceived ideal age of 15 are socially sanctioned. They are particularly influenced by their peers to marry. Marrying early is considered a chance given by God and a symbol of good luck. The two common types of marriage are decided 1) by the adolescent girl and boy themselves without seeking parental permission ('Jalaa-deemaa', meaning 'to go along'), and 2) by merely informing the girl's parents ('Cabsaa', meaning 'to break the culture'). Relatives and marriage brokers also arrange early marriages. Girls usually accept the first marriage proposal regardless of their age. Parents generally tend not to oppose marriage arrangements chosen by their daughters. Conclusions: In the study context, social norms encourage early marriage despite the existence of a law prohibiting marriage before the age of eighteen years. Early marriage commonly happens through consensual arrangements between adolescent girls and boys. Interventions to reduce early marriage need to consider the influence of reference groups on the decision makers for marriages, especially girls' own peers.

Keywords: adolescent girls, social norms, early marriage, Ethiopia

Procedia PDF Downloads 141
41124 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers, and construction companies, as well as to policy makers and households. A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models; a special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is endogeneity bias in the estimates, and several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy; in the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of the housing price as the dependent variable and the first differences of per capita income, the interest rate, the housing stock, and the lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and compare them between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by a model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, indicating a good fit of the CCE estimator. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of the housing market are evolving over time.
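
A condensed sketch of the common correlated effects mean-group idea on a synthetic panel: each city's regression is augmented with cross-sectional averages of the dependent and independent variables to proxy common factors, and the city-level slopes are then averaged. The variable set is reduced to a single regressor for brevity; the study's specification also includes the interest rate, housing stock, lagged price, and the estimated long-run deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
cities, years = 50, 33
common = rng.normal(size=years).cumsum()                         # unobserved common factor
income = rng.normal(size=(cities, years)) + common               # Δ per-capita income (synthetic)
price = 0.5 * income + 0.8 * common + rng.normal(size=(cities, years))  # Δ house price

# Cross-sectional averages proxy the common factor (the CCE idea).
income_bar, price_bar = income.mean(axis=0), price.mean(axis=0)

betas = []
for i in range(cities):
    X = np.column_stack([np.ones(years), income[i], income_bar, price_bar])
    beta, *_ = np.linalg.lstsq(X, price[i], rcond=None)
    betas.append(beta[1])                                        # slope on the city's own income

print("CCE mean-group income coefficient:", round(float(np.mean(betas)), 3))
```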

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 252
41123 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector

Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar

Abstract:

Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector. However, as the usage of these distributed systems increases, many issues arise. The main focus of this paper is on security issues. Many security issues, particularly in information security, stem from the use of distributed systems in the healthcare industry. Personal data are especially sensitive in the healthcare industry: if important information gets leaked (e.g., identity card, credit card number, address), a person's identity, financial status, and safety might be compromised. This results in the responsible organization losing a lot of money in compensating these people and expending even more resources trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, a blockchain network is used to store the encryption key of the patient's data. The actual data are encrypted, and the resulting ciphertext is stored on a cloud storage platform. Furthermore, some issues remain to be emphasized and tackled for future improvements, such as proposing a multi-user scheme, addressing authentication issues, and migrating the backend processes onto the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and its read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient's data.
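
A minimal sketch of the encrypt-then-separate-storage idea described above, using symmetric Fernet encryption; the Python dictionaries merely stand in for the blockchain key registry and the cloud ciphertext store, and a real deployment would add access control, key management, and the smart-contract layer.

```python
from cryptography.fernet import Fernet

blockchain_ledger = {}   # stand-in for the on-chain key registry (illustrative only)
cloud_storage = {}       # stand-in for the off-chain ciphertext store

def store_record(patient_id: str, record: bytes) -> None:
    key = Fernet.generate_key()
    cloud_storage[patient_id] = Fernet(key).encrypt(record)   # ciphertext goes to the cloud
    blockchain_ledger[patient_id] = key                       # only the key is kept "on chain"

def read_record(patient_id: str) -> bytes:
    key = blockchain_ledger[patient_id]                       # access control would gate this step
    return Fernet(key).decrypt(cloud_storage[patient_id])

store_record("patient-001", b"blood type: O+; allergy: penicillin")
print(read_record("patient-001"))
```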

Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability

Procedia PDF Downloads 185
41122 The Effect of Group Counseling Program on 9th Grade Students' Assertiveness Levels

Authors: Ismail Seçer, Kerime Meryem Dereli̇oğlu

Abstract:

This study was conducted to determine the effects of a group counseling program on secondary school 9th grade students' assertiveness skills. The study group consisted of 100 students enrolled at Erzurum Kültür Elementary School in the 2015-2016 academic year. The RAE Rathus Assertiveness Schedule, developed by Voltan Acar, was administered to this group to gather data. Forty students with lower scores on the inventory were randomly divided into experimental and control groups, each consisting of 20 students. The group counseling program was carried out with the experimental group for 8 weeks to improve the students' assertiveness skills. One-way and two-way analysis of covariance (ANCOVA) were used to analyze the data in SPSS 19.0. The results of the study show that the assertiveness skills of the students who participated in the group counseling program increased significantly compared with the control group and with the pre-test. Moreover, the change observed in the experimental group occurred independently of the age and socio-economic level variables, and a follow-up test applied after four months showed that this effect persisted. According to this result, it can be said that the applied group counseling program is an effective means of improving the assertiveness skills of secondary school students.

Keywords: high school, assertiveness, assertiveness inventory, assertiveness education

Procedia PDF Downloads 246
41121 FLEX: A Backdoor Detection and Elimination Method in Federated Scenario

Authors: Shuqi Zhang

Abstract:

Federated learning allows users to participate in collaborative model training without sending data to third-party servers, reducing the risk of user data privacy leakage, and is widely used in smart finance and smart healthcare. However, the distributed architecture of federated learning itself and the existence of secure aggregation protocols make it inherently vulnerable to backdoor attacks. To address this problem, FLEX, a federated learning backdoor defense framework based on group aggregation, cluster analysis, and neuron pruning, is proposed, and compatibility with secure aggregation protocols is achieved. The good performance of FLEX is verified experimentally by building a horizontal federated learning framework on the CIFAR-10 dataset: it achieves a 98% backdoor detection rate and reduces the success rate of backdoor tasks to 0%-10%.
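
The abstract does not give FLEX's algorithmic details, so as a generic illustration of the cluster-analysis step, the sketch below groups flattened client updates with k-means and flags the minority cluster as suspicious; the two-cluster choice, the synthetic updates, and the minority heuristic are assumptions, not FLEX itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.05, size=(18, 100))       # synthetic flattened benign updates
poisoned = rng.normal(0.6, 0.05, size=(2, 100))      # synthetic backdoored updates
updates = np.vstack([benign, poisoned])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
minority = int(np.argmin(np.bincount(labels)))       # assume the smaller cluster is suspicious
suspects = np.where(labels == minority)[0]
print("clients flagged for pruning/exclusion:", suspects)
```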

Keywords: federated learning, secure aggregation, backdoor attack, cluster analysis, neuron pruning

Procedia PDF Downloads 96
41120 Modified InVEST for Whatsapp Messages Forensic Triage and Search through Visualization

Authors: Agria Rhamdhan

Abstract:

WhatsApp, as the most popular mobile messaging app, has been used as evidence in many criminal cases. As the use of mobile messaging generates large amounts of data, forensic investigation faces the challenge of large data volumes. The hardest part of finding this important evidence is that current practice relies on tools and techniques that require manual analysis to check all messages; analyzing large sets of mobile messaging data in this way takes a lot of time and effort. Our work offers a methodology based on forensic triage to reduce large data volumes to manageable sets, making detailed review easier, and then presents the results through interactive visualization that highlights important terms, entities, and relationships via intelligent ranking using Term Frequency-Inverse Document Frequency (TF-IDF) and the Latent Dirichlet Allocation (LDA) model. By implementing this methodology, investigators can improve investigation processing time and the accuracy of results.
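
A minimal scikit-learn sketch of the TF-IDF ranking and LDA topic step on toy messages; the two-topic setting and the English stop-word list are assumptions, and a real triage pipeline would feed in the extracted WhatsApp message corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = [
    "meet me at the warehouse tonight",
    "transfer the money before friday",
    "happy birthday, see you at dinner",
    "send the payment details again",
]  # toy chat messages (illustrative only)

# TF-IDF ranking of terms within each message.
tfidf = TfidfVectorizer(stop_words="english")
dense = tfidf.fit_transform(messages).toarray()
terms = tfidf.get_feature_names_out()
top_term = [terms[row.argmax()] for row in dense]

# LDA topics over raw term counts (two topics is an assumed setting).
counts = CountVectorizer(stop_words="english").fit_transform(messages)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

print("top term per message:", top_term)
print("document-topic mix:\n", lda.transform(counts).round(2))
```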

Keywords: forensics, triage, visualization, WhatsApp

Procedia PDF Downloads 171
41119 Use of Artificial Intelligence in Teaching Practices: A Meta-Analysis

Authors: Azmat Farooq Ahmad Khurram, Sadaf Aslam

Abstract:

This meta-analysis systematically examines the use of artificial intelligence (AI) in instructional methods across diverse educational settings through a thorough analysis of empirical research encompassing various disciplines, educational levels, and regions. This study aims to assess the effects of AI integration on teaching methodologies, classroom dynamics, teachers' roles, and student engagement. Various research methods were used to gather data, including literature reviews, surveys, interviews, and focus group discussions. Findings indicate paradigm shifts in teaching and education, identify emerging trends, practices, and the application of artificial intelligence in learning, and provide educators, policymakers, and stakeholders with guidelines and recommendations for effectively integrating AI in educational contexts. The study concludes by suggesting future research directions and practical considerations for maximizing AI's positive influence on pedagogical practices.

Keywords: artificial intelligence, teaching practices, meta-analysis, teaching-learning

Procedia PDF Downloads 78
41118 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos

Abstract:

Owing to the high reliability reached by DNA tests, since the 1980s this kind of test has allowed a growing number of criminal cases to be solved, including old, previously unsolved cases that now have a chance of being resolved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies capable of organizing the workflow and minimizing the time spent on both biological sample processing and genetic profile analysis, using software tools. Thus, the present work aims at the development of a software system for forensic genetics laboratories that allows sample, criminal case, and local database management, minimizing the time spent in the workflow and helping to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows, and the requirements of the system have been considered. The system uses HTML, CSS, and JavaScript as web technologies, with Node.js as the server platform, which is highly efficient for data input and output. In addition, the data are stored in a relational database (MySQL), which is free, favoring user acceptance. The software system developed here makes the workflow and sample analysis more agile, contributing to the rapid insertion of genetic profiles into the national database and to increased crime resolution. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.

Keywords: database, forensic genetics, genetic analysis, sample management, software solution

Procedia PDF Downloads 370
41117 Stress Concentration Trend for Combined Loading Conditions

Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo

Abstract:

Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component, and they create larger stresses within the part. This maximum stress is difficult to determine, as it occurs directly at the point of minimum area, and strain gauges have yet to be developed that can analyze stresses over such minute areas. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in the determination of the maximum stress a part can withstand; these graphs were developed from historical experimental data. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA finite element analysis software. The results of this analysis will be validated through further testing: the 3D modeled parts will be subjected to further finite element analysis using Patran-Nastran software, and the finite element models will then be verified by testing physical specimens using a tensile testing machine. Once the data are validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
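
For reference, the relation the charts encode is the standard definition of the theoretical stress concentration factor:

```latex
\sigma_{\max} = K_t \, \sigma_{\mathrm{nom}}, \qquad K_t = \frac{\sigma_{\max}}{\sigma_{\mathrm{nom}}}
```

For example, the classical textbook case of a small circular hole in a wide plate under uniaxial tension gives K_t ≈ 3, so the peak stress at the hole edge is roughly three times the nominal stress; factors for the combined loading conditions studied here must instead be read from the verified graph.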

Keywords: stress concentration, finite element analysis, finite element models, combined loading

Procedia PDF Downloads 444
41116 Study of Components and Effective Factors on Organizational Commitment of Khoramabad Branch Islamic Azad University’s Faculty Members

Authors: Mehry Daraei

Abstract:

The goal of this study was to survey the components of and factors affecting the organizational commitment of Islamic Azad University Khoramabad Branch's faculty members. The research method was correlational, using causal modeling, and data were gathered by questionnaire. The statistical population consisted of 147 faculty members of Islamic Azad University Khoramabad Branch; the sample size was determined as 106 persons using Morgan's table, selected by class sampling. Correlation tests, one-sample t-tests, and path analysis were used to analyze the data in LISREL. The results showed that organizational corporate was the element with the strongest effect on organizational commitment (OC), and that organizational corporate, work experience, and organizational justice were related to OC only directly. Job security had both direct and indirect effects on OC, its indirect effect operating through gender. Gender also had both direct and indirect effects on OC, its indirect effect operating through organizational corporate. Job opportunities outside the university likewise had direct and indirect effects on OC, with the indirect effect operating through organizational corporate.

Keywords: organization, commitment, job security, Islamic Azad University

Procedia PDF Downloads 323
41115 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. The presence of too many features is known not only to slow down algorithms but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios; their impact on model accuracy is captured using the coefficient of determination (R²) and the root mean square error (RMSE).
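
As a simplified, single-pass illustration of Boruta's shadow-feature idea (the real algorithm iterates this comparison with a statistical test, and the housing data are replaced here by a synthetic stand-in), each feature competes against a shuffled copy of itself, and only features more important than the best shadow are kept before fitting the predictive random forest:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 79-feature housing data (10 of 79 features are informative).
X, y = make_regression(n_samples=500, n_features=79, n_informative=10, noise=10, random_state=0)

rng = np.random.default_rng(0)
shadows = rng.permuted(X, axis=0)                  # independently shuffled "shadow" copy of each feature
rf = RandomForestRegressor(n_estimators=200, random_state=0, n_jobs=-1)
rf.fit(np.hstack([X, shadows]), y)

real_imp = rf.feature_importances_[: X.shape[1]]
shadow_max = rf.feature_importances_[X.shape[1]:].max()
confirmed = real_imp > shadow_max                  # one-pass approximation of Boruta's test

X_tr, X_te, y_tr, y_te = train_test_split(X[:, confirmed], y, test_size=0.3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(int(confirmed.sum()), "confirmed features,",
      "R2 =", round(r2_score(y_te, pred), 3),
      "RMSE =", round(mean_squared_error(y_te, pred) ** 0.5, 2))
```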

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 324
41114 Analyzing the Sensation of Jogja Kembali Monument (Monjali): Case Study of Yogyakarta as the Implementation of Attraction Tour

Authors: Hutomo Abdurrohman, Muhammad Latief, Waridatun Nida, Ranta Dwi Irawati

Abstract:

Yogyakarta Kembali Monument (Monjali) is one of the most popular tourist attractions in Yogyakarta. Yogyakarta is known as the 'Student City', and Monjali is the right place to learn more about and explore Yogyakarta, especially for elementary and junior high school students on study tours. Monjali is located on the North Ringroad, Jongkang, Sariharjo village, Ngaglik Subdistrict, Sleman Regency, Yogyakarta. Monjali offers many historical replicas, together with the stories behind them, about the war between Indonesia's fighters, the TNI (Indonesian National Army), and the Dutch colonizers in Yogyakarta on March 1st, 1949, an event that opened the eyes of the whole of Indonesia because, at that time, the TNI was being displaced by the invaders. This research is an effort to evaluate visitors' interest in Monjali as a special tourist attraction. The respondents were Monjali visitors up to 17 years old, with one respondent taken from every 15 visitors; 200 respondents were needed to assess the condition and facilities of Monjali. Data were collected from January 2017 to October 2017 through interviews and a questionnaire whose validity and reliability had been tested. The data analysis is descriptive statistical analysis, with the qualitative data converted into quantitative data using a Likert scale. The results of this research show that the interest of Monjali's visitors is high, at 75.6%. Based on this result, Monjali is an attraction that continually improves and develops, and it successfully combines entertainment with education, in line with the vision of Yogyakarta as a Student City.

Keywords: descriptive statistical analysis, Jogja Kembali monument, Likert scale, sensation

Procedia PDF Downloads 190
41113 Seismic Interpretation and Petrophysical Evaluation of SM Field, Libya

Authors: Abdalla Abdelnabi, Yousf Abushalah

Abstract:

The G Formation is a major gas-producing reservoir in the SM Field, eastern Libya. It is called the G limestone because it consists of shallow marine limestone. Well data and 3D seismic data, in conjunction with the results of a previous study, were used to delineate the hydrocarbon reservoir of the Middle Eocene G Formation in the SM Field area. The data include three-dimensional seismic data acquired in 2009, covering approximately 75 mi², with more than 9 wells penetrating the reservoir. The seismic data are used to identify stratigraphic and structural features, such as channels and faults, which may play a significant role in hydrocarbon trapping. The well data are used for the petrophysical analysis of the SM Field. The average porosity of the Middle Eocene G Formation is very good, reaching 24%, especially around well W6. Average water saturation was calculated for each well from the porosity and resistivity logs using Archie's formula; the average water saturation across the wells is 25%. Structural mapping of the top and bottom of the Middle Eocene G Formation revealed that the highest area in the SM Field is at 4800 ft subsea, around wells W4, W5, W6, and W7, and the deepest point is at 4950 ft subsea. Correlation between wells, using well data and structural maps created from the seismic data, revealed that the net thickness of the G Formation ranges from 0 ft in the northern part of the field to 235 ft in the southwestern and southern parts. The gas-water contact is found at 4860 ft using the resistivity log. The net isopach map, using both the trapezoidal and pyramid rules, is used to calculate the total bulk volume. The original gas in place and the recoverable gas were calculated volumetrically to be 890 billion standard cubic feet (BSCF) and 630 BSCF, respectively.
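
For reference, the two calculations named above, Archie's water saturation and the volumetric gas in place, take the standard forms sketched below; the Archie constants (a = 1, m = n = 2), the gas formation volume factor, and the area and net-pay inputs are illustrative assumptions, not the SM Field parameters reported in the abstract.

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a*Rw) / (phi^m * Rt))^(1/n)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

def gas_in_place_scf(area_acres, net_pay_ft, phi, sw, bg_rcf_per_scf):
    """Volumetric gas in place in standard cubic feet: G = 43,560*A*h*phi*(1-Sw)/Bg."""
    return 43560.0 * area_acres * net_pay_ft * phi * (1.0 - sw) / bg_rcf_per_scf

# Illustrative inputs only -- not the SM Field values reported in the abstract.
sw = archie_sw(rw=0.15, rt=40.0, phi=0.24)
g = gas_in_place_scf(area_acres=5000, net_pay_ft=100, phi=0.24, sw=sw, bg_rcf_per_scf=0.004)
print(f"Sw = {sw:.2f}, gas in place = {g / 1e9:.0f} BSCF")
```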

Keywords: 3D seismic data, well logging, petrel, kingdom suite

Procedia PDF Downloads 151