Search results for: statistical hypothesis testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7595

7415 The Impact of Artificial Intelligence on Quality Control and Quality

Authors: Mary Moner Botros Fanawel

Abstract:

Many companies use statistical quality control, a set of statistical tools whose implementation can be costly for the companies that adopt them. Evaluating the quality of products and services is an important topic, but reducing the cost of implementing statistical quality control also brings important benefits. For this reason, it is important to implement an economic design for the various steps included in statistical quality control. In this paper, we describe some relevant aspects of the economic design of a quality control chart for the proportion of defective items; they are very important because the suggested issues can reduce the cost of implementing such a chart. Note that the main purpose of this chart is to evaluate and control the proportion of defective items of a production process.
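
To make the chart concrete, here is a minimal Python sketch of the control limits a p-chart uses for the proportion of defective items; the sample size and defect counts are synthetic assumptions, not data from the paper.

```python
# Minimal sketch of the p-chart the abstract targets: control limits for the
# proportion of defective items. Sample size and defect counts are synthetic.
import numpy as np

n = 200                                   # items inspected per sample (assumed)
defectives = np.array([9, 12, 7, 15, 10, 8, 11, 13, 6, 14])
p = defectives / n                        # proportion defective per sample
p_bar = p.mean()                          # centre line

sigma = np.sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error
ucl, lcl = p_bar + 3 * sigma, max(p_bar - 3 * sigma, 0.0)
print(f"CL = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
print("out-of-control samples:", np.where((p > ucl) | (p < lcl))[0])
```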

Keywords: model predictive control, hierarchical control structure, genetic algorithm, water quality with DBPs objectives, proportion, type I error, economic plan, distribution function, bootstrap control limit, p-value method, out-of-control signals, p-value, quality characteristics

Procedia PDF Downloads 17
7414 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important - many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance with artificial intelligence (AI), especially in the stock market, is a relatively recent development. Predicting how stocks will do considering all external factors and previous data has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural network-based approaches, among other methods; it provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. For the accuracy of the testing set, four models - linear regression, neural network, decision tree, and naïve Bayes - were compared on several stocks: Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set had similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which makes sense; it indicates that the decision tree model likely overfitted the training set and carried that overfitting into the testing set.
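
As an illustration of the comparison the abstract reports, here is a minimal sketch under stated assumptions: synthetic OHLC prices stand in for the real stock data, the feature choices are illustrative, and LogisticRegression stands in for the paper's linear regression, since classification accuracy of next-day direction is the metric being reported.

```python
# Minimal sketch: comparing the four model families named in the abstract on
# next-day price direction. Data and features are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
close = 150 + np.cumsum(rng.normal(0, 1, 500))        # synthetic price path
df = pd.DataFrame({"open": close + rng.normal(0, 0.5, 500),
                   "high": close + abs(rng.normal(0, 1, 500)),
                   "low": close - abs(rng.normal(0, 1, 500)),
                   "close": close})

X = df[["open", "high", "low", "close"]].values[:-1]
y = (df["close"].diff().shift(-1).dropna() > 0).astype(int).values  # next-day up/down

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
models = {"linear (logistic) regression": LogisticRegression(max_iter=1000),
          "neural network": MLPClassifier(max_iter=1000),
          "decision tree": DecisionTreeClassifier(),
          "naive Bayes": GaussianNB()}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: train={accuracy_score(y_tr, model.predict(X_tr)):.3f}, "
          f"test={accuracy_score(y_te, model.predict(X_te)):.3f}")
```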

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 59
7413 Testing Method of Soil Failure Pattern of Sand Type as an Effort to Minimize the Impact of the Earthquake

Authors: Luthfi Assholam Solamat

Abstract:

Nowadays, many people do not know the soil failure pattern, even though it is an important part of planning the substructure for the loading that occurs. This is because the soil is located under the foundation, so it cannot be seen directly. From this, the idea arose to carry out a study testing the soil failure pattern, especially for sandy soil under the foundation. This is necessary for the design of building structures, since the foundation is the initial part of the structure that meets the waves and vibrations during an earthquake; if the substructure is not strong, the building above it is feared to be more vulnerable to the risk of damage. This research focuses on finding the soil failure pattern that is most applicable in the field, using periodic re-testing under load over a particular time with the help of integrated visual video observations. The results could be useful for planning the substructure so that the superstructure carries minimal risk from an earthquake.

Keywords: soil failure pattern, earthquake, under structure, sand soil testing method

Procedia PDF Downloads 328
7412 Combining Experiments and Surveys to Understand the Pinterest User Experience

Authors: Jolie M. Martin

Abstract:

Running experiments while logging detailed user actions has become the standard way of testing product features at Pinterest, as at many other Internet companies. While this technique offers plenty of statistical power to assess the effects of product changes on behavioral metrics, it does not often give us much insight into why users respond the way they do. By combining at-scale experiments with smaller surveys of users in each experimental condition, we have developed a unique approach for measuring the impact of our product and communication treatments on user sentiment, attitudes, and comprehension.

Keywords: experiments, methodology, surveys, user experience

Procedia PDF Downloads 290
7411 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications: A Software Testing Approach

Authors: Theertha Chandroth

Abstract:

This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore various techniques and methodologies for validating the comparison and integration of JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
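
Here is a minimal sketch of one such check, assuming flat records and Python's standard-library parsers; the field names and record structure are illustrative, not from the paper.

```python
# Minimal sketch of a JSON-vs-XML check: parse both formats into plain Python
# dicts and assert they carry the same data. Field names are assumptions.
import json
import xml.etree.ElementTree as ET

json_doc = '{"user": {"id": "42", "name": "Ada"}}'
xml_doc = "<user><id>42</id><name>Ada</name></user>"

def xml_to_dict(element):
    """Convert a flat XML element to a dict of tag -> text."""
    return {child.tag: child.text for child in element}

json_data = json.loads(json_doc)["user"]
xml_data = xml_to_dict(ET.fromstring(xml_doc))

# A test assertion a tester might write: both representations must match.
assert json_data == xml_data, f"mismatch: {json_data} != {xml_data}"
print("JSON and XML representations match")
```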

Keywords: XML, JSON, data comparison, integration testing, Python, SQL

Procedia PDF Downloads 85
7410 Early Diagnosis of Alzheimer's Disease Using a Combination of Image Processing and Brain Signals

Authors: E. Irankhah, M. Zarif, E. Mazrooei Rad, K. Ghandehari

Abstract:

Alzheimer's prevalence is on the rise, and the disease comes with problems such as the cessation of treatment, the high cost of treatment, and the lack of early detection methods. The pathology of this disease causes the formation of protein deposits, called amyloid plaques, in the brains of patients. Generally, the disease is diagnosed by performing tests such as cerebrospinal fluid analysis, CT scans, MRI, mental status tests, and eye-tracking tests. In this paper, we used the Medial Temporal Atrophy (MTA) method and the Leave One Out (LOO) cycle to extract the statistical properties of the Fz, Pz, and Cz channels of ERP signals for early diagnosis of this disease. In processing the CT scan images, the accuracy of the results is 81% for the healthy person and 88% for the severe patient. After processing the ERP signals, the accuracy of the results for a healthy person is 81% in the delta band of the Cz channel and 90% in the alpha band of the Pz channel. For the severe patient, the signal-processing results were 89% in the delta band of the Cz channel and 92% in the alpha band of the Pz channel.
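
For illustration, here is a minimal sketch of the LOO validation cycle the abstract mentions; the random feature vectors and the SVM classifier are illustrative assumptions standing in for the paper's ERP statistics and classifier.

```python
# Minimal sketch of a Leave One Out (LOO) cycle over feature vectors extracted
# from ERP channels. Features and labels here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))      # e.g. statistical features of Fz, Pz, Cz signals
y = rng.integers(0, 2, size=40)   # 0 = healthy, 1 = patient (synthetic labels)

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC().fit(X[train_idx], y[train_idx])  # retrain with one subject held out
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
print(f"LOO accuracy: {correct / len(X):.2%}")
```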

Keywords: Alzheimer's disease, image and signal processing, LOO cycle, medial temporal atrophy

Procedia PDF Downloads 158
7409 Influence of Procrastination on Academic Achievement of Students in Tertiary Institutions in Kwara State, Nigeria

Authors: Usman Tunde Saadu, Adedayo Adesokan, Raseed Adewale Hamsat

Abstract:

This study examined the influence of procrastination on the academic achievement of students in tertiary institutions in Kwara State, Nigeria. A descriptive survey design was adopted, and a total of 300 respondents participated in the study. Stratified and simple random sampling techniques were used to select 3 institutions and 30 departments, respectively, and a systematic sampling technique was used to select 10 final-year students in each department. Two instruments were used to obtain data from the respondents: the Procrastination Assessment Scale adapted from Solomon and Rothblum (1984) and a proforma designed by the researchers to obtain students' CGPA in the 2013/2014 academic session. A reliability score of 0.80 was obtained for the instrument using the split-half method. One research question and one hypothesis were postulated for this study. Percentages were employed to answer the research question, while the research hypothesis was tested with a t-test at the 0.05 level of significance. The findings revealed that most final-year students in tertiary institutions in Kwara State procrastinated: 82.3% engaged in procrastination, while 17.7% did not. The study also revealed a significant difference between the academic achievement of tertiary institution students who procrastinate and those who do not (calculated t-value = 2.634 > critical t-value = 1.960); students who did not procrastinate achieved better academically than students who did. Based on these findings, the following recommendations were made: procrastination, as a concept, should be taught at the various institutions so that students understand what the concept is about, and guidance counsellors and educational psychologists should be employed at the various institutions to handle students who procrastinate, so that appropriate methods can be recommended to solve the problem.
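
Here is a minimal sketch of the hypothesis test described: an independent-samples t-test comparing the CGPAs of procrastinators and non-procrastinators at the 0.05 level. The CGPA values are synthetic; the study's raw data is not published here.

```python
# Minimal sketch of the t-test at alpha = 0.05. Group sizes mirror the reported
# 82.3% / 17.7% split of 300 respondents; CGPA values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
procrastinators = rng.normal(2.8, 0.5, size=247)      # ~82.3% of 300
non_procrastinators = rng.normal(3.1, 0.5, size=53)   # ~17.7% of 300

t_stat, p_value = stats.ttest_ind(non_procrastinators, procrastinators)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: academic achievement differs between the two groups")
```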

Keywords: academic, achievement, procrastination, institution

Procedia PDF Downloads 410
7408 Paediatric Motor Difficulties and Internalising Problems: An Integrative Review on the Environmental Stress Hypothesis

Authors: Noah Erskine, Jaime Barratt, John Cairney

Abstract:

The current study aims to provide an in-depth analysis and extension of the Environmental Stress Hypothesis (ESH) framework, focusing on the complex interplay between poor motor skills and internalising problems like anxiety and depression. Using an integrative research review methodology, this study synthesizes findings from 38 articles, both empirical and theoretical, building upon the foundational work of the model. The hypothesis posits that poor motor skills serve as a primary stressor, leading to internalising problems through various secondary stressors. A rigorous comparison of data was conducted, considering study design, findings, and methodologies, while giving special attention to variables such as age, sex, and comorbidities. The study also enhances the ESH framework by introducing resource buffers, including optimism and familial support, as additional influencing factors. This multi-level approach yields a more nuanced and comprehensive ESH framework, highlighting the need for future studies to consider intersectional variables and how they may vary across different life stages.

Keywords: motor coordination, mental health, developmental coordination disorders, paediatric comorbidities, obesity, peer problems

Procedia PDF Downloads 44
7407 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm

Authors: Anuradha Chug, Sunali Gandhi

Abstract:

Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Because there are insufficient resources to re-execute all the test cases in a time-constrained environment, efforts are ongoing to generate test data automatically, without human effort. Many search-based techniques have been proposed to generate efficient, effective, and optimized test data so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural foraging behavior of bats, the current study employed a meta-heuristic, search-based bat algorithm for optimizing the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied that can effectively filter out redundant test data. As many as 50 Java programs were used to check the effectiveness of the proposed test data generation, and it was found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code for testing. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing (HC), and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage for testing with a minimum number of test cases.
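
For illustration, here is a minimal, generic bat-algorithm sketch minimizing a stand-in fitness function; the sphere function, parameter values, and search bounds are assumptions, not the paper's actual test-coverage objective.

```python
# Minimal sketch of the bat algorithm: frequency-tuned velocity updates toward
# the best solution, a loudness-gated acceptance rule, and a pulse-rate-gated
# local walk. All parameter values are illustrative assumptions.
import numpy as np

def bat_algorithm(fitness, dim=2, n_bats=20, iters=100,
                  f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5):
    rng = np.random.default_rng(42)
    x = rng.uniform(-5, 5, (n_bats, dim))   # bat positions (candidate solutions)
    v = np.zeros((n_bats, dim))             # bat velocities
    best = x[np.argmin([fitness(b) for b in x])].copy()
    for _ in range(iters):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            candidate = x[i] + v[i]
            if rng.random() > pulse_rate:            # local search around best
                candidate = best + 0.01 * rng.normal(size=dim)
            if fitness(candidate) < fitness(x[i]) and rng.random() < loudness:
                x[i] = candidate
            if fitness(x[i]) < fitness(best):
                best = x[i].copy()
    return best

# Sphere function as a stand-in for a test-data fitness measure (e.g. coverage).
print(bat_algorithm(lambda s: float(np.sum(s ** 2))))
```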

Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm

Procedia PDF Downloads 337
7406 Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering

Authors: Yunus Doğan, Ahmet Durap

Abstract:

Coastal regions are among the places most heavily used by both the natural balance and the growing population. In coastal engineering, the most valuable data concerns wave behavior, and the amount of this data becomes very big because observations take place over periods of hours, days, and months. In this study, statistical methods such as wave spectrum analysis methods and standard statistical methods have been used. The goal of this study is to discover profiles of different coastal areas by using these statistical methods and thus to obtain an instance-based data set from the big data for analysis with data mining algorithms. In the experimental studies, six sample data sets about wave behavior, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, and different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be used in other branches that collect big data, such as medicine.
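
Here is a minimal sketch of the summarize-then-cluster idea, under stated assumptions: synthetic wave-height series stand in for the Mersin Bay observations, and the summary features and cluster count are illustrative.

```python
# Minimal sketch: reduce raw wave observations to instance-based statistical
# summaries, then cluster the instances to find similar coastal places.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Six synthetic "data sets" of 20-minute wave observations (wave heights in m).
raw_sets = [rng.gamma(2.0, 0.5 + 0.1 * i, size=600) for i in range(6)]

# Summarize each big raw series into one instance: mean, std, max.
instances = np.array([[s.mean(), s.std(), s.max()] for s in raw_sets])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(instances)
print("cluster label per coastal observation set:", labels)
```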

Keywords: clustering algorithms, coastal engineering, data mining, data summarization, statistical methods

Procedia PDF Downloads 337
7405 Investigating the Relationship between Growth, Beta and Liquidity

Authors: Zahra Amirhosseini, Mahtab Nameni

Abstract:

The aim of this study was to investigate the relationship between growth, beta, and a company's cash holdings. We treat cash as the dependent variable and growth opportunity and beta as independent variables. The study is based on an analysis of panel data. The population of the study is the companies listed on the Tehran Stock Exchange; the financial data of 215 companies during the period 2010 to 2015 were selected as the sample through systematic sampling. The results for the first hypothesis showed that there is a significant relationship between growth opportunities and cash holdings. Also, according to the analysis done for the second hypothesis, we determined that there is an inverse relationship between company risk and cash holdings.

Keywords: growth, beta, liquidity, company

Procedia PDF Downloads 363
7404 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability, and it also showcases how successful an organization is in holding on to its customers. It is an experimentally proven fact that the lion's share of profit always comes from existing customers, so seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper focuses on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. It also calls out some best practices and common accelerator implementation ideas that are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided.

Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, being prepared to handle renewal issues in production after time travel testing is done, and, most importantly, planning for automation testing during time travel testing.

Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section talks about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective.

Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section talks about the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule.

Along with the above-mentioned items, the whitepaper elaborates on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the real-world experience the author had with time travel testing. While actual customer names and program-related details are not disclosed, the paper highlights the key learnings that will help other teams implement time travel testing successfully.

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 125
7403 A Ground Observation Based Climatology of Winter Fog: Study over the Indo-Gangetic Plains, India

Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva

Abstract:

Every year, fog formation over the Indo-Gangetic Plains (IGP) of India during the winter months of December and January is believed to create numerous hazards, inconvenience, and economic loss for the inhabitants of this densely populated region of the Indian subcontinent. The aim of this paper is to analyze the spatial and temporal variability of winter fog over the IGP. Long-term ground observations of visibility and other meteorological parameters (1971-2010) have been analyzed to understand the fog phenomenon and its relevance during the peak winter months of December and January over the IGP of India. In order to examine the temporal variability, time series and trend analyses were carried out using the Mann-Kendall statistical test. The trend analysis rejects the null hypothesis at the 95% confidence level, indicating that a trend exists, and Kendall's tau statistic showed a positive correlation between time and fog frequency. Further, the Theil-Sen median slope estimate showed that the magnitude of the trend is positive; the magnitude is higher in January than in December for the entire IGP, except over the western IGP, where it is higher in December. Decade-wise time series analysis revealed a continuous increase in fog days, with a net overall increase of 99% observed over the IGP in the last four decades. Diurnal variability and average daily persistence were computed using descriptive statistical techniques, and a geo-statistical analysis of fog was carried out to understand its spatial variability. This analysis revealed that the IGP is a highly fog-prone zone, with fog occurring on more than 66% of days during the study period. The diurnal variability indicates that the peak occurrence of fog is between 06:00 and 10:00 local time, and average daily fog persistence extends to 5 to 7 hours during the peak winter season. The results offer a new perspective for taking proactive measures to reduce the irreparable damage that could be caused by the changing trends of fog.
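
For illustration, here is a minimal sketch of the Mann-Kendall trend test and Sen's slope estimator applied to a synthetic annual fog-day series; the real 1971-2010 observations are not reproduced here.

```python
# Minimal sketch of the Mann-Kendall test (no-ties variance, with continuity
# correction) and Sen's median slope, on a synthetic fog-day series.
import numpy as np
from scipy import stats

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # variance of S, no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))            # two-sided p-value
    return s, z, p

def sens_slope(x):
    n = len(x)
    return np.median([(x[j] - x[i]) / (j - i)
                      for i in range(n - 1) for j in range(i + 1, n)])

rng = np.random.default_rng(3)
fog_days = 20 + 0.5 * np.arange(40) + rng.normal(0, 3, 40)  # upward trend + noise
s, z, p = mann_kendall(fog_days)
print(f"S = {s}, Z = {z:.2f}, p = {p:.4f}, Sen's slope = {sens_slope(fog_days):.2f}")
```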

Keywords: fog, climatology, Mann-Kendall test, trend analysis, spatial variability, temporal variability, visibility

Procedia PDF Downloads 214
7402 Characterization of Martensitic Stainless Steel Japanese Grade AISI 420A

Authors: T. Z. Butt, T. A. Tabish, K. Anjum, H. Hafeez

Abstract:

A study of the martensitic stainless steel surgical grade AISI 420A produced in Japan was carried out in this research work. The sample had already been annealed at about 898˚C. The samples were subjected to chemical analysis, hardness, tensile, and metallographic tests, performed on as-received annealed and heat-treated samples. In the annealed condition, the sample showed a hardness of 0 HRC; however, on tensile testing in the annealed condition, the sample showed maximum elongation. The heat treatment was carried out in a vacuum furnace within the temperature range 980-1035°C, and the samples were quenched using liquid nitrogen. After hardening, the samples were subjected to tempering, which was carried out in a vacuum tempering furnace at a temperature of 220˚C. The hardened samples were then subjected to hardness and tensile testing: in hardness testing, the samples showed maximum hardness values, and in tensile testing, they showed minimum elongation. The sample in the annealed state showed a coarse-plate martensite structure. Therefore, the studied steels can be used as biomaterials.

Keywords: biomaterials, martensitic steel, microstructure, tensile testing, hardening, tempering, bioinstrumentation

Procedia PDF Downloads 245
7401 Rt. Side Sleeping Position Prevents Sudden Infant Death Syndrome

Authors: Othman Salim Hussein Al-Fleesy

Abstract:

Background: Studies have shown that sudden infant death syndrome (SIDS) is associated with sleeping position, but to date no study has explained how it could be prevented. Objectives: 1- To determine which sleeping position is certainly the safe one for preventing SIDS. 2- To establish criteria for suggesting a definition of and making a diagnosis for SIDS. 3- To discuss the controversy surrounding SUND, ALTE, and nightmares as compared to SIDS. Method: This literature review was built on previous literature. Articles were obtained randomly according to their availability to the author. For the purpose of this work, an easy approach was built by modeling an overview of the SIDS topic after clarifying the misconception and misinterpretation of a number of controversial issues in regard to SIDS, such as asphyxia, sudden unexpected death among adults (Bangungut or Pokkuri), apparent life-threatening events (ALTE), and nightmares, and comparing the findings with the literature review results. By this method, we obtained a clue for the prevention of sudden infant death syndrome. Results: The review revealed that no previous study has examined the right-side sleeping position at all. The author identifies the right side as the only safe sleeping position for preventing SIDS, suggests a new definition for SIDS, and postulates a right-side position hypothesis (the Alfleesy hypothesis), a testable hypothesis open to all researchers for further study. Conclusion: Our results contradict all previous studies and recommendations. We strongly recommend the right-side position as the only sleeping position to prevent SIDS. A new definition is suggested, and a new hypothesis is postulated.

Keywords: SIDS, ALTE, nightmare, forensic sciences

Procedia PDF Downloads 422
7400 Content-Based Color Image Retrieval Based on the 2-D Histogram and Statistical Moments

Authors: El Asnaoui Khalid, Aksasse Brahim, Ouanan Mohammed

Abstract:

In this paper, we are interested in the problem of finding similar images in a large database. For this purpose, we propose a new algorithm based on a combination of 2-D histogram intersection in the HSV space and statistical moments. The proposed histogram is based on a 3x3 window and not only on the intensity of a single pixel. This approach overcomes the drawback of the conventional 1-D histogram, which ignores the spatial distribution of pixels in the image, while the statistical moments are used to escape the effects of the discretisation of the color space that is intrinsic to the use of histograms. We compare the performance of our new algorithm to various methods of the state of the art and show that it has several advantages: it is fast, consumes little memory, and requires no learning. To validate our results, we apply this algorithm to search for similar images in different image databases.
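
Here is a minimal sketch of the similarity measure described, simplified to per-pixel values rather than the paper's 3x3-window histogram; random images stand in for a real database, and the weights and bin counts are assumptions.

```python
# Minimal sketch: 2-D HSV histogram intersection combined with statistical
# moments as an image-similarity score. All parameters are assumptions.
import cv2
import numpy as np

def describe(img_bgr, bins=(8, 8)):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # 2-D histogram over hue and saturation, normalized to sum to 1.
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    hist /= hist.sum()
    px = hsv.reshape(-1, 3).astype(float)
    moments = np.concatenate([px.mean(axis=0), px.std(axis=0)])  # channel moments
    return hist.flatten(), moments

def similarity(desc_a, desc_b, w=0.5):
    (h1, m1), (h2, m2) = desc_a, desc_b
    intersection = np.minimum(h1, h2).sum()      # 1.0 for identical histograms
    moment_dist = np.linalg.norm(m1 - m2)
    return w * intersection - (1 - w) * moment_dist

rng = np.random.default_rng(0)
imgs = [(rng.random((64, 64, 3)) * 255).astype(np.uint8) for _ in range(2)]
print("similarity:", similarity(describe(imgs[0]), describe(imgs[1])))
```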

Keywords: 2-D histogram, statistical moments, indexing, similarity distance, histograms intersection

Procedia PDF Downloads 424
7399 Navigating Cyber Attacks with Quantum Computing: Leveraging Vulnerabilities and Forensics for Advanced Penetration Testing in Cybersecurity

Authors: Sayor Ajfar Aaron, Md. Mushfiqur Rahman, Sajjat Hossain Abir, Ashif Newaz

Abstract:

This paper examines the transformative potential of quantum computing in the field of cybersecurity, with a focus on advanced penetration testing and forensics. It explores how quantum technologies can be leveraged to identify and exploit vulnerabilities more efficiently than traditional methods and how they can enhance the forensic analysis of cyber-attacks. Through theoretical analysis and practical simulations, this study highlights the enhanced capabilities of quantum algorithms in detecting and responding to sophisticated cyber threats, providing a pathway for developing more resilient cybersecurity infrastructures.

Keywords: cybersecurity, cyber forensics, penetration testing, quantum computing

Procedia PDF Downloads 7
7398 Correlates of Peer Influence and Resistance to HIV/AIDS Counselling and Testing among Students in Tertiary Institutions in Kano State, Nigeria

Authors: A. S. Haruna, M. U. Tambawal, A. A. Salawu

Abstract:

The psychological impact of peer influence on individual group members can make them resist HIV/AIDS counselling and testing. This study investigated the correlates of peer influence and resistance to HIV/AIDS counselling and testing among students in tertiary institutions in Kano State, Nigeria. To achieve this, three null hypotheses were postulated and tested. A cross-sectional survey design was employed, in which a sample of 1,512 was selected from a student population of 104,841 using simple random sampling. A self-developed 20-item scale called the Peer Influence and Psychological Resistance Inventory (PIPRI) was used for data collection. Pearson Product Moment Correlation (PPMCC) via the test-retest method was applied to estimate a reliability coefficient of 0.86 for the scale. The data obtained were analyzed using the t-test and PPMCC at the 0.05 level of confidence. Results reveal that 26.3% (397) of the respondents were influenced by their peer group, while 39.8% showed resistance. Also, the t-test and PPMCC statistics were greater than their respective critical values, showing a significant gender difference in peer influence, a difference between peer influence and resistance to HIV/AIDS counselling and testing, and a positive relationship between the two. A major recommendation is the use of reinforcement and social support for positive attitudes and the maintenance of safe behaviour among students who patronize HIV/AIDS counselling.

Keywords: peer group influence, HIV/AIDS counselling and testing, psychological resistance, students

Procedia PDF Downloads 365
7397 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk

Authors: Ariful Islam, Showkat Ahmad Lone

Abstract:

Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and can frequently be preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests for competing risks, based on an adaptive type-I progressive hybrid censoring criterion. The lifetime data of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum-likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by using a simulation algorithm.

Keywords: adaptive progressive hybrid censoring, competing risk, Mukherjee-Islam distribution, partially accelerated life testing, simulation study

Procedia PDF Downloads 318
7396 Scientific Development as Diffusion on a Social Network: An Empirical Case Study

Authors: Anna Keuchenius

Abstract:

Broadly speaking, scientific development is studied either in a qualitative manner, with a focus on the behavior and interpretations of academics, as in the sociology of science and science studies, or in a quantitative manner, with a focus on the analysis of publications, as in scientometrics and bibliometrics. The two come with different sets of methodologies and few cross-references. This paper contributes to bridging this divide by approaching the process of scientific progress from a qualitative sociological angle on the one hand and using quantitative and computational techniques on the other. As a case study, we analyze the diffusion of Granovetter's hypothesis from his 1973 paper 'The Strength of Weak Ties.' A network is constructed of all scientists who have referenced this particular paper, with directed edges to all other researchers who are concurrently referenced alongside Granovetter's 1973 paper. Studying the structure and growth of this network over time, we find that Granovetter's hypothesis is used by distinct communities of scientists, each with its own key narrative into which the hypothesis fits. The diffusion within the communities shares similarities with the diffusion of an innovation, in which innovators, early adopters, and an early and late majority can clearly be distinguished. Furthermore, the network structure shows that each community is clustered around one or a few hub scientists who are disproportionately often referenced and seem largely responsible for carrying the hypothesis into their scientific subfield. The larger implication of this case study is that the diffusion of scientific hypotheses and ideas is not the spreading of well-defined objects over a network. Rather, diffusion is a process in which the object itself dynamically changes in concurrence with its spread. Therefore, it is argued that the methodology presented in this paper has potential beyond the scientific domain, in the study of the diffusion of other not-well-defined objects, such as opinions, behavior, and ideas.
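
Here is a minimal networkx sketch of the network construction described; the citation records are synthetic stand-ins for the bibliometric data, and the community/hub analysis is one plausible reading of the method.

```python
# Minimal sketch: nodes are citing scientists, with directed edges to authors
# referenced alongside Granovetter (1973). Records are synthetic stand-ins.
import networkx as nx

# Hypothetical records: (citing scientist, authors co-cited with Granovetter 1973).
records = [
    ("Author A", ["Burt", "Coleman"]),
    ("Author B", ["Burt", "Watts"]),
    ("Author C", ["Coleman"]),
]

G = nx.DiGraph()
for citer, cocited in records:
    for target in cocited:
        G.add_edge(citer, target)

# Hub scientists: disproportionately often co-referenced nodes (high in-degree).
hubs = sorted(G.in_degree, key=lambda kv: kv[1], reverse=True)
print("co-citation hubs:", hubs[:2])
# Communities clustered around hubs, found on the undirected projection:
communities = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())
print(list(communities))
```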

Keywords: diffusion of innovations, network analysis, scientific development, sociology of science

Procedia PDF Downloads 288
7395 The Development and Testing of Greenhouse Comprehensive Environment Control System

Authors: Mohammed Alrefaie, Yaser Miaji

Abstract:

Greenhouses provide a convenient means to grow plants in the best environment. They achieve this by trapping heat from sunlight and using artificial means to enhance the environment of the greenhouse. This includes controlling factors such as air flow, light intensity, and the amount of water, among others that can have a big impact on plant growth. The aim of the greenhouse is to obtain the maximum possible yield from plants. This report details the development and testing of a greenhouse environment control system that can regulate light intensity, airflow, and power supply inside the greenhouse. The details of the module developed to control these three factors, along with the results of testing, are presented.

Keywords: greenhouse, control system, light intensity, comprehensive environment

Procedia PDF Downloads 456
7394 The Customer Attitude and Behavior of Boutique Hotels in Eastern Part of Thailand

Authors: Anocha Rojanapanich

Abstract:

This research aimed to identify the important factors that affect customer satisfaction in boutique hotels and the important factors affecting customer loyalty in returning to boutique hotels. Furthermore, this study also aimed to study the demographics that affect these variable factors. Four hundred questionnaires were completed by customers of the boutique hotels. The descriptive statistics used in this paper were percentages, means, and standard deviations (S.D.), while hypothesis testing was done using the t-test, ANOVA, correlation, and regression to analyze the relationships among the factors. In terms of the purpose of staying, it was found that the largest group of respondents stayed for 'leisure purposes', while the frequency data indicated that customers who had stayed 'once' in the hotels in the last two years had less concern for the hotel's image than other groups. Regarding customers' perceived value, income level had an influence on perceived value for both functional value (price) and emotional value.

Keywords: boutique hotels, customer attitude, customer satisfaction, customer loyalty

Procedia PDF Downloads 279
7393 Nonstationarity Modeling of Economic and Financial Time Series

Authors: C. Slim

Abstract:

Traditional techniques for analyzing time series are based on the notion of stationarity of the phenomena under study, but in reality most economic and financial series do not satisfy this hypothesis, which implies the implementation of specific tools for the detection of such behavior. In this paper, we study tests for nonstationary, non-seasonal time series in a non-exhaustive manner. We formalize the problem of nonstationary processes with numerical simulations and take stock of their statistical characteristics. The theoretical aspects of some of the most common unit root tests are discussed; we detail the specification of the tests, showing the advantages and disadvantages of each. The empirical study focuses on the application of these tests to the exchange rate (USD/TND) and the Consumer Price Index (CPI) in Tunisia, in order to compare the power of these tests in relation to the characteristics of the series.
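
For illustration, here is a minimal sketch of one of the unit root tests discussed, the Augmented Dickey-Fuller (ADF) test, applied to a simulated random walk standing in for an exchange-rate or CPI series.

```python
# Minimal sketch of an ADF unit root test on a simulated nonstationary series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
random_walk = np.cumsum(rng.normal(size=500))   # unit-root (nonstationary) process

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(random_walk)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
if pvalue > 0.05:
    print("Fail to reject the unit root: series is treated as nonstationary")
```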

Keywords: stationarity, unit root tests, economic time series, ADF tests

Procedia PDF Downloads 396
7392 An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection

Authors: Weihao Wang, Zhulin Zong

Abstract:

Two-dimensional ordered statistic constant false alarm rate detection is a widely used method for detecting weak target signals in radar signal processing applications. The method is based on analyzing the statistical characteristics of the noise and clutter present in the radar signal and then using this information to set an appropriate detection threshold. In this approach, the reference cells around the cell under test are divided into several reference subunits, which are used to estimate the noise level and adjust the detection threshold with the aim of minimizing the false alarm rate. The detection process involves a number of steps: filtering the input radar signal to remove noise or clutter, estimating the noise level based on the statistical characteristics of the reference subunits, and finally setting the detection threshold based on the estimated noise level. The main advantage of the method is its ability to detect weak target signals in the presence of strong clutter and noise: by carefully analyzing the statistical properties of the signal and using an ordered statistic to estimate the noise level and adjust the detection threshold, it effectively suppresses the influence of clutter and noise and maintains a low false alarm rate.
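
Here is a minimal 2-D OS-CFAR sketch; the guard and reference sizes, ordered-statistic rank, and scale factor are assumptions, and synthetic exponential clutter stands in for a real range-Doppler map.

```python
# Minimal sketch of ordered-statistic CFAR on a 2-D power map: reference cells
# around each cell under test are sorted, the k-th value estimates the noise
# level, and a scaled threshold flags detections.
import numpy as np

def os_cfar_2d(power_map, guard=1, ref=3, k_frac=0.75, scale=6.0):
    rows, cols = power_map.shape
    detections = np.zeros_like(power_map, dtype=bool)
    w = guard + ref
    for r in range(w, rows - w):
        for c in range(w, cols - w):
            window = power_map[r - w:r + w + 1, c - w:c + w + 1].copy()
            # Exclude guard cells and the cell under test from the reference set.
            window[ref:-ref, ref:-ref] = np.nan
            ordered = np.sort(window[~np.isnan(window)])
            noise = ordered[int(k_frac * len(ordered))]   # ordered statistic
            detections[r, c] = power_map[r, c] > scale * noise
    return detections

rng = np.random.default_rng(9)
clutter = rng.exponential(1.0, (64, 64))   # synthetic noise/clutter power
clutter[32, 32] += 40.0                    # weak target injected
print("target detected:", os_cfar_2d(clutter)[32, 32])
```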

Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals

Procedia PDF Downloads 49
7391 Studying the Relationship Between Washback Effects of IELTS Test on Iranian Language Teachers, Teaching Strategies and Candidates

Authors: Afsaneh Jasmine Majidi

Abstract:

Language testing is an important part of the language teaching experience and the language learning process, as it presents assessment strategies for teachers to evaluate the efficiency of teaching and for learners to examine their outcomes. However, language testing is demanding and challenging because it should provide the opportunity for proper and objective decisions. In addition to all the effort test designers put into designing valid and reliable tests, there are other determining factors that are even more complex and complicated. These factors affect the educational system, individuals, and society, and the impact of a test varies according to its scope: the impact of a simple classroom assessment is not the same as that of a high-stakes test such as the International English Language Testing System (IELTS). As the importance of the test increases, it affects a wider domain. Accordingly, the impacts of high-stakes tests are reflected not only in teaching and learning strategies but also in society. Testing experts use the terms 'washback' and 'impact' to define the different effects of a test on teaching, learning, and the community. This paper first looks at the theoretical background of 'washback' and 'impact' in language testing by reviewing the relevant literature in the field and then investigates the washback effects of the IELTS test on Iranian IELTS teachers and students. The study found a significant relationship between the washback effect of the IELTS test and the teaching strategies of Iranian IELTS teachers, as well as the performance of Iranian IELTS candidates and their community.

Keywords: high-stakes tests, IELTS, Iranian candidates, language testing, test impact, washback

Procedia PDF Downloads 295
7390 Intentional Learning vs Incidental Learning

Authors: Shahbaz Ahmed

Abstract:

This study was conducted to demonstrate the difference between intentional learning and incidental learning. The hypothesis of this experiment is that intentional learning is better than incidental learning. Participants were asked to learn 10 nonsense syllables in a specific sequence from colored cards; at the end, they were asked to recall the background color of each card instead of the nonsense syllables. The independent variable of the experiment is the colored cards containing the nonsense syllables to be memorized by the participants; the dependent variable is the number of correct responses made by each participant. The findings of the experiment concluded that intentional learning is better than incidental learning; hence, the hypothesis is supported.

Keywords: intentional learning, incidental learning, nonsense syllable cards, score sheets

Procedia PDF Downloads 489
7389 Electricity Market Categorization for Smart Grid Market Testing

Authors: Rebeca Ramirez Acosta, Sebastian Lenhoff

Abstract:

Decision makers worldwide need to determine whether the implementation of a new market mechanism will contribute to the sustainability and resilience of the power system. Due to smart grid technologies, new products in the distribution and transmission system can be traded; however, the impact of changing a market rule will differ between regions. To test those impacts systematically, a market categorization has been compiled and organized into a smart grid market testing toolbox. This toolbox maps all current energy products and sets the basis for running a co-simulation test with the new rule to be implemented. It will help to measure the impact of the new rule, based on sustainability and resilience indicators.

Keywords: co-simulation, electricity market, smart grid market, market testing

Procedia PDF Downloads 154
7388 Structural Evaluation of Airfield Pavement Using Finite Element Analysis Based Methodology

Authors: Richard Ji

Abstract:

Nondestructive deflection testing has been widely accepted as a cost-effective tool for evaluating the structural condition of airfield pavements. Backcalculation of pavement layer moduli can be used to characterize the existing condition of the pavement in order to compute its load-bearing capacity. This paper presents an improved best-fit backcalculation methodology based on deflection predictions obtained using the finite element method (FEM). The best-fit approach is based on minimizing the squared error between falling weight deflectometer (FWD) measured deflections and FEM-predicted deflections. Concrete elastic modulus and the modulus of subgrade reaction were then backcalculated using Heavy Weight Deflectometer (HWD) deflections collected at the National Airport Pavement Testing Facility (NAPTF) test site. Compared to currently available methods, it is an alternative and more versatile approach in its treatment of concrete slab geometry and HWD testing locations.
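
Here is a minimal sketch of the best-fit idea, with a toy forward model standing in for the paper's FEM deflection predictor; the sensor offsets, parameter values, and model form are all assumptions.

```python
# Minimal sketch: adjust layer parameters until predicted deflections match
# measured HWD deflections in a least-squares sense. The forward model is a
# toy stand-in for the FEM predictor; the "measured" data is synthetic.
import numpy as np
from scipy.optimize import least_squares

offsets = np.array([0.0, 0.3, 0.6, 0.9, 1.2])   # deflection sensor offsets (m)

def predict(params, r):
    """Toy forward model: deflection from slab modulus E and subgrade reaction k."""
    E, k = params
    return 1e9 / (E * (1.0 + r)) + 1e5 / k

rng = np.random.default_rng(11)
true = (2.0e6, 500.0)
measured = predict(true, offsets) + rng.normal(0, 2, offsets.size)  # synthetic HWD data

fit = least_squares(lambda p: predict(p, offsets) - measured,
                    x0=[1.0e6, 1000.0], bounds=([1e4, 10.0], [1e8, 1e5]))
print("backcalculated (E, k):", fit.x, "vs true:", true)
```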

Keywords: nondestructive testing, pavement moduli backcalculation, finite element method, concrete pavements

Procedia PDF Downloads 137
7387 Predicting Automotive Interior Noise Including Wind Noise by Statistical Energy Analysis

Authors: Yoshio Kurosawa

Abstract:

The application of soundproofing materials to reduce high-frequency automobile interior noise has been researched. This paper presents a sound pressure prediction technique, including wind noise, based on Hybrid Statistical Energy Analysis (HSEA), with the aim of reducing the weight of acoustic insulation. HSEA uses both analytical SEA and experimental SEA. As a result of chassis dynamometer tests and road tests, the validity of the SEA modeling was shown, and the utility of the method was confirmed.
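
For illustration, here is a minimal sketch of the SEA power balance that underlies such predictions, solved for the energies of two coupled subsystems; all loss factors and the input power are assumptions.

```python
# Minimal sketch of the SEA power balance omega * L * E = P_in, where L holds
# damping and coupling loss factors and E is the energy per subsystem.
import numpy as np

omega = 2 * np.pi * 1000.0        # band centre frequency (rad/s), assumed
eta = np.array([0.01, 0.02])      # damping loss factors of subsystems 1, 2
eta12, eta21 = 0.003, 0.001       # coupling loss factors between them
P_in = np.array([1.0, 0.0])       # input power (W), e.g. wind-noise excitation on 1

L = np.array([[eta[0] + eta12, -eta21],
              [-eta12, eta[1] + eta21]])
E = np.linalg.solve(omega * L, P_in)   # subsystem vibro-acoustic energies
print("energies (J):", E)              # energies map to interior sound pressure
```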

Keywords: vibration, noise, road noise, statistical energy analysis

Procedia PDF Downloads 310
7386 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms were chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of the models in predicting new COVID cases was evaluated using metrics such as the r-squared value and mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
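
Here is a minimal sketch of the evaluation described: an 8:2 train/test split with a Random Forest regressor scored by r-squared and mean squared error. A synthetic series stands in for the WHO daily new-case counts, and the 30-day lag features mirror the abstract's n=30.

```python
# Minimal sketch of the 8:2 split and Random Forest evaluation on a synthetic
# daily new-case series, scored with r-squared and mean squared error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(13)
days = np.arange(425)   # roughly 31 Jan 2020 to 31 Mar 2021
new_cases = 5000 + 4000 * np.sin(days / 30) + rng.normal(0, 300, days.size)

# Lag features: predict today's new cases from the previous n = 30 days.
n = 30
X = np.array([new_cases[i - n:i] for i in range(n, days.size)])
y = new_cases[n:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"r2 = {r2_score(y_te, pred):.4f}, MSE = {mean_squared_error(y_te, pred):.1f}")
```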

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 87