Search results for: market prediction
3951 Interpretable Deep Learning Models for Medical Condition Identification
Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji
Abstract:
Accurate prediction of a medical condition with straight clinical evidence is a long-sought topic in the medical management and health insurance field. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about the model's accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model to achieve good prediction and clear interpretability that can be easily understood by medical professionals. This deep learning model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member’s encounter (date of service) sequence. The model attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, (3) attention on the medical codes within an encounter and type. This model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years’ medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members’ medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3 and calculates the contribution from individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Here are examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests and medications. The model predicts member A has a high risk of CKD3 with the following highly contributing clinical events: multiple high ‘Creatinine in Serum or Plasma’ tests and multiple low kidney-function ‘Glomerular filtration rate’ tests. Among the abnormal lab tests, more recent results contributed more to the prediction.
The model also indicates regular office visits, no abnormal findings of medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past three years and was predicted to have a low risk of CKD3, because the model didn’t identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including ‘Glomerular filtration rate’, were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance for error and eliminates post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model using a 3-level attention structure, sourcing both EMR and claim data, including all 4 types of medical data, on the entire Medicare population of a large insurance company, and more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients’ demographics and information from free-text physician notes.
Keywords: deep learning, interpretability, attention, big data, medical conditions
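The three-level attention structure described in this abstract can be illustrated, in heavily simplified form, as the same softmax attention pooling applied at each level of the hierarchy: codes are pooled into an encounter vector, encounters into a code-type vector, and code types into a member vector. The dimensions, random inputs, and query vectors below are toy stand-ins, not the authors' trained model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(vectors, query):
    """Softmax attention pooling: weight each vector by its similarity
    to a query vector, then return the weighted sum and the weights."""
    scores = np.array([v @ query for v in vectors])
    weights = softmax(scores)
    pooled = sum(w * v for w, v in zip(weights, vectors))
    return pooled, weights

rng = np.random.default_rng(0)
d = 8
# Toy member history: 2 code types, each with 3 encounters of 4 codes.
history = rng.normal(size=(2, 3, 4, d))
# One (here random) query per attention level; these would be learned.
q_code, q_enc, q_type = rng.normal(size=(3, d))

# Level 3 -> level 2 -> level 1: codes to encounters,
# encounters to code types, code types to the member vector.
type_vecs = []
for code_type in history:
    enc_vecs = [attend(enc, q_code)[0] for enc in code_type]
    type_vecs.append(attend(enc_vecs, q_enc)[0])
member_vec, type_weights = attend(type_vecs, q_type)

print(member_vec.shape, float(type_weights.sum()))
```

The attention weights at each level are what make such a model interpretable: the weight assigned to, say, a lab-test code is its contribution to the pooled representation, which is what the abstract's per-event contribution scores correspond to.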
Procedia PDF Downloads 91
3950 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues. This approach aims to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities. This model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities. It also enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities.
RFM analysis, K-means clustering, and LSTM networks are applied to these datasets to analyze and predict electrical failures and voltage reliability. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities. It also investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities. It significantly improves prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
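The RFM-plus-clustering step of this pipeline can be sketched as follows: component failure logs are recast into recency, frequency, and monetary (repair-cost) features, standardized, and clustered. A minimal fixed-initialization k-means stands in here for the full pipeline (which also feeds an LSTM), and every number is made up:

```python
import numpy as np

# Hypothetical failure log, one row per electrical component:
# columns are days since last failure (recency), failure count
# (frequency), and repair cost (monetary impact).
log = np.array([
    [3.0,   12.0, 900.0],   # recent, frequent, costly
    [5.0,   10.0, 850.0],
    [200.0,  1.0,  40.0],   # long ago, rare, cheap
    [180.0,  2.0,  60.0],
])

# Standardize each RFM column so no single scale dominates distances.
z = (log - log.mean(axis=0)) / log.std(axis=0)

def kmeans(x, k=2, iters=20):
    centroids = x[:k].copy()          # deterministic init: first k points
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        d = ((x[:, None] - centroids[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)     # assign each point to nearest centroid
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(axis=0)
    return labels

labels = kmeans(z)
print(labels)   # two segments: high-risk vs. low-risk components
```

On this toy input the two frequently failing, costly components end up in one cluster and the two quiet ones in the other, which is the kind of segmentation the abstract describes feeding into downstream prediction.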
Procedia PDF Downloads 56
3949 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender or socioeconomic specific reasons, detecting cardiac irregularities at an early stage followed by quick and correct treatment is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time domain, frequency domain and non-linear parameters. This paper presents HRV analysis of an online dataset for normal sinus rhythm (taken as the healthy subject) and sudden cardiac death (SCD subject) using all three methods, computing values for parameters like the standard deviation of normal-to-normal intervals (SDNN), the root mean square of successive differences between adjacent RR intervals (RMSSD) and the mean of R to R intervals (mean RR) in the time domain; very low-frequency (VLF), low-frequency (LF), high-frequency (HF) and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of a healthy subject from that of a subject who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is thereby verified, with an accuracy of 95%, that the proposed algorithm can identify the mortality risk of a patient one hour before death. The identification of a patient’s mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death
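The time-domain HRV parameters named in this abstract follow directly from their definitions over a sequence of RR intervals. A small sketch with invented RR values (not data from the study):

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain HRV features from a list of RR intervals in milliseconds:
    mean RR, SDNN (sample standard deviation of the intervals), and RMSSD
    (root mean square of successive differences)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rr, sdnn, rmssd

# Healthy-looking variability vs. a suspiciously flat rhythm (made-up values).
healthy = [812, 790, 845, 801, 830, 795, 820]
flat    = [800, 801, 799, 800, 800, 801, 799]

for name, rr in [("healthy", healthy), ("flat", flat)]:
    mean_rr, sdnn, rmssd = hrv_time_domain(rr)
    print(f"{name}: mean RR={mean_rr:.1f} ms, SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms")
```

On inputs like these, the flat rhythm yields much smaller SDNN and RMSSD, which is the direction of the reduction the study reports for SCD subjects relative to healthy ones.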
Procedia PDF Downloads 342
3948 A Breakthrough Improvement Brought by Taxi-Calling APPs for Taxi Operation Level
Authors: Yuan-Lin Liu, Ye Li, Tian Xia
Abstract:
Taxi-calling APPs have been used widely, bringing both benefits and a variety of issues to the taxi market. Many countries do not know whether the benefits outweigh the issues or not. This paper established a comparison between a basic scenario (2009-2012) and a taxi-calling software usage scenario (2012-2015) to explain the impact of taxi-calling APPs. The impacts of taxi-calling APPs illustrated by the comparison results are: 1) The supply and demand distribution is more balanced, extending from the city center to the suburbs. The availability of taxi service has been improved in low-density areas, and the thin-market attribute has also been improved; 2) The ratio of short-distance taxi trips decreased, long-distance service increased, the utilization of mileage increased, and the empty-running rate decreased; 3) The popularity of taxi-calling APPs was able to reduce the average empty distance, cruise time, empty mileage rate and average times of loading passengers, and can also enhance the average operating speed, improve the taxi operating level, and reduce social cost, although there are some disadvantages. This paper argues that the taxi industry and government can establish an integrated third-party credit information platform, based on credit evaluated from data on drivers’ driving behaviors, to supervise the drivers. Taxi-calling APPs under fully covered supervision in the mobile Internet environment will become a new trend.
Keywords: taxi, taxi-calling APPs, credit, scenario comparison
Procedia PDF Downloads 254
3947 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement
Authors: Nadezhda Kvatashidze
Abstract:
The International Accounting Standards Board updated the conceptual framework for financial reporting. The main reason behind it is to resolve accounting tasks that are caused by market development and business transactions of a new economic content. Also, investors call for higher transparency of information and responsibility for results in order to make a more accurate risk assessment and forecast. All these make it necessary to further develop the conceptual framework for financial reporting so that users get useful information. The market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and finding new solutions. Some issues and concepts, such as disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and perfected. The criteria for recognition of certain elements (assets and liabilities) of reporting had to be updated too, and all this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying preparation of the financial statement. The main objective of the conceptual framework revision is to improve financial reporting and develop a clear package of concepts. This will support the International Accounting Standards Board (IASB) in setting a common “Approach & Reflection” for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for those transactions or events that arise from particular deals to which no standard applies or where a standard allows a choice of accounting policy.
Keywords: conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship
Procedia PDF Downloads 126
3946 Wind Power Assessment for Turkey and Evaluation by APLUS Code
Authors: Ibrahim H. Kilic, A. B. Tugrul
Abstract:
Energy is a fundamental component in economic development, and energy consumption is an index of prosperity and the standard of living. The consumption of energy per capita has increased significantly over the last decades, as the standard of living has improved. Turkey’s geographical location has several advantages for extensive use of wind power. Among the renewable sources, Turkey has very high wind energy potential. Information such as the installation capacity of wind power plants in the installation, under-construction and license stages in the country is reported in detail. Some suggestions are presented in order to increase the wind power installation capacity of Turkey. Turkey’s economic and social development has led to a massive increase in demand for electricity over the last decades. Since Turkey has no major oil or gas reserves, it is highly dependent on energy imports and is exposed to energy insecurity in the future. But Turkey does have huge potential for renewable energy utilization. There has been huge growth in the construction of wind power plants and small hydropower plants in recent years. To meet the growing energy demand, the Turkish Government has adopted incentives for investments in renewable energy production. Wind energy investments were evaluated for the impact of feed-in tariffs (FIT) based on three scenarios (optimistic, realistic and pessimistic) with the APLUS software, which is developed for rational evaluation of the energy market. Results of the three scenarios are evaluated in view of the electricity market for Turkey.
Keywords: APLUS, energy policy, renewable energy, wind power, Turkey
Procedia PDF Downloads 303
3945 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. The pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and dealing with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which was found to be an appropriate tool for predicting the different pavement performance indices versus different factors as well. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
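The input encoding described above can be sketched with a toy feed-forward pass: 11 damage types at 3 severity levels give a 33-element vector of percentage areas, mapped through one hidden layer to a value squashed into the 0-100 PCI range. The weights here are random and untrained, and the layer sizes are assumptions, so only the structure of such a network, not a real prediction, is illustrated:

```python
import numpy as np

rng = np.random.default_rng(42)

# 11 distress types x 3 severity levels = 33 inputs, each the percentage
# of pavement area affected (feature layout assumed for illustration).
n_in, n_hidden = 33, 16
x = np.zeros(n_in)
x[0] = 4.5   # e.g. alligator cracking, low severity: 4.5% of area
x[2] = 1.0   # e.g. alligator cracking, high severity: 1.0% of area

# Untrained toy weights -- the real model would learn these from the
# 536,000 training samples described in the abstract.
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0

h = np.tanh(W1 @ x + b1)                       # hidden layer
pci = 100.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid scaled to 0..100

print(round(float(pci), 2))
```

Scaling a sigmoid output to the 0-100 range is one simple way to keep predictions inside the PCI's defined bounds; the abstract does not state which output activation the authors used.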
Procedia PDF Downloads 137
3944 The Role of Learning in Stimulation Policies to Increase Participation in Lifelong Development: A Government Policy Analysis
Authors: Björn de Kruijf, Arjen Edzes, Sietske Waslander
Abstract:
In an ever-quickly changing society, lifelong development is seen by politicians and policymakers as a solution to labor market problems. In this paper, we investigate how policy instruments are used to increase participation in lifelong development and on which behavioral principles policy is based. Digitization, automation, and an aging population change society and the labor market accordingly. Skills that were once most sought after in the workforce can become abundantly present. For people to remain relevant in the working population, they need to continue acquiring new skills useful in the current labor market. Many reports have been written that focus on the role of lifelong development in this changing society and how lifelong development can help people adapt and stay relevant. Inspired by these reports, governments have implemented a broad range of policies to support participation in lifelong development. The question we ask ourselves is how government policies promote participation in lifelong development. This stems from a complex interplay of policy instruments and learning. Regulatory, economic and soft instruments can be combined to promote lifelong development, and different types of education further complicate policies on lifelong development. Literature suggests that different stages in people’s lives might warrant different methods of learning. Governments could anticipate this in their policies. In order to influence people’s behavior, the government can tap into a broad range of sociological, psychological, and (behavioral) economic principles. The traditional economic assumption that behavior is rational is known to be only partially true, and the government can use many biases in human behavior to stimulate participation in lifelong development. In this paper, we also try to find which of these biases, if any, the government taps into to promote participation.
The goal of this paper is to analyze government policies intended to promote participation in lifelong development. To do this, we develop a framework to analyze the policies on lifelong development. We specifically incorporate the role of learning and the behavioral principles underlying policy instruments in the framework. We apply this framework to the case of the Netherlands, where we examine a set of policy documents. We single out the policies the government has put in place and how they are vertically and horizontally related. Afterward, we apply the framework and classify the individual policies by policy instrument and by type of learning. We find that the Dutch government focuses on formal and non-formal learning in its policy instruments. However, the literature suggests that learning at a later age is mainly done in an informal manner, through experiences.
Keywords: learning, lifelong development, policy analysis, policy instruments
Procedia PDF Downloads 82
3943 Changes in Consumption Pattern of Western Consumers and Its Effect to the Ottoman Oriental Carpet-Making Industry
Authors: Emine Zeytinli
Abstract:
Ottoman carpets were depicted in Renaissance paintings while they were exported commercially. The carpets were highly demanded and used by the middle and upper classes of Western European countries. The motifs, designs, patterns, and ornamentation of these carpets were decorative objects of luxury for Western European residences as well as paintings. Oriental carpets found their way into the European market from medieval times to the present century. They were first considered luxury items; within the nineteenth century, however, they came to be demanded by the middle classes in Europe and North America. This century brought unprecedented changes in production and consumption in the world. Expanding industries created rapid urbanization, changed city life, and new types of goods dominated the entire century. Increases in income allowed Europeans to spend on luxury items; consumers' tastes changed in a number of ways, including furniture and decoration. In the Oriental lifestyle, a carpet was often considered an art object with Western aesthetic sensibility. A carpet with an oriental character, an essential part of home decoration, was highly appreciated as a floor or table covering and as a wall hanging. Turkish carpets with a distinctive classical style, patterns, and colours were changed for the tastes of European consumers. This paper attempts to analyse how the tastes and preferences of European and American consumers increased their buying of oriental objects, namely carpets. The production of the local hand-woven carpet industry developed, carpet factories were set up, and special weaving schools were opened in some major weaving centres; carpet weaving became one of the main manufacturing and export commodities of the empire. All of these attempts increased the reputation and market share in the international market. The industry flourished; commercially operated carpet looms, sales revenues and exports increased unprecedentedly.
British and Ottoman archival documents, parliamentary papers and travel notes were used to analyse the above-mentioned effects: how foreign demand changed the designs of carpets and the business itself, and how production moved from households to commercial premises and the industry flourished.
Keywords: consumption patterns, carpet weaving, ottoman oriental carpets, commercialisation
Procedia PDF Downloads 138
3942 Validation of Nutritional Assessment Scores in Prediction of Mortality and Duration of Admission in Elderly, Hospitalized Patients: A Cross-Sectional Study
Authors: Christos Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Konstantina Panouria, Tamta Sirbilatze, Ifigenia Apostolou, Vaggelis Lambas, Christina Kordali, Georgios Mavras
Abstract:
Objectives: Malnutrition in hospitalized patients is related to increased morbidity and mortality. The purpose of our study was to compare various nutritional scores in order to detect the most suitable one for assessing the nutritional status of elderly, hospitalized patients and correlate them with mortality and extension of admission duration, due to patients’ critical condition. Methods: The sample population included 150 patients (78 men, 72 women, mean age 80±8.2). Nutritional status was assessed by the Mini Nutritional Assessment (MNA full, short-form), Malnutrition Universal Screening Tool (MUST) and short Nutritional Appetite Questionnaire (sNAQ). Sensitivity, specificity, positive and negative predictive values and ROC curves were assessed after adjustment for the cause of current admission, a known prognostic factor according to previously applied multivariate models. Primary endpoints were mortality (from admission until 6 months afterwards) and duration of hospitalization, compared to national guidelines for closed consolidated medical expenses. Results: Concerning mortality, MNA (short-form and full) and sNAQ had similar, low sensitivity (25.8%, 25.8% and 35.5% respectively), while MUST had higher sensitivity (48.4%). In contrast, all the questionnaires had high specificity (94%-97.5%). Short-form MNA and sNAQ had the best positive predictive values (72.7% and 78.6% respectively), whereas all the questionnaires had similar negative predictive values (83.2%-87.5%). MUST had the highest area under the ROC curve (0.83), in contrast to the remaining questionnaires (0.73-0.77). With regard to extension of admission duration, all four scores had relatively low sensitivity (48.7%-56.7%), specificity (68.4%-77.6%), positive predictive value (63.1%-69.6%), negative predictive value (61%-63%) and area under the ROC curve (0.67-0.69). Conclusion: The MUST questionnaire is more advantageous in predicting mortality due to its higher sensitivity and area under the ROC curve.
None of the nutritional scores is suitable for prediction of extended hospitalization.
Keywords: duration of admission, malnutrition, nutritional assessment scores, prognostic factors for mortality
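The screening metrics compared in this abstract are simple functions of confusion-matrix counts. A small sketch with hypothetical counts, invented so that the MUST row works out near the reported sensitivity of 48.4% (the study does not publish its raw counts):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard screening-test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # of the deaths, fraction flagged positive
    specificity = tn / (tn + fp)   # of the survivors, fraction cleared
    ppv = tp / (tp + fp)           # flagged patients who actually died
    npv = tn / (tn + fn)           # cleared patients who survived
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 31 deaths, of whom 15 screened positive;
# 119 survivors, of whom 4 screened positive (false positives).
sens, spec, ppv, npv = diagnostic_metrics(tp=15, fn=16, fp=4, tn=115)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```

The asymmetry the abstract reports (low sensitivity, high specificity) falls directly out of counts like these: few of the patients who died were flagged, but almost no survivors were flagged incorrectly.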
Procedia PDF Downloads 346
3941 Assessment of the Egyptian Agricultural Foreign Trade with Common Market for Eastern and Southern Africa Countries
Authors: Doaa H. I. Mahmoud, El-Said M. Elsharkawy, Saad Z. Soliman, Soher E. Mustfa
Abstract:
The opening of new promising foreign markets is one of the objectives of Egypt’s foreign trade policies, especially for agricultural exports. This study aims at the examination of the commodity structure of the Egyptian agricultural imports and exports with the COMESA countries. In addition, estimation of the surplus/deficit of the Egyptian commodities and agricultural balance with these countries is made. Time series data covering the period 2004-2016 are used. Estimation of the growth function along with the derivation of the annual growth rates of the study’s variables is made. Some of the results for the study period are as follows: (1) The average total Egyptian exports to the COMESA (Common Market for Eastern and Southern Africa) countries are estimated at 1,491 million dollars, with an annual growth rate of 14.4% (214.7 million dollars). (2) The average annual Egyptian agricultural exports to these economies are estimated at 555 million dollars, with an annual growth rate of 19.4% (107.7 million dollars). (3) The average annual value of agricultural imports from the COMESA countries is estimated at 289 million dollars, with an annual growth rate of 14.4% (41.6 million dollars). (4) The study shows that there is a continuous surplus in the agricultural balance with these economies, whilst having a deficit in the raw-materials agricultural balance, as well as the balance of input requirements with these countries.
Keywords: COMESA, Egypt, growth rates, trade balance
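Annual growth rates like those reported above are conventionally derived by fitting an exponential growth function to the series: ordinary least squares on the logged values, with the rate recovered as e^b − 1. A sketch of that calculation on a hypothetical series (the study does not publish its underlying data, so the numbers here are invented):

```python
import math

def annual_growth_rate(values):
    """Fit the exponential growth function y = A * e^(b*t) by ordinary
    least squares on ln(y); return the implied annual rate e^b - 1."""
    n = len(values)
    t = list(range(n))
    ly = [math.log(v) for v in values]
    t_bar = sum(t) / n
    ly_bar = sum(ly) / n
    b = sum((ti - yi_t) * 0 for ti, yi_t in []) or \
        sum((ti - t_bar) * (yi - ly_bar) for ti, yi in zip(t, ly)) / \
        sum((ti - t_bar) ** 2 for ti in t)
    return math.exp(b) - 1

# A hypothetical export series growing exactly 10% a year over
# 13 observations (like 2004-2016) recovers the rate exactly.
series = [100 * 1.10 ** t for t in range(13)]
print(round(annual_growth_rate(series), 4))
```

Because the fit is on logged values, the recovered rate is a constant compound rate over the whole period rather than a simple average of year-on-year changes.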
Procedia PDF Downloads 209
3940 Remittances, Unemployment and Demographic Changes between Tunisia and Europe
Authors: Hajer Habib, Ghazi Boulila
Abstract:
The objective of this paper is, on the one hand, to present our contribution to the theoretical literature through a simple theoretical model dealing with the effect of remittance transfers on the labor market of the countries of origin and, on the other hand, to test this relationship empirically in the case of Tunisia. The methodology used consists of estimating a panel of the nine main destinations of the Tunisian diaspora in Europe between 1994 and 2014 in order to better assess the net effect of these migratory financial flows on unemployment through population growth. The empirical results show that the main factors explaining the decision to emigrate are economic factors related mainly to the income differential, demographic factors related to the differential age structure of the origin and host populations, and cultural factors linked basically to mastery of the language. Indeed, the stock of migrants is one of the main determinants of the transfer of migratory funds to Tunisia. But there are other variables of no less importance, such as the economic conditions in the host countries. This shows that Tunisian migrants react more to economic conditions in European countries than in Tunisia. The economic situation of the European countries dominates the number of emigrants as an explanatory factor for the amount of transfers from Tunisian emigrants to their country of origin. Similarly, it is clear that there is an indirect effect of transfers on unemployment in Tunisia. This suggests that the demographic transition conditions the effects of remittance transfers on the level of unemployment.
Keywords: demographic changes, international migration, labor market, remittances
Procedia PDF Downloads 150
3939 Integrating Carbon Footprint into Supply Chain Management of Manufacturing Companies: Sri Lanka
Authors: Shirekha Layangani, Suneth Dharmaparakrama
Abstract:
Where the manufacturing industry is concerned, the Environmental Management System (EMS) is a common term. Currently, most organizations have obtained the environmental standard certification ISO 14001. In the Sri Lankan context, even though organizations adopt environmental management, a very limited number of companies tend to calculate their carbon footprints. This research discusses the factors that demotivate manufacturing organizations in Sri Lanka from integrating the calculation of the carbon footprint into their supply chains. Further, it also identifies the benefits that manufacturing organizations can gain by implementing calculation of the carbon footprint. The manufacturing companies listed under “ISO 14001” certification were considered in this study in order to investigate the problems mentioned above. 100% enumeration was used when the surveys were carried out. In order to gather essential data, two surveys were designed: one among manufacturing organizations that are currently engaged in calculating their carbon footprint, and one among organizations that are not. The survey among the first set of manufacturing organizations revealed the benefits the organizations were able to gain by implementing calculation of the carbon footprint. The latter set of organizations revealed the demotivating factors that have influenced them not to integrate calculation of the carbon footprint into their supply chains. This paper has summarized the results obtained by the surveys, segregated depending on the market share of the manufacturing organizations. Further, it has indicated the benefits that can be obtained by implementing carbon footprint calculation, depending on the market share of the manufacturing entity. Finally, the research gives suggestions to manufacturing organizations on the applicability of adopting carbon footprint calculation depending on the benefits that can be obtained.
Keywords: carbon footprint, environmental management systems (EMS), benefits of carbon footprint, ISO 14001
Procedia PDF Downloads 374
3938 An Investigation of the Relationship Between Privacy Crisis, Public Discourse on Privacy, and Key Performance Indicators at Facebook (2004–2021)
Authors: Prajwal Eachempati, Laurent Muzellec, Ashish Kumar Jha
Abstract:
We use Facebook as a case study to investigate the complex relationship between the firm’s public discourse (and actions) surrounding data privacy and the performance of a business model based on monetizing users’ data. We do so by looking at the evolution of public discourse over time (2004–2021) and relating topics to revenue and stock market evolution. Drawing on archival sources, such as Zuckerberg’s public statements, we use the LDA topic modelling algorithm to reveal 19 topics regrouped into 6 major themes. We first show how, by using persuasive and convincing language that promises better protection of consumer data usage but also emphasizes greater user control over their own data, the privacy issue is being reframed as one of greater user control and responsibility. Second, we aim to understand, and put a value on, the extent to which privacy disclosures have a potential impact on the financial performance of social media firms. There we found a significant relationship between the topics pertaining to privacy and social media/technology, sentiment score, and stock market prices. Revenue is found to be impacted by topics pertaining to politics and to new product and service innovations, while the number of active users is not impacted by the topics unless moderated by external control variables such as Return on Assets and Brand Equity.
Keywords: public discourses, data protection, social media, privacy, topic modeling, business models, financial performance
Procedia PDF Downloads 92
3937 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation (RFECV), using a support vector machine (SVM), for refined feature selection. Building on this, we developed six machine learning models: logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, and evaluated their performance. Findings: The Gradient Boosting Classifier performed best, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction.
We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
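The feature-selection step the methodology describes (recursive feature elimination with cross-validation around an SVM) can be sketched as follows; the synthetic data stands in for the epigenetic features, and all parameter choices are assumptions:

```python
# RFECV around a linear SVM on synthetic binary-outcome data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Synthetic data: 10 informative features among 25
X, y = make_classification(n_samples=200, n_features=25, n_informative=10,
                           random_state=0)

# A linear kernel is required so RFE can rank features by coefficient magnitude
selector = RFECV(SVC(kernel="linear"), step=1, cv=5, scoring="recall")
selector.fit(X, y)

print("features kept:", selector.n_features_)
print("mask:", selector.support_)
```

The retained feature mask would then feed the six downstream classifiers the study compares.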
Procedia PDF Downloads 55
3936 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network
Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert
Abstract:
The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared with 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, both software and hardware, for integrating the distribution grid and railway power systems using power electronics technologies (the Smart Soft Open Point, sSOP) and local energy storage. In this context, the paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Reflecting the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport, and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially electrification of the transportation sector and grid integration of volatile renewables. The need for storage systems has led to performance improvements and a significant price decline in ESS technologies, opening a new market in which ESSs can be a reliable and economical solution. One such emerging market for ESSs is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instants, when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages coming from the interconnection of railway power systems and distribution grids by offering an additional energy buffer.
Consequently, the surplus or deficit of energy in, for example, a railway power system need not be immediately transferred to or from the distribution grid; it can be stored and used when it is really needed. This assures better management of the energy exchange between railway power systems and distribution grids and leads to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, and their analysis, indicate some of the trends, challenges, opportunities and structural changes needed to design policy measures that provide an appropriate framework for investors. This study will be used as a reference for the discussion in the workshops envisaged with stakeholders (DSOs and transport managers) in the E-Lobster project.
Keywords: light railway, electrical distribution network, electrical energy storage, policy
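The buffering logic described above can be illustrated with a deliberately simple toy model; this is not E-Lobster's actual control algorithm, and the capacity and energy profile are invented:

```python
# Toy model of an ESS buffering between a railway system and the grid:
# braking surplus is stored rather than dumped to the grid, and released
# later when the railway draws power.
def step(storage_kwh, net_railway_kwh, capacity_kwh):
    """One time step. Positive net_railway_kwh = surplus (e.g. metro braking),
    negative = deficit. Returns (new storage level, energy exchanged with grid)."""
    if net_railway_kwh >= 0:               # surplus: charge the ESS first
        charge = min(net_railway_kwh, capacity_kwh - storage_kwh)
        return storage_kwh + charge, net_railway_kwh - charge   # leftover to grid
    else:                                  # deficit: discharge the ESS first
        discharge = min(-net_railway_kwh, storage_kwh)
        return storage_kwh - discharge, net_railway_kwh + discharge  # rest from grid

storage, grid_exchange = 0.0, []
for net in [5.0, 3.0, -4.0, -6.0]:         # kWh per step, made-up profile
    storage, ex = step(storage, net, capacity_kwh=6.0)
    grid_exchange.append(ex)
print(storage, grid_exchange)
```

Without the buffer, the grid would see the full [5, 3, -4, -6] profile; with it, most of the exchange is absorbed locally.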
Procedia PDF Downloads 135
3935 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management: the realization of co-movement helps investors identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 level to the 120 level. The policies, known as “Abenomics”, aim to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, as affected by the depreciation of the yen, between the stock market of Japan and 5 major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we devote ourselves to measuring the co-movement of the stock markets between Japan and each of the 5 Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Second, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the joint probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by dynamic programming. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data for 6 stock indexes from the Taiwan Economic Journal.
The data are sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns exhibit fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of “Abenomics” on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan; China and Taiwan co-move more strongly than Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.
Keywords: co-movement, depreciation of Yen, rank correlation, stock market
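The full pipeline (skewed-t GARCH fits, the SJC copula, and dynamic-programming alignment) is involved, but the final step, computing the two rank correlation coefficients, can be sketched with SciPy on synthetic stand-in return series:

```python
# Kendall's tau and Spearman's rho between two correlated synthetic series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
japan = rng.normal(0, 1, 500)                       # stand-in for aligned Nikkei returns
korea = 0.6 * japan + 0.8 * rng.normal(0, 1, 500)   # correlated stand-in for KOSPI

tau, p_tau = stats.kendalltau(japan, korea)
rho, p_rho = stats.spearmanr(japan, korea)
print(f"Kendall tau = {tau:.3f} (p={p_tau:.2g}), Spearman rho = {rho:.3f} (p={p_rho:.2g})")
```

In the paper these coefficients are computed on the copula-aligned sequences rather than raw returns.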
Procedia PDF Downloads 231
3934 The Prediction of Evolutionary Process of Coloured Vision in Mammals: A System Biology Approach
Authors: Shivani Sharma, Prashant Saxena, Inamul Hasan Madar
Abstract:
Since the time of Darwin, genetic change has been considered the direct indicator of variation in phenotype. However, several systems biology studies in recent years have proposed that epigenetic developmental processes also affect the phenotype, shifting the focus from a linear genotype-phenotype map to a non-linear G-P map. In this paper, we attempt to explain the evolution of colour vision in mammals by taking the LWS (long-wave sensitive) gene under consideration.
Keywords: evolution, phenotypes, epigenetics, LWS gene, G-P map
Procedia PDF Downloads 521
3933 Extent of Fruit and Vegetable Waste at Wholesaler Stage of the Food Supply Chain in Western Australia
Authors: P. Ghosh, S. B. Sharma
Abstract:
The growing problem of food waste is causing unacceptable economic, environmental and social impacts across the globe. In Australia, food waste is estimated at about AU$8 billion per year; however, information on the extent of wastage at different stages of the food value chain, from farm to fork, is very limited. This study aims to identify the causes for, and extent of, food waste at the wholesaler stage of the food value chain in the state of Western Australia. It also explores approaches applied by wholesalers to reduce and utilize food waste. The study was carried out at the Perth city market in Canning Vale, the main wholesale distribution centre for fruits and vegetables in Western Australia. A survey questionnaire was prepared and shared with 51 wholesalers, capturing their responses to 10 targeted questions on the quantity of produce (fruits and vegetables) received and supplied onward, reasons for waste generation, and innovations applied or being considered to reduce and utilize food waste. Data were analysed using the Statistical Package for the Social Sciences (SPSS version 21). Among the wholesalers, 52% were primary wholesalers (who buy produce directly from growers) and 48% were secondary wholesalers (who buy produce in bulk from major wholesalers and supply the local retail market, caterers, and customers with specific requirements). Average fruit and vegetable waste was 180 kilograms per week per primary wholesaler and 30 kilograms per week per secondary wholesaler. Based on this survey, fruit and vegetable waste at the wholesaler stage was estimated at about 286 tonnes per year. The secondary wholesalers distributed pre-ordered commodities, which minimized the potential for waste. A non-parametric test (Mann-Whitney) was carried out to assess wholesalers' contributions to waste generation; over 56% of secondary wholesalers generally had nothing to bin as waste.
Pearson's correlation coefficient analysis showed a positive correlation (r = 0.425; P = 0.01) between the quantity of produce received and the waste generated. Low market demand was the predominant reason identified by the wholesalers for waste generation. About a third of the wholesalers suggested that high cosmetic standards for fruits and vegetables (appearance, shape, and size) should be relaxed to reduce waste. Donation of unutilized fruits and vegetables to charity was overwhelmingly (95%) considered one of the best options for utilizing discarded produce. The extent of waste at other stages of the fruit and vegetable supply chain is currently being studied.
Keywords: food waste, fruits and vegetables, supply chain, waste generation
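The two statistical tests reported above can be reproduced in outline with SciPy; the wholesaler figures below are made up for illustration, since the survey data are not public:

```python
# Pearson correlation and Mann-Whitney U test on made-up wholesaler figures.
from scipy import stats

received = [900, 1200, 500, 1500, 700, 1100, 600, 1300]   # kg/week, illustrative
waste    = [180, 260,  60,  340,  90,  220,  70,  280]

r, p = stats.pearsonr(received, waste)        # produce received vs. waste generated
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Mann-Whitney U: do primary and secondary wholesalers differ in waste?
primary   = [180, 260, 340, 220, 280]         # kg/week, illustrative
secondary = [30, 5, 10, 0, 25]
u, p_u = stats.mannwhitneyu(primary, secondary, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p_u:.4f}")
```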
Procedia PDF Downloads 312
3932 Tasting and Touring: Chinese Consumers’ Experiences with Australian Wine and Winery Tour: A Case Study of Sirromet Wines, Queensland
Authors: Ning Niu
Abstract:
The study hinges on consumer taste, the food industry (wine production) and cultural consumption (vineyard tourism) as they relate to the Chinese market, Chinese consumers, and visitors travelling to Australian vineyards. The research topic can be summed up in four questions: the economic importance of the Chinese market to Australian wine production; whether this economic importance has an impact on how Australian wine is produced or packaged; the impact of mass Chinese wine tourism on Australian vineyards; and the gendered and cultured experience of wine tourism for Chinese visitors. This study aims to apply the theories of Pierre Bourdieu to research on the food industry and cultural consumption; to investigate Chinese experiences with Australian wine products and vineyard tours; and to explore the cultural, gendered and class influences on those experiences. The academic background covers Pierre Bourdieu's concepts of habitus, taste and capital, along with long-standing concepts from China's cultural context, including mianzi (face: dignity/honor/hierarchy) and guanxi (connections/social networks), in order to develop new perspectives on the tastes of Chinese tourists coming to Australia for wine experiences. Documents cited from the Australian government and industry bodies will be interpreted, and this analysis will constitute the economic background for the current study. The study applies qualitative research drawing on fieldwork, choosing ethnographic observation, interviews, personal experiences and discursive analysis of government and tourism documents. The expected sample includes three tourism professionals, two or three local Australian wine producers, and 20 to 30 Chinese wine consumers and visitors travelling to Australian vineyards. An embodied ethnography will be used to observe the Chinese participants' feelings, thoughts, and experiences of their engagement with Australian wine and vineyards.
The researcher will interview Chinese consumers, tourism professionals, and Australian winemakers to collect primary data. Note-taking, picture-taking, and audio-recording will be adopted with informants' permission. Personal and group interviews will last for 30 and 60 minutes, respectively. The researcher's personal experiences have been analyzed to respond to some research questions and have supplied part of the primary data (e.g., photos and stories) for discovering how 'mianzi' and 'guanxi' lead the Australian wine and tourism industries to meet the demands of Chinese consumers. At the current stage, secondary data from the analysis of official and industrial documents show that the economic importance of the Chinese market is influencing the Australian wine and tourism industries, and the researcher's own experiences suggest, in some sense, that the Chinese cultural concepts of mianzi and guanxi are influencing Australian wine production and packaging along with vineyard tours. Future fieldwork will uncover more in this research realm and contribute further to knowledge.
Keywords: habitus, taste, capital, mianzi, guanxi
Procedia PDF Downloads 130
3931 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as the Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data, but they often struggle to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points when temporal patterns are intricate. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success; even so, MLPs often face high-volatility and computational-complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the use of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation
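A rough sketch of the two 2D views that Times2D builds from a 1D series can be written with NumPy and SciPy; this is an interpretation of the description above, not the authors' code:

```python
# Two 2D representations of a 1D series: a spectrogram (frequency domain)
# and a stacked-derivatives map (time domain, sharp fluctuations).
import numpy as np
from scipy.signal import spectrogram

t = np.arange(0, 20, 0.01)                           # 2000 samples at 100 Hz
series = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)

# Frequency-domain view: time-frequency energy map
freqs, times, spec = spectrogram(series, fs=100, nperseg=256)

# Time-domain view: first and second derivatives stacked into a 2D array
d1 = np.gradient(series)
d2 = np.gradient(d1)
deriv_map = np.stack([d1, d2])                       # shape (2, len(series))

print(spec.shape, deriv_map.shape)
```

Both arrays are image-like, which is what lets computer-vision-style models operate on them.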
Procedia PDF Downloads 42
3930 Alignment between Governance Structures and Food Safety Standards on the Shrimp Supply Chain in Indonesia
Authors: Maharani Yulisti, Amin Mugera, James Fogarty
Abstract:
Food safety standards have received significant attention in the global fisheries market due to health issues, free trade agreements, and increasing aquaculture production. Vertical coordination throughout the supply chains of fish-producing and exporting countries is needed to meet the food safety demands imposed by importing countries. However, the complexity of supply chain governance structures and difficulties in standard implementation can generate safety uncertainty and high transaction costs. Using a Transaction Cost Economics framework, this paper examines the alignment between food safety standards and governance structures in the shrimp supply chain in Indonesia. We find that the supply chain is organized closer to a hierarchy-like governance structure where a private standard (the organic standard) is implemented, and closer to a market-like governance structure where the public standard (IndoGAP certification) is more prevalent. To verify these statements, two cases are examined from Sidoarjo district, a centre of shrimp production in Indonesia. The results show that a public baseline FSS (food safety standard) needs an additional mechanism to achieve a coordinated chain-wide response, because uncertainty, asset specificity, and performance measurement problems are high in this chain. The organic standard, as a private chain-wide FSS, is more efficient because it is achieved by a hierarchy-like type of governance structure.
Keywords: governance structure, shrimp value chain, food safety standards, transaction cost economics
Procedia PDF Downloads 379
3929 Hidden Markov Model for Financial Limit Order Book and Its Application to Algorithmic Trading Strategy
Authors: Sriram Kashyap Prasad, Ionut Florescu
Abstract:
This study models intraday asset prices as driven by a Markov process. The work identifies the latent states of a Hidden Markov Model, using limit order book data (trades and quotes) to continuously estimate the states throughout the day, and builds a trading strategy that uses the estimated states to generate signals. The strategy utilizes the current state to recalibrate buy/sell levels and the transitions between states to trigger stop-losses when adverse price movements occur. The proposed trading strategy is tested on the Stevens High Frequency Trading (SHIFT) platform. SHIFT is a highly realistic market simulator with functionalities for creating an artificial market simulation by deploying agents, trading strategies, initial wealth distributions, etc. In the implementation, several assets on the NASDAQ exchange are used for testing. In comparison with a strategy using static buy/sell levels, this study shows that the number of limit orders that get matched and executed can be increased; executing limit orders earns rebates on NASDAQ. The system can capture jumps in limit order book prices, provide dynamic buy/sell levels, and trigger stop-loss signals to improve the PnL (profit and loss) performance of the strategy.
Keywords: algorithmic trading, Hidden Markov model, high frequency trading, limit order book learning
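The state-estimation idea can be made concrete with a minimal forward filter for a two-state Gaussian HMM; the regime parameters and toy return series below are invented for illustration, whereas the paper estimates states from limit order book data:

```python
# Forward algorithm for a 2-state Gaussian HMM over a toy return series.
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def forward_filter(obs, pi, A, mus, sigmas):
    """Return filtered state probabilities P(state_t | obs_1..t) at each step."""
    n = len(pi)
    alpha = pi * np.array([gaussian_pdf(obs[0], mus[k], sigmas[k]) for k in range(n)])
    alpha /= alpha.sum()
    probs = [alpha]
    for x in obs[1:]:
        # Predict with the transition matrix, then weight by the emission density
        alpha = (alpha @ A) * np.array([gaussian_pdf(x, mus[k], sigmas[k]) for k in range(n)])
        alpha /= alpha.sum()
        probs.append(alpha)
    return np.array(probs)

# Two invented regimes: calm (low volatility) and stressed (high volatility)
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.10, 0.90]])       # sticky regime transitions
mus, sigmas = np.array([0.0, 0.0]), np.array([0.01, 0.05])

obs = np.array([0.002, -0.001, 0.003, 0.08, -0.07, 0.09])  # toy returns
filtered = forward_filter(obs, pi, A, mus, sigmas)
print(filtered[-1])  # probability of each regime at the final tick
```

A strategy like the one described would read the filtered state at each tick to recalibrate its buy/sell levels, and react to a state switch as a stop-loss trigger.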
Procedia PDF Downloads 151
3928 Managing Expatriates' Return: Repatriation Practices in a Sample of Firms in Portugal
Authors: Ana Pinheiro, Fatima Suleman
Abstract:
The literature reveals that companies are strongly aware of expatriation, but the issues associated with repatriating employees after an international assignment have been overlooked. Repatriation is one of the most challenging human resource practices: it affects how companies benefit from acquired skills and high-potential employees, and how they gain competitive advantage through the networks developed during expatriation. However, the empirical evidence achieved so far suggests that expatriates have been disappointed because companies lack an effective repatriation strategy. Repatriates' professional and emotional needs are often unrecognized, while repatriation is perceived as a non-issue by companies. The underlying assumption is that the return to the parent company, and to the original country, culture and language, does not demand any particular support. Unfortunately, this basic view has non-negligible consequences for repatriates, especially on expatriate retention and turnover rates after expatriation. The goal of our study is to examine the specific policies and practices adopted by companies to support employees after an international assignment. We assume that expatriation is a process which ends with repatriation; the latter is as crucial an issue as expatriation itself and requires due attention through the appropriate design of human resource management policies and tools. For this purpose, we use data from qualitative research based on interviews with a sample of firms operating in Portugal, and we attempt to compare how firms accommodate concerns with repatriation in their policies and practices. The interviews therefore collect data on both the expatriation and repatriation processes, namely the selection and skills of candidates for expatriation, training, mentoring, communication and pay policies. The Portuguese labor market seems to be an interesting case study for two main reasons.
On the one hand, the Portuguese Government is encouraging companies to internationalize in the context of an external market-oriented growth model. On the other hand, expatriation is being perceived as a job opportunity in a context of high unemployment rates for both the skilled and the non-skilled. This is ongoing research, and the data collected so far indicate that companies follow the pattern described in the literature. The interviewed companies recognize the greater relevance of the repatriation process compared with expatriation, but disregard specific human resource policies. They perceive that unfavorable labor market conditions discourage mobility across companies. It should be stressed that companies underline that employees have come to value stable jobs and attach far less importance to career development and other benefits after expatriation. However, there are still cases of turnover and difficulties of retention: managers report non-negligible cases of turnover associated with the lack of effective repatriation programs and the non-recognition of good performance. Repatriates seem to have acquired entrepreneurial spirit and skills and often create their own companies. These results suggest that, even in the context of worsening labor market conditions, there should be greater awareness of the need to retain talented, experienced and highly skilled employees. Otherwise, other companies poach invaluable assets, while internationalized companies risk becoming mere training providers.
Keywords: expatriates, expatriation, international management, repatriation
Procedia PDF Downloads 336
3927 Intellectual Capital as Resource Based Business Strategy
Authors: Vidya Nimkar Tayade
Abstract:
Introduction: The intellectual capital of an organization is a key factor in its success. Many companies invest huge amounts in their research and development activities, and any innovation is helpful not only to that particular company but also to many other companies, to the industry, and to mankind as a whole. Companies undertake innovative changes to increase the profitability of their capital and, indirectly, the pay packages of their employees. The quality of human capital can also improve through such positive changes: employees become more skilled and experienced through innovations and inventions. In studying how to increase intangible capital, the author has referred to several books and case studies, along with various charts and tables, to come to a conclusion. Case studies are particularly important because they are proven and established techniques: they enable students to apply theoretical concepts in real-world situations and give solutions to open-ended problems with multiple potential solutions. There are three different strategies for increasing intellectual capital: a research push strategy (technology-push approach), a market pull strategy (market-pull approach), and an open innovation strategy. Research push strategy: in this strategy, research is undertaken and innovation is achieved on its own. After the invention, the inventor company protects the invention and finds buyers for it; in this way, the invention is pushed into the market. Research and development are undertaken first, and the outcome of this research is then commercialized. Market pull strategy: in this strategy, commercial opportunities are identified first, and research is concentrated in that particular area. Research is undertaken to solve a particular problem, and it becomes easier to commercialize this type of invention because the problem is identified first and research and development activities are carried on in that direction.
Open innovation strategy: in this type of research, more than one company enters into a research agreement, and the benefits of the outcome are shared by the participating companies. Internal and external ideas and technologies are involved; these ideas are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of the employees of both companies can increase, and the benefit of commercializing the invention is also shared by both companies. Conclusion: In modern days, not only tangible assets but also intangible assets can be commercialized. The benefits of an invention can be shared by more than one company, competition can become more meaningful, and the pay packages of employees can improve. Adopting such strategies to benefit employees, competitors, and stakeholders is a need of the time.
Keywords: innovation, protection, management, commercialization
Procedia PDF Downloads 168
3926 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites
Authors: Yung-Chung Chuang
Abstract:
The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for archaeological site protection and management. In actual operation, however, on-site investigation and orthogonal photograph interpretation require a lot of time and manpower, so a good alternative for surveying surface vegetation in an automated or semi-automated manner is necessary. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification, with very high-resolution aerial photographs, for surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distributions, shadow effects and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shapes of canopy characters using landscape indices and multivariate statistics is also proposed: non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). People can therefore easily predict the forest type and vegetation land cover corresponding to a specific canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. As for forest type and vegetation type prediction, the average prediction accuracy reached 80.3%-91.7% with different sizes of test frame.
These results demonstrate that the technology is useful for archaeological site surveys and can improve classification efficiency and the data update rate.
Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics
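The two-stage procedure described above (non-hierarchical clustering to find canopy character groups, then discriminant functions for classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landscape-index features and data are invented, k-means stands in for the non-hierarchical cluster analysis, and scikit-learn's linear discriminant analysis stands in for canonical discriminant analysis.

```python
# Hypothetical sketch: k-means clustering of canopy characters followed by
# discriminant analysis to classify new image segments. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Invented landscape indices per image segment (texture, shadow, gap),
# drawn around seven well-separated group means
X = rng.normal(size=(210, 3)) + 3 * np.repeat(np.arange(7), 30)[:, None]

# Stage 1: non-hierarchical clustering into seven canopy character categories
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X)

# Stage 2: discriminant functions so new segments can be assigned a category
lda = LinearDiscriminantAnalysis().fit(X, labels)
new_segment = rng.normal(size=(1, 3)) + 9  # an unseen segment's indices
print(int(lda.predict(new_segment)[0]))
```

In practice the optimal number of clusters would be assessed (e.g., by a cluster-validity index) rather than fixed at seven as done here for brevity.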
Procedia PDF Downloads 142
3925 Achieving Product Robustness through Variation Simulation: An Industrial Case Study
Authors: Narendra Akhadkar, Philippe Delcambre
Abstract:
In power protection and control products, assembly process variations due to individual parts manufactured from single- or multi-cavity tooling are a major problem. The dimensional and geometrical variations of the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled product is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must react quickly and break the circuit to clear the fault; the response time is usually in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we investigate two important functional characteristics associated with the robust performance of the product. Experimental data obtained at the Schneider Electric Laboratory demonstrate the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand which critical parts of the product need to be manufactured with good, capable tolerances. 
Conversely, some parts are not critical for the functional characteristics (conditions) of the product, and relaxing their tolerances may reduce manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of a miniature circuit breaker (MCB), product quality and robustness are mainly impacted by two aspects: (1) the allocation of design tolerances between the components of a mechanical assembly and (2) the manufacturing tolerances in the intermediate machining steps of component fabrication.
Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation
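The core idea of variation simulation, propagating part-level tolerances to an assembly-level functional condition by Monte Carlo sampling, can be sketched in a few lines. This is a generic one-dimensional stack-up illustration, not the CETOL analysis from the study: the dimensions, tolerances, and specification limits are invented, and tolerances are assumed to span three standard deviations of a normal distribution.

```python
# Hypothetical 1-D Monte Carlo tolerance stack-up: the functional condition
# is a gap closure, gap = housing - (part_a + part_b). All numbers invented.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated assemblies

# Invented nominal dimensions (mm) with tolerances taken as 3 sigma
housing = rng.normal(30.00, 0.10 / 3, N)
part_a = rng.normal(14.95, 0.05 / 3, N)
part_b = rng.normal(14.90, 0.05 / 3, N)

gap = housing - (part_a + part_b)

# Fraction of assemblies meeting the (invented) functional condition
in_spec = np.mean((gap > 0.05) & (gap < 0.25))
print(f"mean gap {gap.mean():.3f} mm, yield {in_spec:.3%}")
```

A sensitivity step (e.g., correlating each input dimension with the gap) would then identify the critical parts that need capable tolerances, which is the kind of insight the abstract describes.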
Procedia PDF Downloads 164
3924 Electricity Load Modeling: An Application to Italian Market
Authors: Giovanni Masala, Stefania Marica
Abstract:
Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical electricity load data highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends and holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). We then perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression
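The deterministic component described above, a Fourier series capturing yearly, weekly, and daily periodicity plus a trend, can be sketched as a least-squares fit. This is an illustrative simplification, not the paper's model: the "load" series is synthetic, the fit is ordinary linear least squares rather than the non-linear regression used by the authors, and the stochastic ARMA-GARCH component on the residuals is omitted.

```python
# Sketch: fit trend + sine/cosine pairs at yearly (8760 h), weekly (168 h),
# and daily (24 h) periods to a synthetic hourly load series.
import numpy as np

t = np.arange(2 * 8760, dtype=float)  # two years of hourly observations

# Invented ground truth: trend plus seasonal, weekly, and daily cycles
rng = np.random.default_rng(1)
load = (30 + 0.0001 * t
        + 5 * np.cos(2 * np.pi * t / 8760)   # seasonal (winter peak)
        + 2 * np.cos(2 * np.pi * t / 168)    # weekly
        + 3 * np.cos(2 * np.pi * t / 24)     # daily
        + rng.normal(0, 1, t.size))

# Design matrix: intercept, linear trend, and one Fourier pair per period
cols = [np.ones_like(t), t]
for period in (8760, 168, 24):
    cols += [np.cos(2 * np.pi * t / period), np.sin(2 * np.pi * t / period)]
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ beta  # this part would be modeled with ARMA-GARCH
print(f"residual std: {residuals.std():.2f}")
```

On real data the residuals would still show volatility clustering, which motivates the ARMA-GARCH specification mentioned in the keywords.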
Procedia PDF Downloads 395
3923 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron
Authors: Filippo Portera
Abstract:
Standard algorithms usually employ a loss in which each error is simply the absolute difference between the true value and the prediction, in the case of a regression task. Here, we present several error-weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for a Multilayer Perceptron. Results prove that the error is never worse than with the standard procedure, and in several cases it is better.
Keywords: loss, binary-classification, MLP, weights, regression
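An error-weighting scheme of the kind described can be sketched as a loss where each squared error is scaled by a weight depending on the target. The particular weighting function below (heavier weight on larger targets) is an invented example for illustration, not the scheme proposed in the paper; passing no weighting function recovers the plain mean squared error.

```python
# Hypothetical generalized weighted loss for a regression task.
import numpy as np

def weighted_loss(y_true, y_pred, weight_fn=None):
    """Weighted mean of (y - yhat)^2; weight_fn=None gives plain MSE."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    err2 = (y_true - y_pred) ** 2
    if weight_fn is None:
        return err2.mean()
    w = weight_fn(y_true)
    return (w * err2).sum() / w.sum()  # normalized weighted average

y_true = np.array([1.0, 2.0, 10.0])
y_pred = np.array([1.5, 2.0, 8.0])

plain = weighted_loss(y_true, y_pred)                       # standard MSE
emph = weighted_loss(y_true, y_pred, lambda y: np.abs(y))   # weight by |y|
print(plain, emph)
```

For training, such a loss would be plugged into the optimizer's objective (for an MLP) or into the per-sample penalties (for a support vector machine) in place of the uniform error term.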
Procedia PDF Downloads 95
3922 Investigating the Impact of Super Bowl Participation on Local Economy: A Perspective of Stock Market
Authors: Rui Du
Abstract:
This paper assesses the impact of a major sporting event, the Super Bowl, on local economies. The identification strategy is to compare the winning and losing cities at the National Football League (NFL) conference finals under the assumption of similar pre-treatment trends. The stock market performances of companies headquartered in these cities are used to capture the sudden changes in local economic activity over a short time span. The exogenous variation in the football game outcome allows a straightforward difference-in-differences approach to identify the effect. This study finds that the post-event trends in winning and losing cities diverge despite the fact that both have economically and statistically similar pre-event trends. The empirical analysis provides suggestive evidence of a positive, significant local economic impact of conference final wins, possibly through city image enhancement. Further evidence shows heterogeneous effects across industrial sectors, suggesting that the city-image-enhancing effect of Super Bowl participation is empirically relevant for changes in the composition of local industries. The study also adopts a similar strategy to examine the local economic impact of Super Bowl victories but finds no statistically significant effect.
Keywords: Super Bowl participation, local economies, city image enhancement, difference-in-differences, industrial sectors
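The difference-in-differences logic of the identification strategy can be sketched numerically. All returns below are invented for illustration, not the paper's data: the estimator takes the pre-to-post change in mean returns for firms in winning cities and subtracts the corresponding change for firms in losing cities, which nets out any common shock around the game.

```python
# Minimal difference-in-differences sketch with synthetic abnormal returns.
import numpy as np

rng = np.random.default_rng(7)
N = 2000  # invented number of firm-day observations per cell

# Invented daily abnormal returns (%): winners gain ~0.20 pp post-event,
# losers ~0.02 pp, against a common pre-event baseline of zero.
pre_win, post_win = rng.normal(0.00, 0.5, N), rng.normal(0.20, 0.5, N)
pre_lose, post_lose = rng.normal(0.00, 0.5, N), rng.normal(0.02, 0.5, N)

# DiD: (post - pre) for winning cities minus (post - pre) for losing cities
did = (post_win.mean() - pre_win.mean()) - (post_lose.mean() - pre_lose.mean())
print(f"DiD estimate: {did:.3f} percentage points")
```

In a full analysis this estimate would come from a regression with treatment, period, and interaction terms, so that standard errors and the pre-trend assumption can be tested formally.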
Procedia PDF Downloads 240