Search results for: algorithmic pricing
Paper Count: 379

49 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
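
Since the paper's contribution centers on an FE formulation for the deck during launching, the snippet below shows, as a hedged illustration only, the standard 2-node Euler-Bernoulli beam element assembly that such a model builds on, checked against the analytical cantilever deflection. The section properties and loading are invented placeholders, not the paper's nose-deck system.

```python
# Sketch: a 2-node Euler-Bernoulli beam finite element assembly, the kind of
# building block a launching-stage deck model rests on. Geometry, stiffness,
# and loading here are illustrative placeholders, not the paper's nose-deck system.
import numpy as np

E, I = 210e9, 8.0e-3          # Young's modulus (Pa) and second moment of area (m^4)
L_total, n_elem = 30.0, 30    # cantilevered length (m) and number of elements
P = -50e3                     # point load at the free tip (N)

le = L_total / n_elem
k_e = E * I / le**3 * np.array([
    [ 12,      6*le,    -12,      6*le   ],
    [ 6*le,    4*le**2, -6*le,    2*le**2],
    [-12,     -6*le,     12,     -6*le   ],
    [ 6*le,    2*le**2, -6*le,    4*le**2],
])

n_dof = 2 * (n_elem + 1)       # deflection + rotation per node
K = np.zeros((n_dof, n_dof))
for e in range(n_elem):
    dofs = slice(2 * e, 2 * e + 4)
    K[dofs, dofs] += k_e       # assemble the element stiffness block

F = np.zeros(n_dof)
F[-2] = P                      # load applied on the tip deflection DOF

free = np.arange(2, n_dof)     # clamp node 0 (deflection and rotation fixed)
d = np.zeros(n_dof)
d[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

print(f"FE tip deflection:      {d[-2]: .6f} m")
print(f"analytical PL^3/(3EI):  {P * L_total**3 / (3 * E * I): .6f} m")
```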

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 88
48 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one classifier in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
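
A minimal sketch of the core idea described above, using model-based clustering to detect minority-class sub-clusters and then distributing synthetic oversampling across them, is given below. It assumes scikit-learn, uses a BIC-selected Gaussian mixture as the model-based clustering step, and allocates new samples inversely to sub-cluster size as a stand-in for the paper's complexity measure; the Lowner-John ellipsoid step is not reproduced.

```python
# Sketch: sub-cluster-aware oversampling of the minority class.
# Assumptions: scikit-learn is available; sub-cluster "complexity" is
# approximated by inverse cluster size (the paper uses a richer measure).
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X, y, minority_label=1, random_state=0):
    rng = np.random.default_rng(random_state)
    X_min = X[y == minority_label]
    n_needed = (y != minority_label).sum() - len(X_min)   # balance the classes
    if n_needed <= 0:
        return X, y

    # Model-based clustering: pick the number of sub-clusters by BIC.
    candidates = [GaussianMixture(k, random_state=random_state).fit(X_min)
                  for k in range(1, min(6, len(X_min)) + 1)]
    gmm = min(candidates, key=lambda m: m.bic(X_min))
    labels = gmm.predict(X_min)

    # Allocate more synthetic samples to smaller (harder) sub-clusters.
    sizes = np.bincount(labels, minlength=gmm.n_components).astype(float)
    weights = np.where(sizes > 0, 1.0 / np.maximum(sizes, 1.0), 0.0)
    weights /= weights.sum()
    alloc = rng.multinomial(n_needed, weights)

    synthetic = []
    for k, n_k in enumerate(alloc):
        if n_k == 0:
            continue
        members = X_min[labels == k]
        # Interpolate between random pairs inside the sub-cluster (SMOTE-like).
        idx_a = rng.integers(0, len(members), n_k)
        idx_b = rng.integers(0, len(members), n_k)
        t = rng.random((n_k, 1))
        synthetic.append(members[idx_a] + t * (members[idx_b] - members[idx_a]))

    X_new = np.vstack([X] + synthetic)
    y_new = np.concatenate([y, np.full(n_needed, minority_label)])
    return X_new, y_new
```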

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 411
47 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature (to the best of our knowledge, that of accurate numerical methods for risk allocation) but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as is the case in Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method span a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
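
As a purely illustrative sketch of the COS machinery the abstract relies on (not the authors' factor-copula implementation), the snippet below recovers a cumulative distribution function from a characteristic function with a cosine expansion and reads off a Value-at-Risk quantile; a standard normal loss is used so the result can be checked against scipy.

```python
# Sketch of the COS method: recover a CDF from a characteristic function,
# then invert it for a quantile (Value-at-Risk). Illustrative only; a
# factor-copula portfolio loss would supply its own characteristic function.
import numpy as np
from scipy.stats import norm

def cos_cdf(phi, x, a, b, N=256):
    """CDF at points x from characteristic function phi, truncated to [a, b]."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Fourier-cosine coefficients of the density on [a, b].
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                                   # first term is halved
    x = np.atleast_1d(x)
    # Integrate the cosine series term by term from a to x.
    terms = F[1:, None] * (b - a) / (k[1:, None] * np.pi) \
            * np.sin(k[1:, None] * np.pi * (x - a) / (b - a))
    return F[0] * (x - a) + terms.sum(axis=0)

# Standard normal "loss" as a test case: phi(u) = exp(-u^2 / 2).
phi = lambda u: np.exp(-0.5 * u**2)
a, b = -10.0, 10.0                                # truncation range
grid = np.linspace(a, b, 2001)
cdf = cos_cdf(phi, grid, a, b)

var_99 = grid[np.searchsorted(cdf, 0.99)]         # 99% quantile of the loss
print(f"COS 99% VaR: {var_99:.4f}  (exact: {norm.ppf(0.99):.4f})")
```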

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 159
46 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning the optimization of resources (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type element with connections that produce an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems (L-systems), cellular automata, genetic algorithms, or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. In the paper, the stages of the design process and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process, creating many 3D spatial forms using an algorithm conceived to apply its generating logic onto different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
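
To make the algorithmic procedures named above (shape grammars, Lindenmayer systems) concrete, here is a minimal, self-contained sketch of an L-system rewriting step of the kind such generative workflows build on; the axiom and rules are illustrative assumptions, not the authors' column-generating grammar.

```python
# Minimal L-system (Lindenmayer system) sketch: repeated rule-based rewriting
# of a symbol string, the same generative logic that shape-grammar workflows
# apply to geometry. The axiom and rules here are illustrative only.
def expand(axiom: str, rules: dict[str, str], generations: int) -> str:
    state = axiom
    for _ in range(generations):
        # Rewrite every symbol in parallel; symbols without a rule are kept.
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

# Example: a branching rule set ("F" = draw forward, "+"/"-" = turn,
# "[" / "]" = push/pop state when interpreted as turtle graphics).
rules = {"F": "F[+F]F[-F]F"}
for g in range(3):
    print(f"generation {g}: {expand('F', rules, g)}")
```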

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 366
45 Subsidying Local Health Policy Programs as a Public Management Tool in the Polish Health Care System

Authors: T. Holecki, J. Wozniak-Holecka, P. Romaniuk

Abstract:

Due to the highly centralized model of financing health care in Poland, local self-governments have rarely undertaken their own initiatives in the field of public health, particularly health promotion. However, since 2017 it has been possible to apply for a subsidy for health policy programs, with the additional resources retrieved from the National Health Fund, which is the dominant payer in the health system. The amount of the subsidy depends on the number of inhabitants in a given unit and covers about 40% of the total cost of the program. The aim of this paper is to assess the impact of the newly implemented solutions in financing health policy on the management of public finances, as well as on the activity of local self-governments in health promotion. An effort is made to estimate the expenditures that both local governments and the National Health Fund devoted to local health policy programs while implementing the new solutions. The research method is the analysis of financial data obtained from the National Health Fund and from local government units, as well as of reports published by the Agency for Health Technology Assessment and Pricing, which holds substantive control over the health policy programs and releases permissions for their implementation. The study was based on a comparative analysis of expenditures on the implementation of health programs in Poland in the years 2010-2018. The presentation of the results includes the average annual expenditures of local government units per inhabitant, the total number of positively evaluated applications, and the percentage share in the total expenditures of local governments (16 voivodeship areas). The most essential purpose is to determine whether the assumptions of the subsidy program work correctly in practice and what the real effects of introducing the legislative changes at local government levels are in the context of public health tasks. The assumption of the study was that the use of a new motivation tool in the field of public management would result in a multiplication of the resources invested in the provision of health policy programs. Preliminary conclusions show that financial expenditures changed significantly after the introduction of public funding at the level of 40%, with an increase in funding from local governments' own funds at the level of 80 to 90%.

Keywords: health care system, health policy programs, local self-governments, public health management

Procedia PDF Downloads 150
44 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. In addition, the complete cases may be strongly distorted, and the reason that data is missing might itself contain information, which is ignored with that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data in the model, the distortion of the first training set, the complete data, vanishes. Next, the performances of the algorithms are measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates which are based on imputed data sets do not differ significantly from each other; however, the demand estimate that is derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though it involves additional procedures such as data imputation.
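
A compressed sketch of the two stages described above, under assumed data, is given below: missing search-subscription fields are filled with an iterative, model-based imputer, and a survival function (one minus the fitted price CDF) is then read off as the share of stated willingness to pay still above a given asking price. The imputer choice and the lognormal price fit are illustrative, not the paper's exact algorithms.

```python
# Sketch: (1) impute missing fields in search-subscription data, then
# (2) fit a price distribution and derive a survival function.
# Illustrative assumptions: iterative imputation stands in for the paper's
# unsupervised-learning imputers; prices are modelled as lognormal.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from scipy import stats

rng = np.random.default_rng(0)

# Toy subscription data: columns = [rooms wanted, max price in kCHF],
# with some prices left unspecified (NaN) by the users.
data = np.column_stack([
    rng.integers(1, 6, 500).astype(float),
    rng.lognormal(mean=7.2, sigma=0.3, size=500),
])
data[rng.random(500) < 0.4, 1] = np.nan          # 40% of prices are missing

# (1) Impute missing willingness-to-pay values from the observed structure.
imputed = IterativeImputer(random_state=0).fit_transform(data)

# (2) Fit a lognormal price distribution and compute the survival function.
shape, loc, scale = stats.lognorm.fit(imputed[:, 1], floc=0)
price_grid = np.linspace(500, 3000, 6)           # asking prices in kCHF
survival = stats.lognorm.sf(price_grid, shape, loc=loc, scale=scale)

for p, s in zip(price_grid, survival):
    print(f"asking price {p:7.0f} kCHF -> share of demand still in market: {s:.2f}")
```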

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 280
43 Choice Analysis of Ground Access to São Paulo/Guarulhos International Airport Using Adaptive Choice-Based Conjoint Analysis (ACBC)

Authors: Carolina Silva Ansélmo

Abstract:

Airports are demand-generating poles that affect the flow of traffic around them. The airport access system must be fast, convenient, and adequately planned, considering its potential users. An airport with good ground access conditions can provide the user with a more satisfactory access experience. When several transport options are available, service providers must understand users' preferences and the expected quality of service. The present study focuses on airport access in a comparative scenario between bus, private vehicle, subway, taxi and urban mobility transport applications to São Paulo/Guarulhos International Airport. The objectives are (i) to identify the factors that influence the choice, (ii) to measure Willingness to Pay (WTP), and (iii) to estimate the market share for each modal. The applied method was Adaptive Choice-based Conjoint Analysis (ACBC) technique using Sawtooth Software. Conjoint analysis, rooted in Utility Theory, is a survey technique that quantifies the customer's perceived utility when choosing alternatives. Assessing user preferences provides insights into their priorities for product or service attributes. An additional advantage of conjoint analysis is its requirement for a smaller sample size compared to other methods. Furthermore, ACBC provides valuable insights into consumers' preferences, willingness to pay, and market dynamics, aiding strategic decision-making to provide a better customer experience, pricing, and market segmentation. In the present research, the ACBC questionnaire had the following variables: (i) access time to the boarding point, (ii) comfort in the vehicle, (iii) number of travelers together, (iv) price, (v) supply power, and (vi) type of vehicle. The case study questionnaire reached 213 valid responses considering the scenario of access from the São Paulo city center to São Paulo/Guarulhos International Airport. As a result, the price and the number of travelers are the most relevant attributes for the sample when choosing airport access. The market share of the selection is mainly urban mobility transport applications, followed by buses, private vehicles, taxis and subways.
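
The market-share step of a conjoint study like this one is typically a share-of-preference simulation: each respondent's part-worth utilities are summed per alternative and converted to choice probabilities. The sketch below illustrates that logic for the five access modes studied; the part-worth values are invented placeholders (only the sample size of 213 respondents is taken from the abstract), not the survey's estimates.

```python
# Share-of-preference sketch for a conjoint (ACBC-style) market simulation.
# The utilities below are invented placeholders, not the survey's estimates.
import numpy as np

modes = ["urban mobility app", "bus", "private vehicle", "taxi", "subway"]

# Rows = respondents, columns = total utility of each access mode, assembled
# in practice from estimated part-worths for time, comfort, price, etc.
rng = np.random.default_rng(42)
utilities = rng.normal(loc=[0.8, 0.3, 0.2, -0.1, 0.0], scale=1.0, size=(213, 5))

# Logit (share-of-preference) rule: softmax of utilities per respondent.
expu = np.exp(utilities - utilities.max(axis=1, keepdims=True))
choice_prob = expu / expu.sum(axis=1, keepdims=True)

market_share = choice_prob.mean(axis=0)           # average over respondents
for mode, share in sorted(zip(modes, market_share), key=lambda t: -t[1]):
    print(f"{mode:20s} {share:.1%}")
```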

Keywords: adaptive choice-based conjoint analysis, ground access to airport, market share, willingness to pay

Procedia PDF Downloads 71
42 Training for Search and Rescue Teams: Online Training for SAR Teams to Locate Lost Persons with Dementia Using Drones

Authors: Dalia Hanna, Alexander Ferworn

Abstract:

This research provides detailed proposed training modules for public safety teams and, specifically, for SAR teams responsible for search and rescue operations related to finding lost persons with dementia. Finding a lost person alive is the goal of this training. Time matters if a lost person is to be found alive. Finding lost people living with dementia is quite challenging, as they are unaware they are lost and will not seek help. Even a small contribution to SAR operations could contribute to saving a life. SAR operations will always require expert professionals and human volunteers. However, we can reduce their time, save lives, and reduce costs by providing practical training that is based on real-life scenarios. The content of the proposed training is based on the research work done by the researcher in this area. This research has demonstrated that, based on utilizing drones, an algorithmic approach could support a successful search outcome. Understanding the behavior of the lost person, learning where they may be found, predicting their survivability, and automating the search are all contributions of this work, founded in theory and demonstrated in practice. In crisis management, human behavior constitutes a vital aspect of responding to the crisis; the speed and efficiency of the response are often affected by the difficulty of the context of the operation. Therefore, training in this area plays a significant role in preparing the crisis manager to manage the emotional aspects that lead to decision-making in these critical situations. Since it is crucial to gain high-level strategic choices and the ability to apply crisis management procedures, simulation exercises become central in training crisis managers to gain the skills needed to respond critically to these events. The training will enhance the responders' ability to make decisions and anticipate possible consequences of their actions through flexible and revolutionary reasoning in responding to the crisis efficiently and quickly. As adult learners, search and rescue teams will approach training and learning by taking responsibility for the learning process, appreciating flexible learning, and contributing to the teaching and learning happening during that training. These are all characteristics of adult learning theories. The learner self-reflects, gathers information, collaborates with others, and is self-directed. One of the learning strategies associated with adult learning is effective elaboration. It helps learners to remember information in the long term and use it in situations where it might be appropriate. It is also a strategy that can be taught easily and used with learners of different ages. Designers must design reflective activities to improve the student's intrapersonal awareness.

Keywords: training, OER, dementia, drones, search and rescue, adult learning, UDL, instructional design

Procedia PDF Downloads 100
41 Techno Commercial Aspects of Using LPG as an Alternative Energy Solution for Transport and Industrial Sector in Bangladesh: Case Studies in Industrial Sector

Authors: Mahadehe Hassan

Abstract:

The transport system and industries, which are the main basis of the industrial and socio-economic development of any country, are mainly dependent on fossil fuels. Bangladesh has fossil fuel reserves of 9.51 TCF as of July 2023, and if no new gas fields are discovered in the next 7-9 years and the existing gas consumption rate continues, the fossil fuel reserves will be exhausted. The demand for petroleum products in Bangladesh is increasing steadily, with 63% imported by BPC and 37% imported by private companies. 61.61% of BPC-imported products are used in the transport sector and 5.49% in the industrial sector, which is expensive and harmful to the environment. Liquefied Petroleum Gas (LPG) should be considered as an alternative energy source for Bangladesh based on the Sustainable Development Goals (SDGs) criteria for sustainable, clean, and affordable energy. This would not only lead to the much desired mitigation of the energy famine in the country but also contribute favorably to the macroeconomic indicators. Considering environmental and economic issues, the government has preferred CNG (compressed natural gas) as the fuel carrier since 2000, but currently, due to the declining gas reserves, the government of Bangladesh is considering new energy sources for the transport and industrial sectors that will be sustainable, environmentally friendly, and economically viable. Liquefied Petroleum Gas (LPG) is the best choice for fueling the transport and industrial sectors in Bangladesh. At present, a total of 1.54 million metric tons of liquefied petroleum gas (LPG) is marketed in Bangladesh by the public and private sectors. 83% of it is used by households, 12% by industry and commerce, and 5% by transportation. Industrial and transport sector consumption is negligible compared to household consumption. So the purpose of the research is to find out the challenges of LPG market development in the transport and industrial sectors in Bangladesh and make recommendations to reduce those challenges. An insecure supply chain, inadequate infrastructure, insufficient investment, lack of government monitoring, and low consumer awareness in the transport and industrial sectors are major challenges for LPG market development in Bangladesh. The Bangladesh government as well as private owners should come forward in the development of the liquefied petroleum gas (LPG) industry to reduce the challenges of securing the energy sector for sustainable development. Furthermore, ensuring an adequate Liquefied Petroleum Gas (LPG) supply in Bangladesh requires government regulations, infrastructure improvements in port areas, awareness raising, and, most importantly, proper pricing of Liquefied Petroleum Gas (LPG) to address the energy crisis in Bangladesh.

Keywords: transport and industries fuel, LPG consumption, challenges, economic sustainability

Procedia PDF Downloads 78
40 The Optimum Biodiesel Blend in Low Sulfur Diesel and Its Physico-Chemical Properties and Economic Aspect

Authors: Ketsada Sutthiumporn, Sittichot Thongkaw, Malee Santikunaporn

Abstract:

In Thailand, biodiesel has been utilized as an attractive substitute for petroleum diesel, and the government imposes a mandatory biodiesel blending requirement in the transport sector to improve energy security, support the agricultural sector, and reduce emissions. Though biodiesel blends have many advantages over diesel fuel, such as improved lubricity, low sulfur content, and a higher flash point, there are still some technical problems, such as oxidative stability, poor cold-flow properties, and impurities. Such problems are related to the fatty acid composition of the feedstock. Moreover, Thailand has announced the use of low sulfur diesel as a base diesel and will be upgrading to EURO 5 in 2023. The ultra-low sulfur content may affect the diesel fuel properties, especially lubricity. Therefore, in this study, the physical and chemical properties of palm oil-based biodiesel in low sulfur diesel blends from different producers are investigated by standard methods per ASTM and EN. Also, its economic benefits based on the diesel price structure in Thailand are highlighted. The appropriate biodiesel blend ratio can provide suitable physico-chemical properties at a reasonable price for the country. Properties of the biodiesel, including specific gravity, kinematic viscosity, FAME composition, flash point, sulfur, water, oxidation stability, and lubricity, were measured by standard methods of ASTM and EN. The results show that the FAME composition of the biodiesel spans fatty acids from C12:0 to C20:1, mostly C16:0, C18:0, C18:1, and C18:2, which are the main characteristic compositions of palm biodiesel. The physical and chemical properties of the biodiesel-blended diesel, such as specific gravity, flash point, and kinematic viscosity, were found to increase with an increasing amount of biodiesel, while the sulfur value decreased. Moreover, in this study, the various properties of each biodiesel blend were plotted to determine the appropriate proportional range of biodiesel-blended diesel with an optimum fuel price. It can be seen that the amount of B100 can range from 1% up to 7%, for which the quality is in accordance with the Notification of the Department of Energy Business. The understanding of the relation between the physico-chemical properties of palm oil-based biodiesel and pricing is beneficial to guide the better development of desired feedstock in Thailand and to implement biodiesel blends with a competitive price and good diesel engine performance.

Keywords: fatty acid methyl ester, biodiesel, fuel price structure, palm oil in Thailand

Procedia PDF Downloads 109
39 Liquid Illumination: Fabricating Images of Fashion and Architecture

Authors: Sue Hershberger Yoder, Jon Yoder

Abstract:

“The appearance does not hide the essence, it reveals it; it is the essence.”—Jean-Paul Sartre, Being and Nothingness Three decades ago, transarchitect Marcos Novak developed an early form of algorithmic animation he called “liquid architecture.” In that project, digitally floating forms morphed seamlessly in cyberspace without claiming to evolve or improve. Change itself was seen as inevitable. And although some imagistic moments certainly stood out, none was hierarchically privileged over another. That project challenged longstanding assumptions about creativity and artistic genius by posing infinite parametric possibilities as inviting alternatives to traditional notions of stability, originality, and evolution. Through ephemeral processes of printing, milling, and projecting, the exhibition “Liquid Illumination” destabilizes the solid foundations of fashion and architecture. The installation is neither worn nor built in the conventional sense, but—like the sensual art forms of fashion and architecture—it is still radically embodied through the logics and techniques of design. Appearances are everything. Surface pattern and color are no longer understood as minor afterthoughts or vapid carriers of dubious content. Here, they become essential but ever-changing aspects of precisely fabricated images. Fourteen silk “colorways” (a term from the fashion industry) are framed selections from ongoing experiments with intricate pattern and complex color configurations. Whether these images are printed on fabric, milled in foam, or illuminated through projection, they explore and celebrate the untapped potentials of the surficial and superficial. Some components of individual prints appear to float in front of others through stereoscopic superimpositions; some figures appear to melt into others due to subtle changes in hue without corresponding changes in value; and some layers appear to vibrate via moiré effects that emerge from unexpected pattern and color combinations. The liturgical atmosphere of Liquid Illumination is intended to acknowledge that, like the simultaneously sacred and superficial qualities of rose windows and illuminated manuscripts, artistic and religious ideologies are also always malleable. The intellectual provocation of this paper pushes the boundaries of current thinking concerning viable applications for fashion print designs and architectural images—challenging traditional boundaries between fine art and design. The opportunistic installation of digital printing, CNC milling, and video projection mapping in a gallery that is normally reserved for fine art exhibitions raises important questions about cultural/commercial display, mass customization, digital reproduction, and the increasing prominence of surface effects (color, texture, pattern, reflection, saturation, etc.) across a range of artistic practices and design disciplines.

Keywords: fashion, print design, architecture, projection mapping, image, fabrication

Procedia PDF Downloads 86
38 Nurturing Students' Creativity through Engagement in Problem Posing and Self-Assessment of Its Development

Authors: Atara Shriki, Ilana Lavy

Abstract:

In a rapidly changing technological society, creativity is considered an engine of economic and social progress. No doubt the education system has a central role in nurturing all students' creativity; however, it is normally not encouraged at school. The causes of this reality are related to a variety of circumstances, among them: external pressures to cover the curriculum and succeed in standardized tests that mostly require algorithmic thinking and the implementation of rules; teachers' tendency to teach similarly to the way they themselves were taught as school students; relating creativity to giftedness, and therefore avoiding nurturing all students' creativity; lack of adequate learning materials and accessible tools for following and evaluating the development of students' creativity; and more. Since success in academic studies requires, among other things, creativity, lecturers in higher education institutions should consider appropriate ways to nurture students' creative thinking and assess its development. Obviously, creativity has a multifaceted nature, numerous definitions, various perspectives for studying its essence (e.g., process, personality, environment, and product), and several approaches aimed at evaluating and assessing creative expressions (e.g., cognitive, social-personal, and psychometric). In this framework, we suggest nurturing students' creativity by engaging them in problem posing activities that are part of inquiry assignments. In order to assess the development of their creativity, we propose to employ a model that was designed for this purpose, based on the psychometric approach, viewing the posed problems as the "creative product". The model considers four measurable aspects, fluency, flexibility, originality, and organization, as well as a total score of creativity that reflects the relative weights of each aspect. The scores given to learners are of two types: (1) total scores, the absolute number of posed problems with respect to each of the four aspects, and a final score of creativity; (2) relative scores, where each absolute number is transformed into a number that relates to the relative infrequency of the posed problems in the student's reference group. By converting the scores received over time into a graphical display, students can assess their progress both with respect to themselves and relative to their reference group. Course lecturers can get a picture of the strengths and weaknesses of each student as well as of the class as a whole, and can track changes that occur over time in response to the learning environment they have generated. Such tracking may assist lecturers in making pedagogical decisions about the emphases that should be put on one or more aspects of creativity, and about the students that should be given special attention. Our experience indicates that schoolteachers and lecturers in higher education institutes find that the combination of engaging learners in problem posing with self-assessment of their progress, through the graphical display of accumulating total and relative scores, has the potential to realize most learners' creative potential.
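
A small sketch of how the two score types described above could be computed is given below; the aspect weights and the infrequency-based relative score are illustrative assumptions consistent with the description, not the authors' calibrated model.

```python
# Sketch: total and relative creativity scores for posed problems.
# Weights and the infrequency transform are illustrative assumptions.
from collections import Counter

# Each posed problem is tagged with the category (type) it belongs to.
student_problems = ["area", "area", "ratio", "ratio", "proportion", "novel-graph"]
reference_group = ["area"] * 40 + ["ratio"] * 30 + ["proportion"] * 20 + ["novel-graph"] * 2

weights = {"fluency": 0.2, "flexibility": 0.3, "originality": 0.4, "organization": 0.1}

# Total (absolute) scores.
fluency = len(student_problems)                       # number of posed problems
flexibility = len(set(student_problems))              # number of distinct categories
ref_counts = Counter(reference_group)
ref_total = len(reference_group)
# Originality: problems whose category is rare (< 5%) in the reference group.
originality = sum(1 for p in student_problems if ref_counts[p] / ref_total < 0.05)
organization = 1                                      # e.g. rated 0/1 by the lecturer

totals = {"fluency": fluency, "flexibility": flexibility,
          "originality": originality, "organization": organization}
total_creativity = sum(weights[a] * s for a, s in totals.items())

# Relative scores: weight each problem by how infrequent its category is.
relative = sum(1 - ref_counts[p] / ref_total for p in student_problems)

print("totals:", totals)
print(f"weighted total creativity: {total_creativity:.2f}")
print(f"relative (infrequency-weighted) score: {relative:.2f}")
```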

Keywords: creativity, problem posing, psychometric model, self-assessment

Procedia PDF Downloads 312
37 Assessing the Effectiveness of Warehousing Facility Management: The Case of Mantrac Ghana Limited

Authors: Kuhorfah Emmanuel Mawuli

Abstract:

Generally, for firms to enhance their operational efficiency of logistics, it is imperative to assess the logistics function. The cost of logistics conventionally represents a key consideration in the pricing decisions of firms, which suggests that cost efficiency in logistics can go a long way to improve margins. Warehousing, which is a key part of logistics operations, has the prospect of influencing operational efficiency in logistics management as well as customer value, but this potential has often not been recognized. It has been found that there is a paucity of research that evaluates the efficiency of warehouses. Indeed, limited research has been conducted to examine potential barriers to effective warehousing management. Due to this paucity of research, there is limited knowledge on how to address the obstacles associated with warehousing management. In order for warehousing management to become profitable, there is the need to integrate, balance, and manage the economic inputs and outputs of the entire warehouse operations, something that many firms tend to ignore. Management of warehousing is not solely related to storage functions. Instead, effective warehousing management requires such practices as maximum possible mechanization and automation of operations, optimal use of space and capacity of storage facilities, organization through "continuous flow" of goods, a planned system of storage operations, and safety of goods. For example, there is an important need for space utilization of the warehouse surface as it is a good way to evaluate the storing operation and pick items per hour. In the setting of Mantrac Ghana, not much knowledge regarding the management of the warehouses exists. The researcher has personally observed many gaps in the management of the warehouse facilities in the case organization Mantrac Ghana. It is important, therefore, to assess the warehouse facility management of the case company with the objective of identifying weaknesses for improvement. The study employs an in-depth qualitative research approach using interviews as a mode of data collection. Respondents in the study mainly comprised warehouse facility managers in the studied company. A total of 10 participants were selected for the study using a purposive sampling strategy. Results emanating from the study demonstrate limited warehousing effectiveness in the case company. Findings further reveal that the major barriers to effective warehousing facility management comprise poor layout, poor picking optimization, labour costs, and inaccurate orders; policy implications of the study findings are finally outlined.

Keywords: assessing, warehousing, facility, management

Procedia PDF Downloads 59
36 Modelling Agricultural Commodity Price Volatility with Markov-Switching Regression, Single Regime GARCH and Markov-Switching GARCH Models: Empirical Evidence from South Africa

Authors: Yegnanew A. Shiferaw

Abstract:

Background: commodity price volatility originating from excessive commodity price fluctuation has been a global problem, especially after the recent financial crises. Volatility is a measure of risk or uncertainty in financial analysis. It plays a vital role in risk management, portfolio management, and equity pricing. Objectives: the core objective of this paper is to examine the relationship between the prices of agricultural commodities and the oil price, gas price, coal price, and exchange rate (USD/Rand). In addition, the paper tries to fit an appropriate model that best describes the log-return price volatility and to estimate Value-at-Risk and expected shortfall. Data and methods: the data used in this study are the daily returns of agricultural commodity prices from 2 January 2007 to 31 October 2016. The data set consists of the daily returns of agricultural commodity prices, namely white maize, yellow maize, wheat, sunflower, soya, corn, and sorghum. The paper applies the three-state Markov-switching (MS) regression, the standard single-regime GARCH, and the two-regime Markov-switching GARCH (MS-GARCH) models. Results: to choose the best-fit model, the log-likelihood function, Akaike information criterion (AIC), Bayesian information criterion (BIC), and deviance information criterion (DIC) are employed under three distributions for the innovations. The results indicate that: (i) the price of agricultural commodities was found to be significantly associated with the price of coal, the price of natural gas, the price of oil, and the exchange rate; (ii) for all agricultural commodities except sunflower, k=3 had higher log-likelihood values and lower AIC and BIC values, and thus the three-state MS regression model outperformed the two-state MS regression model; and (iii) MS-GARCH(1,1) with generalized error distribution (ged) innovations performs best for white maize and yellow maize; MS-GARCH(1,1) with Student-t distribution (std) innovations performs better for sorghum; MS-gjrGARCH(1,1) with ged innovations performs better for wheat, sunflower, and soya; and MS-GARCH(1,1) with std innovations performs better for corn. In conclusion, this paper provides a practical guide for modelling agricultural commodity prices by MS regression and MS-GARCH processes. This paper can serve as a reference when facing agricultural commodity price modelling problems.
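
For readers who want to reproduce the innovation-distribution comparison on their own return series, here is a minimal single-regime sketch using the Python arch package (the Markov-switching GARCH variants themselves typically require specialized software such as the R MSGARCH package); the simulated returns are placeholders for the commodity data.

```python
# Sketch: compare GARCH(1,1) innovation distributions by AIC/BIC on a
# return series, mirroring the single-regime part of the model comparison.
# Requires the `arch` package; the simulated series stands in for the
# daily commodity returns used in the paper.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=2500)          # placeholder daily returns (%)

results = {}
for dist in ("normal", "t", "ged"):
    model = arch_model(returns, vol="GARCH", p=1, q=1, dist=dist)
    res = model.fit(disp="off")
    results[dist] = (res.loglikelihood, res.aic, res.bic)

print(f"{'dist':8s} {'loglik':>10s} {'AIC':>10s} {'BIC':>10s}")
for dist, (ll, aic, bic) in results.items():
    print(f"{dist:8s} {ll:10.2f} {aic:10.2f} {bic:10.2f}")
```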

Keywords: commodity prices, MS-GARCH model, MS regression model, South Africa, volatility

Procedia PDF Downloads 197
35 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine

Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez

Abstract:

An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind source available in the area of interest. Many times, an optimal design means the use of large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. For the construction and installation of a wind turbine, the blades may represent up to 20% of the overall price, and they become even more important because they are part of the rotor system, which is in charge of transmitting the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during the normal production situation, so that better decisions for design and optimization can be taken. The simulation is a dynamic case, since a time-history wind velocity is used as the inlet condition instead of a constant wind velocity. The process begins with the freely available software NuMAD (NREL), used to model the blade and assign material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and modal shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as boundary conditions for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the inlet wind velocity of the next time step and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step until the entire load case is completed. This work is part of a set of projects managed by a national consortium called "CEMIE-Eólico" (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the creation of specialized human resources, and link academia with the private sector in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
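
The time-step-by-time-step coupling loop described above can be summarized schematically as below. The solver calls are hypothetical placeholders (real runs go through the ANSYS CFD/FEA solvers and their coupling interface); the sketch only shows the order in which wind input, pressure fields, and deformations are exchanged.

```python
# Schematic two-way FSI coupling loop (placeholder solver calls; in practice
# these steps are executed by the CFD and FEA solvers inside ANSYS Workbench).
from typing import Sequence

def run_cfd(wind_speed: float, deformed_shape):
    """Hypothetical CFD call: returns the blade surface pressure field."""
    ...

def run_fea(pressure_field):
    """Hypothetical FEA call: returns the blade deformation field."""
    ...

def two_way_fsi(wind_time_history: Sequence[float]):
    deformation = None                      # undeformed blade at the first step
    results = []
    for step, wind_speed in enumerate(wind_time_history):
        # 1) CFD on the current geometry with this time step's inlet velocity.
        pressure = run_cfd(wind_speed, deformation)
        # 2) FEA with the CFD pressures applied as boundary conditions.
        deformation = run_fea(pressure)
        # 3) Log the step; the deformed shape feeds the next CFD configuration.
        results.append((step, wind_speed, deformation))
    return results
```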

Keywords: blade, dynamic, fsi, wind turbine

Procedia PDF Downloads 475
34 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried which cannot be fitted into predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too broad to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc., are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to get maxima and minima for the given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for the simulations and results. MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data in order to design the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as the slot line characteristic impedance, the impedance of the stripline, the slot line width, the flare aperture size, and the dielectric constant, and K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and machine learning approach for automated antenna optimization for the Vivaldi antenna.
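
Below is a compact sketch of the kind of genetic-algorithm loop the paper automates around the solver; the fitness call is a stand-in for an HFSS evaluation (invoked in the actual workflow through the HFSS API from MATLAB), and the parameter bounds are illustrative, not the Vivaldi design equations.

```python
# Sketch of a genetic algorithm over antenna geometry parameters.
# The fitness function is a placeholder for a solver (HFSS) evaluation;
# bounds and population settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
BOUNDS = np.array([[0.5, 3.0],    # slot line width (mm)
                   [10.0, 60.0],  # flare aperture size (mm)
                   [2.0, 10.0]])  # substrate permittivity

def fitness(individual: np.ndarray) -> float:
    """Placeholder for a solver evaluation (e.g. gain/bandwidth from HFSS)."""
    target = np.array([1.2, 35.0, 4.4])
    return -np.sum(((individual - target) / (BOUNDS[:, 1] - BOUNDS[:, 0])) ** 2)

def evolve(pop_size=30, generations=40, mutation_scale=0.1):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, len(BOUNDS)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # selection
        # Recombination: average random pairs of parents.
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        children = parents[pairs].mean(axis=1)
        # Mutation: small Gaussian perturbation, clipped to the bounds.
        children += rng.normal(0, mutation_scale, children.shape) * \
                    (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = evolve()
print("best geometry:", np.round(best, 2), " fitness:", round(score, 4))
```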

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 104
33 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs, because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing is in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, the surveys carried out so far have relied on deploying officers to survey each PDAM customer one by one. Surveys with this method require a lot of effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The use of this tool is also quite simple: the device is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors equipped with GNSS, but this technology is costly. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors added to GNSS and IMU sensors. The camera used has specifications of 3 MP with a resolution of 720p and a diagonal field of view of 78°. The principle of this invention is to integrate four webcam camera sensors with GNSS and an IMU to acquire photo data tagged with location data (latitude, longitude) and orientation data (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum suction mount to attach it to the car's roof so that it does not fall off while driving. The output data from this technology is analyzed with artificial intelligence to reduce similar data (cosine similarity) and then classify building types. Data reduction is used to eliminate similar images and retain the image that displays the complete house, so that it can be processed for the later classification of buildings. The AI method used is transfer learning, utilizing a pre-trained model named VGG-16. From the analysis of similarity data, it was found that the data reduction reached 50%. Then georeferencing is done using the Google Maps API to get address information corresponding to the coordinates in the data. After that, a geographic join is performed to link the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
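
The similarity-based reduction step described above can be illustrated with the short sketch below: feature vectors (in the study, taken from a pre-trained VGG-16) are compared pairwise with cosine similarity, and images above an assumed threshold are treated as near-duplicates and dropped. The random features and the 0.9 threshold are placeholders.

```python
# Sketch: drop near-duplicate images by cosine similarity of feature vectors.
# In the study the features come from a pre-trained VGG-16; random vectors
# and the 0.9 threshold are placeholder assumptions here.
import numpy as np

def deduplicate(features: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Return indices of images kept after removing near-duplicates."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = []
    for i, vec in enumerate(normed):
        # Compare against everything already kept; skip if too similar.
        if all(float(vec @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(0)
base = rng.normal(size=(20, 4096))                             # 20 "distinct" images
dupes = base[:10] + rng.normal(scale=0.01, size=(10, 4096))    # noisy copies
features = np.vstack([base, dupes])

kept = deduplicate(features)
print(f"kept {len(kept)} of {len(features)} images "
      f"({1 - len(kept) / len(features):.0%} reduction)")
```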

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 77
32 Racial Distress in the Digital Age: A Mixed-Methods Exploration of the Effects of Social Media Exposure to Police Brutality on Black Students

Authors: Amanda M. McLeroy, Tiera Tanksley

Abstract:

The 2020 movement for Black Lives, ignited by anti-Black police brutality and exemplified by the public execution of George Floyd, underscored the dual potential of social media for political activism and perilous exposure to traumatic content for Black students. This study employs Critical Race Technology Theory (CRTT) to scrutinize algorithmic anti-blackness and its impact on Black youth's lives and educational experiences. The research investigates the consequences of vicarious exposure to police brutality on social media among Black adolescents through qualitative interviews and quantitative scale data. The findings reveal an unprecedented surge in exposure to viral police killings since 2020, resulting in profound physical, socioemotional, and educational effects on Black youth. CRTT forms the theoretical basis, challenging the notion of digital technologies as post-racial and neutral, aiming to dismantle systemic biases within digital systems. Black youth, averaging over 13 hours of daily social media use, face constant exposure to graphic images of Black individuals dying. The study connects this exposure to a range of physical, socioemotional, and mental health consequences, emphasizing the urgent need for understanding and support. The research proposes questions to explore the extent of police brutality exposure and its effects on Black youth. Qualitative interviews with high school and college students and quantitative scale data from undergraduates contribute to a nuanced understanding of the impact of police brutality exposure on Black youth. Themes of unprecedented exposure to viral police killings, physical and socioemotional effects, and educational consequences emerge from the analysis. The study uncovers how vicarious experiences of negative police encounters via social media lead to mistrust, fear, and psychosomatic symptoms among Black adolescents. Implications for educators and counselors are profound, emphasizing the cultivation of empathy, provision of mental health support, integration of media literacy education, and encouragement of activism. Recognizing family and community influences is crucial for comprehensive support. Professional development opportunities in culturally responsive teaching and trauma-informed approaches are recommended for educators. In conclusion, creating a supportive educational environment that addresses the emotional impact of social media exposure to police brutality is crucial for the well-being and development of Black adolescents. Counselors, through safe spaces and collaboration, play a vital role in supporting Black youth facing the distressing effects of social media exposure to police brutality.

Keywords: black youth, mental health, police brutality, social media

Procedia PDF Downloads 45
31 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using an adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each step. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed; however, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are also kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
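
The knot-insertion mechanism the methodology relies on, adding control points without changing the existing shape, can be demonstrated with SciPy's B-spline utilities; the sketch below inserts a knot into an arbitrary spline and checks that the curve is unchanged, which is the property that lets the design space grow while preserving the current design. The example curve is an illustrative placeholder, not the LS89 parameterization.

```python
# Sketch: B-spline knot insertion enlarges the set of control points (the
# design variables) without altering the current shape. The curve here is a
# placeholder, not the LS89 blade parameterization.
import numpy as np
from scipy import interpolate

# Fit a cubic B-spline through some sample points.
x = np.linspace(0.0, 1.0, 9)
y = np.sin(2 * np.pi * x) + 0.3 * x
tck = interpolate.splrep(x, y, s=0)

# Insert a new knot at an arbitrary parameter location.
tck_refined = interpolate.insert(0.37, tck)

# The refined spline has one more knot and coefficient ...
print("knots before:", len(tck[0]), " after:", len(tck_refined[0]))
print("coeffs before:", len(tck[1]), " after:", len(tck_refined[1]))

# ... but evaluates to the same curve (shape-preserving refinement).
dense = np.linspace(0.0, 1.0, 500)
diff = np.max(np.abs(interpolate.splev(dense, tck) -
                     interpolate.splev(dense, tck_refined)))
print(f"max deviation after knot insertion: {diff:.2e}")
```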

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 106
30 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprised of multiple conflicting objectives with no clear optimal solution. Recent advances in computer science and their consequent constructive influence on the architectural discourse have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary based stochastic search are driven through the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization being favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue comprised of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation factor has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation value indicates that the majority of the population is clustered around the mean and thus that there is limited variation within the population, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
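
A minimal sketch of folding a dispersion signal into fitness assignment is given below: each candidate's distance from the population centroid (its contribution to the population's standard deviation) is blended with its objective score, so selection keeps both fit and diverse individuals. The toy objective and the blend weight are illustrative assumptions, not the paper's urban-tissue setup.

```python
# Sketch: blend each candidate's objective score with its contribution to the
# population's spread (distance from the centroid), so selection preserves
# variation while increasing fitness. Toy objective and weight are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def objective(pop: np.ndarray) -> np.ndarray:
    """Toy design objective to maximize (one value per individual)."""
    return -np.sum((pop - 0.6) ** 2, axis=1)

def select(pop: np.ndarray, n_keep: int, alpha: float = 0.3) -> np.ndarray:
    scores = objective(pop)
    # Contribution of each individual to the population's dispersion.
    spread = np.linalg.norm(pop - pop.mean(axis=0), axis=1)
    z = lambda v: (v - v.mean()) / (v.std() + 1e-12)   # normalize both signals
    blended = (1 - alpha) * z(scores) + alpha * z(spread)
    return pop[np.argsort(blended)[-n_keep:]]

pop = rng.random((60, 4))                              # 60 candidates, 4 genes
survivors = select(pop, n_keep=30)
print("mean objective, all candidates:", round(float(objective(pop).mean()), 3))
print("mean objective, survivors:     ", round(float(objective(survivors).mean()), 3))
print("diversity retained (mean std): ", round(float(survivors.std(axis=0).mean()), 3))
```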

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 130
29 Cognitive Radio in Aeronautic: Comparison of Some Spectrum Sensing Technics

Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki

Abstract:

The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft, and onboard telecom systems. In addition, these systems are bulky in size, weight, and energy consumption. Cognitive radio helps address the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three detection techniques within that use case: the energy detector, the matched filter, and the cyclostationary detector. In the proposed system, the spectrum is allocated dynamically, with each cognitive radio following a cognitive cycle. Spectrum sensing is a crucial step; its goal is to gather data about the surrounding environment, and a cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, the primary user (PU) always has priority to communicate; when a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in that channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radios could be used jointly with it to alleviate spectrum congestion in the aeronautical field. LDACS, for example, is a good candidate; it provides two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference that affects the QoS of the DME system. Spectrum sensing is therefore a key step: it detects spectrum holes by determining whether the primary signal is present in a given frequency channel. A missed detection of the primary user's presence creates interference between PU and SU and seriously degrades the QoS of the legacy radio. In this study, brief definitions, concepts, and the state of the art of cognitive radio are first presented. Then, a study of the three channel detection algorithms in a cognitive radio context is carried out from the point of view of functions, hardware requirements, and signal detection capability in the aeronautical field. The detection problem is modeled for the three methods (energy, matched filter, and cyclostationary), and an algorithmic description of these detectors is given. The performance of the algorithms is then studied and compared. Simulations were carried out in MATLAB, and the results were analyzed using ROC curves for SNR between -10 dB and 20 dB. The three detectors were tested with both synthetic and real-world signals.
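
Of the three techniques compared, the energy detector is the simplest to illustrate. The sketch below is a Python approximation, not the authors' MATLAB code: it applies a CLT-based constant-false-alarm threshold to a synthetic narrowband signal in AWGN, and the BPSK waveform, SNR, and target false-alarm probability are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=7)

def energy_detect(samples, noise_power, prob_false_alarm=0.01):
    """Return True if the average received energy exceeds the CFAR threshold."""
    n = samples.size
    statistic = np.sum(np.abs(samples) ** 2) / n
    # CLT approximation of the detection threshold for a target Pfa,
    # assuming complex Gaussian noise of known power.
    threshold = noise_power * (1.0 + norm.ppf(1.0 - prob_false_alarm) / np.sqrt(n))
    return statistic > threshold

n, noise_power, snr_db = 1024, 1.0, -5.0
signal_power = noise_power * 10 ** (snr_db / 10.0)
primary = np.sqrt(signal_power) * rng.choice([-1.0, 1.0], size=n)   # toy BPSK primary signal
noise = np.sqrt(noise_power / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))

print("busy channel detected:", energy_detect(primary + noise, noise_power))
print("idle channel detected:", not energy_detect(noise, noise_power))
```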

Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio

Procedia PDF Downloads 167
28 Biological Institute Actions for Bovine Mastitis Monitoring in Low Income Dairy Farms, Brazil: Preliminary Data

Authors: Vanessa Castro, Liria H. Okuda, Daniela P. Chiebao, Adriana H. C. N. Romaldini, Harumi Hojo, Marina Grandi, Joao Paulo A. Silva, Alessandra F. C. Nassar

Abstract:

The Biological Institute of Sao Paulo, in partnership with a private company, develops an Animal Health Family Farming Program (Prosaf) that enables communication between smallholder farmers and scientists, on-farm consulting, and lectures, solving health questions that benefit agricultural productivity. In the Vale do Paraiba region, a dairy region of Sao Paulo State in southern Brazil, many farms of this type face several milk quality problems. Most of these farms are profit-based businesses; however, they operate non-technified cattle rearing systems with uncertain veterinary assistance. Feedback from Prosaf showed that the biggest complaints from farmers were low milk production, sick animals and, mainly, loss of selling price due to high somatic cell counts (SCC) and total bacterial counts (TBC). The aims of this study were to improve milk quality, animal hygiene and herd health status by adjusting general management practices and introducing techniques of sanitary control and milk monitoring in five dairy farms in Sao Jose do Barreiro municipality, Sao Paulo State, Brazil, in order to increase their profits. A total of 119 milk samples from 56 animals positive for the California Mastitis Test (CMT) were collected. A positive CMT indicates subclinical mastitis; therefore, laboratory tests were performed on the milk (microbiological, biochemical and antibiogram tests), which detected the presence of Staphylococcus aureus (41.8%), Bacillus sp. (11.8%), Streptococcus sp. (2.1%), nonfermenting, motile and oxidase-negative Gram-negative bacilli (2.1%) and Enterobacter (2.1%). Antibiograms revealed high resistance to gentamicin and streptomycin, probably due to indiscriminate use of antibiotics without veterinary prescription. We suggested improvements to hygiene management across the complete milking and cooling tank system. Using the results of the laboratory tests, animals were properly treated, and the effects observed were better CMT outcomes and lower SCCs and TBCs, leading to an increase in milk pricing. This study will have a positive impact on family farmers in the Sao Paulo State dairy region by improving the competitiveness of their milk in the market.

Keywords: milk, family farming, food quality, antibiogram, profitability

Procedia PDF Downloads 145
27 Findings: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta; A Singapore Case

Authors: Wee Tong Liaw, Elaine Wong Yee Sing

Abstract:

The main objective and focus of this study are to establish the significance of a sustained health promoting workplace on stock and portfolio returns, focusing on companies listed on the Singapore stock exchange and using a two-factor model comprising the single-factor CAPM and a 'health promoting workplace' factor. The 'health promoting workplace' factor represents the excess returns derived between two portfolios of component stocks that, when combined, would represent a top-tier stock market index in Singapore, namely the STI index. The first portfolio represents companies independently assessed under the Singapore HEALTH Award (SHA) to have a sustained and comprehensive health promoting workplace (SHA-STI portfolio), and the second portfolio represents companies that had not been independently assessed (Non-SHA STI portfolio). Since 2001, many companies in Singapore have voluntarily participated in the bi-annual Singapore HEALTH Award initiated by the Health Promotion Board of Singapore (HPB). The SHA is an industry-wide award and assessment process that recognizes employers in Singapore for implementing a comprehensive and sustainable health promotion programme at their workplaces. When using a ten-year holding period instead of a one-year holding period, excess returns of the SHA-STI portfolio over the Non-SHA STI portfolio were consistently observed across all test periods from 2001 to 2013. In addition, when applied to the SHA-STI portfolio, the two-factor model consistently revealed higher explanatory power than the single-factor CAPM across all test periods, both for the portfolio and for all of its individual component stocks. However, with respect to attaining a higher level of achievement in the Singapore HEALTH Award, this study did not show any additional benefit from selecting listed companies that achieved a higher level of the award. Results from this study would give further insights to investors and fund managers alike who intend to consider a health promoting workplace as a risk factor in their stock or portfolio selection process, in particular investors who have a preference for STI component stocks and a longer investment horizon. Key micro factors like management abilities, business development strategies and production capabilities that meet the needs of the market create demand for a company’s product(s) or service(s) and consequently contribute to its top line and profitability. Thereafter, the existence of a sustainable health promoting workplace would be a key catalytic factor in sustaining the productive workforce needed to support the continued success of a profitable business.
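
As an illustration of the two-factor specification, the hedged sketch below regresses a synthetic stock's excess return on a market factor and a 'health promoting workplace' factor and compares the fit with the single-factor CAPM. All series are simulated, so the factor loadings and R-squared values are illustrative only; the actual study uses STI component returns and the SHA-minus-non-SHA portfolio spread as the second factor.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=42)
n_months = 120

market_excess = rng.normal(0.005, 0.04, n_months)    # market factor (CAPM)
health_factor = rng.normal(0.001, 0.02, n_months)    # assumed SHA-STI minus Non-SHA STI spread
stock_excess = (0.002 + 0.9 * market_excess
                + 0.3 * health_factor
                + rng.normal(0.0, 0.03, n_months))   # simulated stock excess return

X = sm.add_constant(np.column_stack([market_excess, health_factor]))
two_factor = sm.OLS(stock_excess, X).fit()
single_factor = sm.OLS(stock_excess, sm.add_constant(market_excess)).fit()

print(two_factor.params)                             # alpha, market beta, health-factor loading
print("R2 two-factor:", round(two_factor.rsquared, 3),
      "vs CAPM:", round(single_factor.rsquared, 3))
```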

Keywords: asset pricing model, company's performance, stock returns, financial risk factor, sustained health promoting workplace

Procedia PDF Downloads 164
26 Polarization as a Proxy of Misinformation Spreading

Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo

Abstract:

Information, rumors, and debates may shape and heavily impact public opinion. In recent years, several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people, i.e., echo chambers, where they reinforce and polarize their opinions. In this way, the potential benefits of exposure to different points of view may be reduced dramatically, and individuals’ views may become more and more extreme. Such a context fosters misinformation spreading, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors, e.g., the hypothetical and hazardous link between vaccines and autism, suggests that social media do have the power to misinform, manipulate, or control public opinion. As an example, current approaches such as debunking efforts or algorithmically driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users’ emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the news consumption behavior of more than 300M Facebook users over a time span of six years (2010-2015). Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate (the British referendum to leave the European Union), where we observe the spontaneous emergence of two well segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debates. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users’ traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.

Keywords: information spreading, misinformation, narratives, online social networks, polarization

Procedia PDF Downloads 284
25 The Financial Impact of Covid 19 on the Hospitality Industry in New Zealand

Authors: Kay Fielden, Eelin Tan, Lan Nguyen

Abstract:

In this research project, data was gathered at a Covid 19 conference held in June 2021 from industry leaders who discussed the impact of the global pandemic on the status of the New Zealand hospitality industry. Panel discussions on financials, human resources, health and safety, and recovery were conducted. The themes explored by the finance panel were customer demographics, hospitality sectors, financial practices, government impact, and cost of compliance. The aim was to see how the hospitality industry has responded to the global pandemic and the steps that have been taken for the industry to recover or sustain its business. The main research question for this qualitative study is: what factors have impacted finances in the New Zealand hospitality industry as a result of Covid 19? For financials, literature has been gathered to study global effects, and this is compared with the data gathered from the discussion panel through the lens of resilience theory. Resilience theory applied to the hospitality industry suggests that the challenges imposed by Covid 19 have been the catalyst for government initiatives, technical innovation, engagement with local communities, and boosting confidence. Transformations arising from these ground shifts have included a move towards sustainability, wellbeing, greater awareness of climate change, and community engagement. Initial findings suggest that there has been a shift in the customer base that has prompted regional accommodation providers to realign their offers and to become more flexible in order to attract and maintain this realigned customer base. Dynamic pricing structures have been required to meet changing customer demographics. Responses have included flexible staffing arrangements, such as sharing staff between different accommodation providers and owners with multiple properties adopting different staffing arrangements, as well as maintaining a good working relationship with the bank and conserving cash. Uncertain times necessitate changing revenue strategies to cope with external factors. Financial support offered by the government has cushioned the financial downturn for many in the hospitality industry, and managed isolation and quarantine (MIQ) arrangements have offered immediate financial relief for the hotels involved. However, there is concern over the long-term effects. Compliance with mandated health and safety requirements has meant that the hospitality industry has streamlined its approach to meeting those requirements and has invested in customer relations to keep paying customers informed of the health measures in place. Initial findings from this study lie within the resilience theory framework and are consistent with findings from the literature.

Keywords: global pandemic, hospitality industry, New Zealand, resilience

Procedia PDF Downloads 94
24 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form; hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally and, hence, the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
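
As a hedged illustration of why likelihood-based estimation can outperform least squares under heavy-tailed, asymmetric errors, the sketch below fits a simple regression by ordinary numerical maximum likelihood under a Student's t error model. It does not reproduce the paper's closed-form modified maximum likelihood estimator for the skew t distribution; the simulated error mixture, degrees of freedom, and starting values are assumptions for illustration only.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(seed=0)
n = 200
x = rng.uniform(0, 10, n)
errors = stats.t.rvs(df=3, size=n, random_state=rng) + 0.5 * stats.expon.rvs(size=n, random_state=rng)
y = 1.0 + 2.0 * x + errors            # heavy-tailed, asymmetric noise around a true slope of 2.0

# Least squares fit (known to be inefficient under heavy tails and skewness).
beta_ls = np.polyfit(x, y, deg=1)

# Maximum likelihood under a t error model with df=3, fitted numerically.
def neg_loglik(params):
    intercept, slope, log_scale = params
    resid = y - intercept - slope * x
    return -np.sum(stats.t.logpdf(resid, df=3, scale=np.exp(log_scale)))

fit = optimize.minimize(neg_loglik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
print("least squares slope :", round(beta_ls[0], 3))
print("t-likelihood slope  :", round(fit.x[1], 3))
```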

Keywords: least squares estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 394
23 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial sector, with an estimated 708.5 billion global non-cash transactions over the reporting period beginning in 2018. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework for a secure decentralised marketplace that allows (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the available data product to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is generated. The framework was demonstrated through a proof-of-concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship with the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
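
The listing, consent, and purchase flow of the market mechanism can be sketched, purely for illustration, as a plain Python model. The class names, the 70/30 revenue split, and the listing identifier below are hypothetical; the actual proof-of-concept is a decentralised Ethereum contract, which is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Listing:
    provider: str            # e.g. a bank or fintech listing anonymised card data
    data_subject: str        # the individual whose transactions generated the data
    price: float
    consent_granted: bool = False
    buyers: list = field(default_factory=list)

class DataMarket:
    def __init__(self):
        self.listings = {}

    def list_data(self, listing_id, provider, data_subject, price):
        self.listings[listing_id] = Listing(provider, data_subject, price)

    def grant_consent(self, listing_id, data_subject):
        listing = self.listings[listing_id]
        if listing.data_subject != data_subject:
            raise PermissionError("only the data subject can grant consent")
        listing.consent_granted = True

    def purchase(self, listing_id, consumer, payment):
        listing = self.listings[listing_id]
        if not listing.consent_granted:
            raise PermissionError("data subject has not consented")
        if payment < listing.price:
            raise ValueError("insufficient payment")
        listing.buyers.append(consumer)
        # Revenue sharing between provider and data subject is a design choice;
        # a 70/30 split is assumed purely for illustration.
        return {listing.provider: 0.7 * payment, listing.data_subject: 0.3 * payment}

market = DataMarket()
market.list_data("txn-batch-001", provider="BankA", data_subject="subject-42", price=10.0)
market.grant_consent("txn-batch-001", data_subject="subject-42")
print(market.purchase("txn-batch-001", consumer="analytics-co", payment=10.0))
```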

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 70
22 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PA)s and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics, which were utilized in design and development for (1) information retrieval and categorization, (2) item development and (3) estimation of interactions among variables and the likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by a Cronbach’s α of .81, in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations practicing within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items) and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains. The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR at the beginning of outpatient addictions counseling. The (PA)s were written in simple words and were descriptively accurate and concise. We then devised a list of parameters, matched them appropriately to each PA, and devised descriptive parametric (PA)s in a domain-categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols, which made it easy to incorporate parametric (PA)s into algorithms, combinatorics and clusters to develop larger information grids. Using matching combinatorics, we took the paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and to trace relationships of maximum likelihoods.
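
The reliability figure reported above (Cronbach’s α of .81 for the seven Likert belief items) can be reproduced conceptually with a short calculation. The sketch below uses a synthetic response matrix rather than the study's clinician data, so the resulting value is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n_respondents, n_items = 40, 7
latent = rng.normal(size=(n_respondents, 1))            # shared attitude driving all items
items = np.clip(np.rint(3 + latent + rng.normal(0.0, 0.8, (n_respondents, n_items))), 1, 5)

def cronbach_alpha(item_scores):
    """item_scores: respondents x items matrix of Likert responses."""
    k = item_scores.shape[1]
    sum_item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - sum_item_variances / total_variance)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```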

Keywords: psychometric, parametric, domains, grids, therapists

Procedia PDF Downloads 274
21 Research of the Factors Affecting the Administrative Capacity of Enterprises in the Logistic Sector of Bulgaria

Authors: R. Kenova, K. Anguelov, R. Nikolova

Abstract:

The human factor plays a major role in boosting the competitive capacity of enterprises, and this is of particular importance when it comes to logistic companies. On the one hand, they should be strictly compliant with legislation; on the other hand, they should be competitive in terms of pricing and delivery timelines. Moreover, their policies should allow them to be as flexible as possible. All these circumstances pose very serious challenges for the qualification, motivation and experience of the human resources working in logistic companies or in the logistic departments of trade and industrial enterprises. Bulgaria's geographic location gives it some specific competitive advantages in the transport of goods between Europe and Asia, and a number of logistic companies operate in this sphere in Bulgaria. In the current paper, the authors aim to establish the condition of the administrative capacity and human resources in the logistic companies and logistic departments of trade and industrial companies in Bulgaria in order to propose guidelines for improving their effectiveness. Through independent empirical research conducted in Bulgarian logistic, trade and industrial enterprises, the authors investigate both the degree of impact and the interdependence of various factors that characterize administrative capacity. The study is conducted with a prepared questionnaire, administered as direct interviews with the respondents. The sample comprises 50 respondents: general managers of industrial or trade enterprises; logistic managers of industrial or trade enterprises; general managers of forwarding companies, with either their own or hired transport; experts from the Bulgarian association of logistics; logistic lobbyists; and scientists in the relevant area. The data were gathered over 3 months, then arranged with specialized software and analyzed against preset criteria. Based on the results of this methodological toolbox, it can be claimed that there is a correlation between the individual criteria, and a relationship between administrative capacity and the other factors that determine the competitiveness of the studied companies is established. In this paper, the authors present results of the empirical research concerning the number of staff and the workload in the logistic departments of the enterprises. The experience related to the management of logistic processes and the competence of human resources are also discussed. Moreover, the overload level of logistic specialists is analyzed as one of the main risk factors for making mistakes and losing clients. The paper stands behind the thesis that forming an effective and efficient administrative capacity, based on the number, qualification, experience and motivation of the staff in logistic companies, is indispensable. The paper ends with recommendations about the qualification and experience of the specialists in logistic departments; providing effective and efficient administrative capacity in the logistic departments; and the interdependence of the human factor and the other factors that influence enterprise competitiveness.

Keywords: administrative capacity, human resources, logistic competitiveness, staff qualification

Procedia PDF Downloads 148
20 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface

Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis

Abstract:

In the context of the postgraduate course Productive Design, in the Department of Interior Architecture of the University of West Attica in Athens, and under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and then analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students designed it through a parametric model in the Grasshopper3D algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement to a polymorphic space divider, the use of a coil-type joint was studied. In the Grasshopper program, the simulation of the desired motion for the polymorphic surface was tested by applying a coil along a sinusoidal curve and along a spiral curve. It was important throughout the process that the number of points corresponding to the nodes of the real object, and the distances between them, remain constant, and that the elasticity of the construction be achieved through a modular movement of the coil rather than through an elastic element (material) at the nodes. Using a mesh (a repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element contributes to its coherence and stability. Depending on the positions of the elements in terms of the level of support, different perspectives are created in the visual perception of the adjacent space. For the implementation of the model at 1:3 scale (0.50 m x 2.00 m), the load-bearing structure uses aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. The filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were selected to function as filling elements, and a different engraved facet was applied to support their assembly. The nodes have holes through which the rods pass, while their connection point with the patterns has a half-carved recess, and the patterns have a corresponding recess. The nodes are of two different types depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter and attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
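
The core geometric operation described above, wrapping a coil around a sinusoidal spine while keeping the number of nodes fixed, can be sketched outside Grasshopper as well. The numpy example below uses illustrative node counts, amplitudes, and radii rather than the prototype's dimensions.

```python
import numpy as np

n_nodes, coil_turns, coil_radius = 200, 12, 0.08

t = np.linspace(0.0, 2.0, n_nodes)                                            # parameter along the divider
path = np.column_stack([t, 0.25 * np.sin(2 * np.pi * t), np.zeros_like(t)])   # sinusoidal spine

# Local frame along the spine: unit tangent plus two perpendicular directions.
tangent = np.gradient(path, t, axis=0)
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
up = np.tile([0.0, 0.0, 1.0], (n_nodes, 1))
side = np.cross(tangent, up)
side /= np.linalg.norm(side, axis=1, keepdims=True)

# Coil: rotate around the spine while travelling along it; the node count stays
# fixed, so flexing the spine changes the form without adding or removing joints.
phase = 2 * np.pi * coil_turns * t / t[-1]
coil = path + coil_radius * (np.cos(phase)[:, None] * side + np.sin(phase)[:, None] * up)

print(coil.shape)   # (200, 3) points, ready to be lofted into a supporting mesh
```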

Keywords: polymorphic, locomotion, sinusoidal curves, parametric

Procedia PDF Downloads 100