Search results for: factor models
10143 Social Business Models: When Profits and Impacts Are Not at Odds
Authors: Elisa Pautasso, Matteo Castagno, Michele Osella
Abstract:
In the last decade, the emergence of new social needs as an effect of the economic crisis has stimulated the flourishing of business endeavours characterised by explicit social goals. Social start-ups, social enterprises, or Corporate Social Responsibility operations carried out by traditional companies are quintessential examples in this regard. This paper analyses these kinds of initiatives in order to discover the main characteristics of social business models and to provide insights to social entrepreneurs for developing or improving their strategies. The research is conducted through the integration of a literature review and case study analysis and, based on the recognition of the importance of both profits and social impacts as the key success factors for a social business model, proposes a framework for identifying indicators suitable for measuring the social impacts generated.
Keywords: business model, case study, impacts, social business
Procedia PDF Downloads 349
10142 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System
Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich
Abstract:
The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This occurs because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required, and decrease driving skills and abilities. These new problems need to be studied in order to provide road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his or her actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to more accurately model various aspects of the human factor in different road situations. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P.K. Anokhin's theory of functional systems, which is a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the parameters of the driver's action results from the expected values. The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, for future improvements in driving safety, and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.
Keywords: automated vehicle, driver behavior, human factors, human-machine system
Procedia PDF Downloads 146
10141 Influence of Percentage and Melting Temperature of Phase Change Material on the Thermal Behavior of a Hollow-Brick
Authors: Zakaria Aketouane, Mustapha Malha, Abdellah Bah, Omar Ansari, Mohamed Asbik
Abstract:
The present paper deals with the thermal performance of a hollow-brick filled with Phase Change Material (PCM). The main objective is to study the effect of the percentage and melting temperature of the PCM on the thermal inertia and internal surface temperature of the hollow-brick. A numerical model based on the heat transfer equation and the apparent heat capacity method has been validated against an experimental study from the literature. The results show that increasing the percentage of the PCM has a significant effect on the time lag and decrement factor that define thermal inertia; the internal temperature is reduced by 1.36°C to 5.39°C for PCM percentages from 11% to 71%, in comparison with a brick without PCM. In addition, an appropriate melting temperature of 37°C has been deduced for the horizontal wall orientation in Rabat, in comparison to 27°C and 47°C.
Keywords: appropriate melting temperature, decrement factor, phase change material, thermal inertia, time lag
Procedia PDF Downloads 235
10140 Evaluating the Factors Controlling the Hydrochemistry of Gaza Coastal Aquifer Using Hydrochemical and Multivariate Statistical Analysis
Authors: Madhat Abu Al-Naeem, Ismail Yusoff, Ng Tham Fatt, Yatimah Alias
Abstract:
Groundwater in the Gaza strip is increasingly being exposed to anthropic and natural factors that have seriously impacted the groundwater quality. Physiochemical data of groundwater can offer important information on changes in groundwater quality that can be useful in improving water management tactics. Integrative hydrochemical and statistical techniques (hierarchical cluster analysis (HCA) and factor analysis (FA)) have been applied to ten physiochemical parameters of 84 samples collected in 2000/2001, using STATA, AquaChem, and Surfer software, to: 1) provide valuable insight into the salinization sources and the hydrochemical processes controlling the chemistry of the groundwater, and 2) differentiate the influence of natural processes and man-made activities. The recorded large diversity in water facies, with the Na-Cl type dominant, reveals a highly saline aquifer impacted by multiple complex hydrochemical processes. Based on WHO standards, only 15.5% of the wells were suitable for drinking. HCA yielded three clusters. Cluster 1 is the highest in salinity, mainly due to the impact of Eocene saline water invasion mixed with human inputs. Cluster 2 is the lowest in salinity, also due to Eocene saline water invasion but mixed with recent rainfall recharge, limited carbonate dissolution and nitrate pollution. Cluster 3 is similar in salinity to Cluster 2, but with a high diversity of facies due to the impact of many sources of salinity, such as seawater invasion, carbonate dissolution and human inputs. Factor analysis yielded two factors accounting for 88% of the total variance. Factor 1 (59%) is a salinization factor demonstrating the mixing contribution of natural saline water with human inputs. Factor 2 measures hardness and pollution and explains 29% of the total variance. The negative relationship between NO3- and pH may reveal a denitrification process in a heavily polluted aquifer recharged by limited oxygenated rainfall. Multivariate statistical analysis combined with hydrochemical analysis indicates that the main factors controlling groundwater chemistry were Eocene saline invasion, seawater invasion, sewage invasion and rainfall recharge, and that the main hydrochemical processes were base ion and reverse ion exchange processes with clay minerals (water-rock interactions), nitrification, carbonate dissolution and a limited denitrification process.
Keywords: dendrogram and cluster analysis, water facies, Eocene saline invasion and sea water invasion, nitrification and denitrification
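For illustration only (not the authors' code), the following minimal Python sketch shows the two statistical steps described above — hierarchical cluster analysis and a two-factor factor analysis of standardized physiochemical parameters. The parameter names and randomly generated data are stand-ins for the 84 well samples.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
params = ["Na", "Cl", "Ca", "Mg", "HCO3", "SO4", "NO3", "pH", "EC", "K"]  # assumed parameter set
df = pd.DataFrame(rng.normal(size=(84, len(params))), columns=params)     # stand-in for well data

X = StandardScaler().fit_transform(df.values)

# Hierarchical cluster analysis (Ward linkage), cut into three clusters
Z = linkage(X, method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")
print("samples per cluster:", np.bincount(clusters)[1:])

# Factor analysis with two retained factors (e.g. salinization vs. hardness/pollution)
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=params, columns=["Factor1", "Factor2"])
print(loadings.round(2))
```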
Procedia PDF Downloads 365
10139 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan
Authors: Souad Romdhane, Lotfi Belkacem
Abstract:
When referring to actuarial analysis of lifetimes, only models accounting for observable risk factors have been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study
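As a hedged sketch of the idea (not the paper's code or estimation methods), the snippet below simulates lifetimes with a gamma-distributed shared frailty and fits a classical Cox model with the lifelines package; ignoring the frailty typically attenuates the covariate effect. All parameter values are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_groups, group_size = 200, 5
theta, beta, h0 = 0.5, 0.7, 0.05             # frailty variance, covariate effect, baseline hazard

z = rng.gamma(shape=1/theta, scale=theta, size=n_groups)       # mean-one gamma frailty per group
frailty = np.repeat(z, group_size)
x = rng.binomial(1, 0.5, size=n_groups * group_size)           # e.g. gender indicator
T = rng.exponential(1.0 / (frailty * h0 * np.exp(beta * x)))   # exponential baseline lifetimes
C = rng.exponential(30, size=T.size)                           # random censoring
df = pd.DataFrame({"time": np.minimum(T, C), "event": (T <= C).astype(int), "x": x})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # the estimated coefficient is typically biased toward zero relative to 0.7
```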
Procedia PDF Downloads 359
10138 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method
Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat
Abstract:
Electrical Discharge Machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in a dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when it is applied to multi-factor and multi-response situations. A design of experiments (DOE) technique is used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors, such as pulse on time, gap voltage, flushing pressure, input current and duty cycle, on the material removal and surface roughness is carried out using a central composite design. The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. The insignificant coefficients are eliminated from these models by using Student's t test and the F test for goodness of fit. CCD is first used to determine the optimal factors of the electro-discharge machining (EDM) process for maximizing the MRR. The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the electro-discharge machining (EDM) process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the electro-discharge machining (EDM) process.
Keywords: electric discharge machining (EDM), modeling, optimization, CCRD
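For illustration only (not the paper's analysis), the sketch below fits a second-order response-surface model with interaction terms to CCD-style runs and searches for the factor settings that maximize MRR. The coded factor levels and the synthetic response are assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)
factors = ["pulse_on_time", "gap_voltage", "flushing_pressure", "current", "duty_cycle"]
X = rng.uniform(-1, 1, size=(32, len(factors)))               # coded factor levels of the runs
y = 5 + 2*X[:, 3] - 1.5*X[:, 3]**2 + 0.8*X[:, 0]*X[:, 3] + rng.normal(0, 0.1, 32)  # synthetic MRR

poly = PolynomialFeatures(degree=2, include_bias=True)        # quadratic + interaction terms
model = LinearRegression().fit(poly.fit_transform(X), y)

# Maximize the predicted MRR inside the coded experimental region [-1, 1]^5
res = minimize(lambda v: -model.predict(poly.transform(v.reshape(1, -1)))[0],
               x0=np.zeros(len(factors)), bounds=[(-1, 1)] * len(factors))
print(dict(zip(factors, res.x.round(2))), "predicted max MRR:", round(-res.fun, 3))
```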
Procedia PDF Downloads 341
10137 Deficits and Solutions in the Development of Modular Factory Systems
Authors: Achim Kampker, Peter Burggräf, Moritz Krunke, Hanno Voet
Abstract:
As a reaction to current challenges in factory planning, many companies think about introducing factory standards to lower planning times and decrease planning costs. If these factory standards are set up with a high level of modularity, they are defined as modular factory systems. This paper deals with the main current problems in the application of modular factory systems in practice and presents a solution approach with its basic models. The methodology is based on methods from factory planning but also uses the tools of other disciplines, such as product development or technology management, to deal with the high complexity that the development of modular factory systems implies. The four basic models that such a methodology has to contain are introduced and pointed out.
Keywords: factory planning, modular factory systems, factory standards, cost-benefit analysis
Procedia PDF Downloads 595
10136 Forecast Combination for Asset Classes: Insights on Market Efficiency and Arbitrage
Authors: Rodrigo Baggi Prieto Alvarez, Jorge Miguel Bravo
Abstract:
Exchange-Traded Funds (ETFs) have transformed asset allocation, allowing investors to gain exposure to diverse asset classes with a single instrument. In turn, forecast combination models have emerged as advantageous methods for improving prediction accuracy. While the Efficient Market Hypothesis (EMH) posits that prices fluctuate randomly, making abnormal returns unattainable, empirical evidence reveals autocorrelation in stock returns, challenging the EMH's strict interpretation. This raises the question of whether econometric models, machine learning methods and forecast combinations can predict asset prices more effectively. Also, comparing forecasts with futures market prices may reveal potential arbitrage opportunities, offering insights into market inefficiencies. Using ETF indices from January 1st, 2015, to September 30th, 2024, across equity markets (S&P 500, Russell 2000, MSCI Developed Markets and MSCI Emerging Markets), fixed income (7-10 Year Treasury Bond, Developed Markets Treasury Bond, Emerging Markets Treasury Bond and U.S. Corporate Bonds), commodities (Gold Shares ETF) and crypto (ProShares Bitcoin ETF), this paper tests the predictive accuracy of traditional econometric models (ARIMA, ETS), machine learning (SVM, Random Forest, XGBoost) and forecast combinations (ARIMA-SVR, ARIMA-ANN, Ridge Regression and LASSO). Preliminary results suggest that ensemble methods can indeed outperform simple models, indicating that combinations like Ridge Regression and LASSO are superior to econometric and machine learning models taken individually. Also, prediction accuracy is better for fixed income ETFs, in line with the lower volatility of these assets, while the models show higher forecast errors for crypto and equity ETFs. Finally, initial comparisons between forecasts and futures market prices reveal potential inefficiencies, suggesting opportunities for spot-futures index arbitrage. By providing empirical evidence on the application of forecasting models to a significant group of financial assets, these findings contribute to discussions on market efficiency and highlight the role of ensemble methods in improving asset price predictability and portfolio management.
Keywords: ETF, asset prediction, forecast combination, EMH, spot-futures index arbitrage
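As an illustrative sketch only (not the paper's setup), the snippet below combines one-step-ahead forecasts from several base models with Ridge and LASSO meta-learners; the base forecasts and the target series are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.01, 500))                        # stand-in for an ETF (log) price path
base = np.column_stack([y + rng.normal(0, s, 500)              # stand-ins for ARIMA / SVR / RF forecasts
                        for s in (0.02, 0.03, 0.05)])

split = 400                                                    # train the combination, test out of sample
ridge = RidgeCV(alphas=np.logspace(-4, 2, 25)).fit(base[:split], y[:split])
lasso = LassoCV(cv=5).fit(base[:split], y[:split])

for name, m in [("Ridge", ridge), ("LASSO", lasso)]:
    rmse = np.sqrt(np.mean((m.predict(base[split:]) - y[split:]) ** 2))
    print(name, "combination weights:", m.coef_.round(3), "out-of-sample RMSE:", rmse.round(4))
```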
Procedia PDF Downloads 0
10135 Improving the Performance of Back-Propagation Training Algorithm by Using ANN
Authors: Vishnu Pratap Singh Kirar
Abstract:
An Artificial Neural Network (ANN) can be trained using backpropagation (BP). It is the most widely used algorithm for supervised learning with multi-layered feed-forward networks, and efficient learning by the BP algorithm is required for many practical applications. The BP algorithm calculates the weight changes of artificial neural networks, and a common approach is to use a two-term algorithm consisting of a learning rate (LR) and a momentum factor (MF). The major drawbacks of the two-term BP learning algorithm are the problems of local minima and slow convergence speeds, which limit the scope for real-time applications. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm. However, the PF term also reduces the convergence of the BP algorithm, and criteria for evaluating convergence are required to facilitate the application of the three-term BP algorithm. Although these two aspects seem to be closely related, as described later, we summarize various improvements to overcome the drawbacks. Here we compare the different methods of convergence of the new three-term BP algorithm.
Keywords: neural network, backpropagation, local minima, fast convergence rate
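A minimal sketch of a three-term weight update of the kind discussed above (learning rate, momentum factor, proportional factor), shown for a single linear neuron on toy data. The formulation of the third term — taken here as proportional to the error signal projected onto the inputs — and all coefficient values are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=200)

w = np.zeros(3)
prev_dw = np.zeros(3)
lr, mf, pf = 0.01, 0.9, 0.002          # learning rate, momentum factor, proportional factor

for epoch in range(100):
    err = y - X @ w                    # output error e
    grad = -X.T @ err / len(y)         # gradient of the mean squared error
    # three-term update: gradient term + momentum term + error-proportional term
    dw = -lr * grad + mf * prev_dw + pf * (X.T @ err / len(y))
    w, prev_dw = w + dw, dw

print("learned weights:", w.round(3))
```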
Procedia PDF Downloads 498
10134 Experimental and Numerical Investigation on the Torque in a Small Gap Taylor-Couette Flow with Smooth and Grooved Surface
Authors: L. Joseph, B. Farid, F. Ravelet
Abstract:
Fundamental studies have been performed on bifurcation, instabilities and turbulence in Taylor-Couette flow and applied to many engineering applications such as astrophysical models of accretion disks, shrouded fans, and electric motors. Assessing the performance of such rotating machinery requires a better understanding of the fluid flow distribution in order to quantify the power losses and the heat transfer distribution. The present investigation focuses on Taylor-Couette flow with a high gap ratio and high rotational speeds, for smooth and grooved surfaces. So far, little work has been done in a very narrow gap with very high rotation rates and, to the best of our knowledge, none with this combination and a grooved surface. We study numerically the turbulent flow between two coaxial cylinders, where R1 and R2 are the inner and outer radii respectively, and only the inner cylinder rotates. The gap between the rotor and the stator varies between 0.5 and 2 mm, which corresponds to a radius ratio η = R1/R2 between 0.96 and 0.99 and an aspect ratio Γ = L/d between 50 and 200, where L is the length of the rotor and d the gap between the two cylinders. The scaling of the torque with the Reynolds number is determined at different gaps for different smooth and grooved surfaces (and also with different numbers of grooves). The fluid in the gap is air. Re varies between 8000 and 30000. Another dimensionless parameter that plays an important role in distinguishing the flow regime is the Taylor number, which corresponds to the ratio between the centrifugal forces and the viscous forces (from 6.7 × 10⁵ to 4.2 × 10⁷). The torque is first evaluated with RANS and U-RANS models and compared to empirical models and experimental results. A mesh convergence study has been done for each rotor-stator combination, and the torque results are compared for different meshes in two dimensions. For the smooth surfaces, the models used overestimate the torque compared to the empirical equations that exist in the bibliography. The models closest to the empirical ones are those solving the equations near the wall. The greatest torque is achieved with the grooved surface. The tangential velocity in the gap was always higher between the rotor and the stator and not at the rotor wall; it was also greater in the grooves, in the recirculation zones. In order to avoid endwall effects, long cylinders are used in our setup (100 mm), and the torque is measured by a co-rotating torquemeter. The rotor is driven by the air turbine of an automotive turbo-compressor for high angular velocities. The experimental measurements cover rotational speeds of up to 50,000 rpm, and the first experimental results are in agreement with the numerical ones. Currently, a quantitative study is being performed on the grooved surface to determine the effect of the number of grooves on the torque, experimentally and numerically.
Keywords: Taylor-Couette flow, high gap ratio, grooved surface, high speed
Procedia PDF Downloads 407
10133 Ancelim: Health System Restoration Protocol for Cancer Patients
Authors: Mark Berry
Abstract:
A number of studies have identified several factors involved in the malignant progression of cancer cells. The primary modulator driving inflammation in these transformed cells has been identified as the transcription factor known as nuclear factor-κB. This essential regulator of inflammation and of the development of cancer, combined with a microenvironment of inflammation and signaling molecules, plays a major role in the malignant progression of cancer, and this progression is the result of the mutagenic predisposition of persistent substances that combat infection at tumor sites and other areas of chronic inflammation. Inflammation-induced tumors and their inflammatory cells and regulators may be the primary source of metastasis of tumor cells through angiogenesis. Previous research on cytokines and chemokines, including their downstream targets, has been the focus of the cancer/inflammation connection. The identification of the biological mechanisms of other proteins vital to the inflammation cascade, and of their interactions, is crucial to novel and effective therapeutic protocols for the treatment of inflammation-induced cancers. The Ancelim HSRP Protocol is just such a therapeutic intervention.
Keywords: ancelim, cancer, inflammation, tumor
Procedia PDF Downloads 545
10132 Enhancing Patch Time Series Transformer with Wavelet Transform for Improved Stock Prediction
Authors: Cheng-yu Hsieh, Bo Zhang, Ahmed Hambaba
Abstract:
Stock market prediction has long been an area of interest for both expert analysts and investors, driven by its complexity and the noisy, volatile conditions it operates under. This research examines the efficacy of combining the Patch Time Series Transformer (PatchTST) with wavelet transforms, specifically focusing on Haar and Daubechies wavelets, in forecasting the adjusted closing price of the S&P 500 index for the following day. By comparing the performance of the augmented PatchTST models with traditional predictive models such as Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers, this study highlights significant enhancements in prediction accuracy. The integration of the Daubechies wavelet with PatchTST notably excels, surpassing other configurations and conventional models in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE). The success of the PatchTST model paired with Daubechies wavelet is attributed to its superior capability in extracting detailed signal information and eliminating irrelevant noise, thus proving to be an effective approach for financial time series forecasting.
Keywords: deep learning, financial forecasting, stock market prediction, patch time series transformer, wavelet transform
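As an illustrative sketch only (not the authors' pipeline), the snippet below denoises a price-like series with a Daubechies wavelet using PyWavelets before handing it to a forecaster such as PatchTST. The series, decomposition level and threshold rule are assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1024)))   # stand-in for adjusted close prices

coeffs = pywt.wavedec(prices, wavelet="db4", level=3)          # Daubechies-4 decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                 # noise estimate from the finest detail level
thr = sigma * np.sqrt(2 * np.log(len(prices)))                 # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

denoised = pywt.waverec(coeffs, wavelet="db4")[: len(prices)]
# `denoised` (or the per-level coefficients) can then be windowed into patches
# and fed to the PatchTST model in place of the raw series.
print("residual std:", float(np.std(prices - denoised)))
```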
Procedia PDF Downloads 50
10131 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method
Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson
Abstract:
Today, there are many applications that use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial examples attack. In computer vision, an adversarial example is an image that is intentionally designed to cause the machine learning model to misclassify it. One very well-known method used to attack the Convolution Neural Network (CNN) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find the perturbation that can fool the CNN using the gradient of the cost function of the CNN. In this paper, we introduce a novel model that can attack the Regional-Convolution Neural Network (R-CNN) using FGSM. We first extract the regions that are detected by the R-CNN, and then we resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original image to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model could drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
Keywords: adversarial examples, attack, computer vision, image processing
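For illustration only (not the authors' full R-CNN pipeline), the sketch below applies the FGSM step to one cropped, resized region using a small stand-in CNN classifier in PyTorch. The network, epsilon value and placeholder region are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small stand-in classifier for a region crop; the real pipeline would use the CNN under attack
cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(8, 20))            # 20 object classes (placeholder)
cnn.eval()

region = torch.rand(1, 3, 224, 224, requires_grad=True)        # resized region crop (placeholder)
label = cnn(region).argmax(1)                                  # class currently predicted for the region

loss = F.cross_entropy(cnn(region), label)
loss.backward()                                                # gradient of the cost w.r.t. the input

eps = 0.03
adv_region = (region + eps * region.grad.sign()).clamp(0, 1).detach()   # FGSM perturbation
print("prediction changed:", bool(cnn(adv_region).argmax(1) != label))
```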
Procedia PDF Downloads 193
10130 Polymerization: An Alternative Technology for Heavy Metal Removal
Authors: M. S. Mahmoud
Abstract:
In this paper, the adsorption performance of a novel environmentally friendly material, calcium alginate gel beads, is reported on as a non-conventional technique for the successful removal of copper ions from aqueous solution. Batch equilibrium studies were carried out to evaluate the adsorption capacity and process parameters such as pH, adsorbent dosage, initial metal ion concentration, stirring rate and contact time. It was observed that the optimum pH for maximum copper ion adsorption was pH 5.0. For all contact times, an increase in copper ion concentration resulted in a decrease in the percentage of copper ions removed. The Langmuir and Freundlich isotherm models were used to describe the experimental adsorption data. The adsorbent was characterized using Fourier transform infrared (FT-IR) spectroscopy and transmission electron microscopy (TEM).
Keywords: adsorption, alginate polymer, isothermal models, equilibrium
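As an illustrative sketch only (not the authors' analysis), the snippet below fits the Langmuir and Freundlich isotherms to batch equilibrium data with scipy; the Ce/qe values are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5., 10., 20., 40., 80., 120.])       # equilibrium Cu(II) concentration (mg/L), assumed
qe = np.array([8.1, 14.2, 22.5, 30.8, 36.0, 38.2])  # uptake at equilibrium (mg/g), assumed

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

p_l, _ = curve_fit(langmuir, Ce, qe, p0=[40, 0.1])
p_f, _ = curve_fit(freundlich, Ce, qe, p0=[5, 2])

for name, func, p in [("Langmuir", langmuir, p_l), ("Freundlich", freundlich, p_f)]:
    ss_res = np.sum((qe - func(Ce, *p)) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(name, "parameters:", p.round(3), "R2:", round(r2, 4))
```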
Procedia PDF Downloads 448
10129 New Moment Rotation Model of Single Web Angle Connections
Authors: Zhengyi Kong, Seung-Eock Kim
Abstract:
Single angle connections, which are bolted to the beam web and the column flange, are studied to investigate their moment-rotation behavior. Elastic–perfectly plastic material behavior is assumed. ABAQUS software is used to analyze the nonlinear behavior of a single angle connection. The same geometric and material conditions as in Yanglin Gong's test are used for verifying the finite element models. Since Kishi and Chen's Power model and Lee and Moon's Log model are accurate only over a limited range, simpler and more accurate hyperbolic function models are proposed. An equation for calculating the rotation at the ultimate moment is proposed for the first time.
Keywords: finite element method, moment and rotation, rotation at ultimate moment, single-web angle connections
Procedia PDF Downloads 431
10128 Synthesis and Performance Adsorbent from Coconut Shells Polyetheretherketone for Natural Gas Storage
Authors: Umar Hayatu Sidik
Abstract:
The natural gas vehicle represents a cost-competitive, lower-emission alternative to the gasoline-fuelled vehicle. The immediate challenge that confronts natural gas is increasing its energy density. This paper addresses the question of energy density by reviewing the storage technologies for natural gas with an improved adsorbent. Technical comparisons are made between storage systems containing adsorbent and conventional compressed natural gas, based on the associated number of moles contained in Compressed Natural Gas (CNG) and Adsorbed Natural Gas (ANG) systems. We also compare gas storage in different cylinder types (1, 2, 3 and 4) based on weight factor and storage capacity. For the storage tank system, we discuss how carbon adsorbents, when used in CNG tanks, offer a means of increasing onboard fuel storage and, thereby, the driving range of the vehicle. It is confirmed that the density of the stored gas in ANG is higher than that of compressed natural gas (CNG) operated at the same pressure. The obtained experimental data were correlated using linear regression analysis with common adsorption kinetic models (pseudo-first order and pseudo-second order) and isotherm models (Sips and Toth). The pseudo-second-order kinetics describe the data best, with a correlation coefficient of 0.9945 at 35 bar. For the adsorption isotherms, the Sips model shows the best fit, with a regression coefficient (R2) of 0.9982 and the lowest RMSD value of 0.0148. The findings revealed the potential of the adsorbent in natural gas storage applications.
Keywords: natural gas, adsorbent, compressed natural gas, adsorption
Procedia PDF Downloads 60
10127 Decision Support System for Diagnosis of Breast Cancer
Authors: Oluwaponmile D. Alao
Abstract:
In this paper, two models have been developed to ascertain the best network needed for the diagnosis of breast cancer. Breast cancer is a disease that requires the attention of the medical practitioner. Experience has shown that misdiagnosis of the disease has been a major challenge in the medical field. Therefore, designing a system with adequate performance will help in making the diagnosis of the disease faster and more accurate. In this paper, two models, a backpropagation neural network and a support vector machine, have been developed. The performance obtained is also compared with that of previously reported algorithms to ascertain the best one.
Keywords: breast cancer, data mining, neural network, support vector machine
Procedia PDF Downloads 347
10126 Defining the Limits of No Load Test Parameters at Over Excitation to Ensure No Over-Fluxing of Core Based on a Case Study: A Perspective From Utilities
Authors: Pranjal Johri, Misbah Ul-Islam
Abstract:
Power transformers are among the most critical and failure-prone entities in an electrical power system. It is an established practice that each design of a power transformer has to undergo numerous type tests for design validation, and routine tests are performed on each and every power transformer before dispatch from the manufacturer's works. Different countries follow different standards for testing transformers. The most common and widely followed standard for power transformers is the IEC 60076 series. Though these standards set strict testing requirements for power transformers, a few aspects of transformer characteristics and guaranteed parameters can only be ensured by some additional tests. Based on certain observations during the routine testing of a transformer and an analysis of data from a large fleet of transformers, three propositions have been discussed and put forward for inclusion in test schedules and standards. The observations in the routine test raised questions about the design flux density of the transformer. In order to ensure that the flux density in any part of the core and yoke does not exceed 1.9 tesla at 1.1 pu as well, the following propositions need to be followed during testing. First, from the data studied, it was evident that the no-load current (NLC) at 1.1 pu is generally approx. 3 times the no-load current at 1 pu voltage. Second, when testing the power factor at 1.1 pu excitation, it must be comparable to the values calculated from the Cold Rolled Grain Oriented steel material curves, including the building factor. Third, a limit of 3% on the difference between Vavg and Vrms during no-load testing should also be applied at voltages higher than rated. An extended over-excitation test is to be done in case the above propositions are observed to be violated during testing.
Keywords: power transformers, no load current, DGA, power factor
Procedia PDF Downloads 104
10125 Possibilities to Evaluate the Climatic and Meteorological Potential for Viticulture in Poland: The Case Study of the Jagiellonian University Vineyard
Authors: Oskar Sekowski
Abstract:
Current global warming causes changes in the traditional zones of viticulture worldwide. During the 20th century, the average global air temperature increased by 0.89˚C. Models of climate change indicate that viticulture, currently concentrated in narrow geographic niches, may move towards the poles, to higher geographic latitudes. Global warming may cause changes in traditional viticulture regions. Therefore, there is a need to estimate the climatic conditions and climate change in areas that are not traditionally associated with viticulture, e.g., Poland. The primary objective of this paper is to prepare a methodology to evaluate the climatic and meteorological potential for viticulture in Poland based on a case study. Moreover, the additional aim is to evaluate the climatic potential of a mesoregion where a university vineyard is located. Daily data of temperature, precipitation, insolation, and wind speed (1988-2018) from the meteorological station located in Łazy, southern Poland, were used to evaluate 15 climatological parameters and indices connected with viticulture. The next steps of the methodology are based on Geographic Information System methods. Topographical factors such as slope gradient and slope exposure were derived using Digital Elevation Models. The spatial distribution of climatological elements was interpolated by ordinary kriging. The values of each factor and index were also ranked and classified. The viticultural potential was determined by integrating two suitability maps, i.e., the topographical and climatic ones, and by calculating the average for each pixel. Data analysis shows significant changes in heat accumulation indices that are driven by increases in maximum temperature, mostly an increasing number of days with Tmax > 30˚C. The climatic conditions of this mesoregion are sufficient for Vitis vinifera viticulture. The values of the indicators and insolation are similar to those in known wine regions located at similar geographical latitudes in Europe. The smallest threat to viticulture in the study area is the occurrence of hail, and the highest is the occurrence of frost in winter. This research provides the basis for evaluating the general suitability and climatologic potential for viticulture in Poland. To characterize the climatic potential for viticulture, it is necessary to assess the suitability of all climatological and topographical factors that can influence viticulture. The methodology used in this case study shows places where there is a possibility to establish vineyards. It may also help wine-makers select grape varieties.
Keywords: climatologic potential, climatic classification, Poland, viticulture
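For illustration only (not the study's GIS workflow), the sketch below interpolates a station-based climatological index onto a grid with ordinary kriging using PyKrige; the station coordinates, index values and grid extent are hypothetical placeholders.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical station data: x/y coordinates (km) and a heat-accumulation index value per station
x = np.array([0.5, 3.2, 7.8, 12.1, 15.4, 18.9])
y = np.array([1.1, 6.4, 2.3, 9.7, 4.8, 11.2])
index = np.array([1250., 1310., 1275., 1190., 1340., 1205.])

ok = OrdinaryKriging(x, y, index, variogram_model="spherical")
gridx = np.arange(0.0, 20.0, 0.5)
gridy = np.arange(0.0, 12.0, 0.5)
z_interp, variance = ok.execute("grid", gridx, gridy)

# z_interp can then be ranked/classified and averaged with a topographic suitability
# layer to obtain a combined viticultural potential map.
print(z_interp.shape, float(z_interp.mean()))
```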
Procedia PDF Downloads 106
10124 Numerical and Experimental Study of Heat Transfer Enhancement with Metal Foams and Ultrasounds
Authors: L. Slimani, A. Bousri, A. Hamadouche, H. Ben Hamed
Abstract:
The aim of this experimental and numerical study is to analyze the effects of acoustic streaming generated by 40 kHz ultrasonic waves on heat transfer in forced convection, with and without 40 PPI aluminum metal foam. Preliminary dynamic and thermal studies were done with COMSOL Multiphase to assess the degree of heat transfer enhancement obtained by inserting a 40 PPI metal foam (10 × 2 × 3 cm) on a heat sink, after having experimentally determined its permeability and Forchheimer coefficient. The results obtained numerically are in accordance with those obtained experimentally, with an enhancement factor of 205% for a velocity of 0.4 m/s compared to an empty channel. The influence of 40 kHz ultrasound on heat transfer was also tested with and without the metallic foam. The results show a remarkable increase in the Nusselt number in an empty channel, with an enhancement factor of 37.5%, while no influence of ultrasound on heat transfer was observed in the presence of the metal foam.
Keywords: acoustic streaming, enhancing heat transfer, laminar flow, metal foam, ultrasound
Procedia PDF Downloads 138
10123 Fusion Models for Cyber Threat Defense: Integrating Clustering, Random Forests, and Support Vector Machines to Against Windows Malware
Authors: Azita Ramezani, Atousa Ramezani
Abstract:
In the ever-escalating landscape of Windows malware, the necessity for pioneering defense strategies becomes undeniable. This study introduces an avant-garde approach fusing the capabilities of clustering, random forests and support vector machines (SVM) to combat the intricate web of cyber threats. Our fusion model triumphs with a staggering accuracy of 98.67 and an equally formidable F1 score of 98.68, a testament to its effectiveness in the realm of Windows malware defense. By deciphering the intricate patterns within malicious code, our model not only raises the bar for detection precision but also redefines the paradigm of cybersecurity preparedness. This breakthrough underscores the potential embedded in the fusion of diverse analytical methodologies and signals a paradigm shift in fortifying against the relentless evolution of Windows malicious threats. As we traverse the dynamic cybersecurity terrain, this research serves as a beacon illuminating the path toward a resilient future where innovative fusion models stand at the forefront of cyber threat defense.
Keywords: fusion models, cyber threat defense, windows malware, clustering, random forests, support vector machines (SVM), accuracy, f1-score, cybersecurity, malicious code detection
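As a hedged sketch of one way such a fusion could be wired (not the authors' exact model), the snippet below appends a K-means cluster label as an extra feature and combines a random forest with an SVM in a soft-voting ensemble; the data are synthetic stand-ins for malware features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10, random_state=0)
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)   # clustering stage
X_aug = np.column_stack([X, clusters])                                      # cluster label as a feature

Xtr, Xte, ytr, yte = train_test_split(X_aug, y, test_size=0.2, stratify=y, random_state=0)
fusion = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                ("svm", SVC(kernel="rbf", probability=True, random_state=0))],
    voting="soft")
fusion.fit(Xtr, ytr)
pred = fusion.predict(Xte)
print("accuracy:", round(accuracy_score(yte, pred), 4), "F1:", round(f1_score(yte, pred), 4))
```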
Procedia PDF Downloads 71
10122 An Eco-Systemic Typology of Fashion Resale Business Models in Denmark
Authors: Mette Dalgaard Nielsen
Abstract:
The paper serves the purpose of providing an eco-systemic typology of fashion resale business models in Denmark while pointing to possibilities to learn from its wisdom during a time when a fundamental break with the dominant linear fashion paradigm has become inevitable. As we transgress planetary boundaries and can no longer continue the unsustainable path of over-exploiting the Earth’s resources, the global fashion industry faces a tremendous need for change. One of the preferred answers to the fashion industry’s sustainability crises lies in the circular economy, which aims to maximize the utilization of resources by keeping garments in use for longer. Thus, in the context of fashion, resale business models that allow pre-owned garments to change hands with the purpose of being reused in continuous cycles are considered to be among the most efficient forms of circularity. Methodologies: The paper is based on empirical data from an ongoing project and a series of qualitative pilot studies that have been conducted on the Danish resale market over a 2-year time period from Fall 2021 to Fall 2023. The methodological framework is comprised of (n) ethnography and fieldwork in selected resale environments, as well as semi-structured interviews and a workshop with eight business partners from the Danish fashion and textiles industry. By focusing on the real-world circulation of pre-owned garments, which is enabled by the identified resale business models, the research lets go of simplistic hypotheses to the benefit of dynamic, vibrant and non-linear processes. As such, the paper contributes to the emerging research field of circular economy and fashion, which finds itself in a critical need to move from non-verified concepts and theories to empirical evidence. Findings: Based on the empirical data and anchored in the business partners, the paper analyses and presents five distinct resale business models with different product, service and design characteristics. These are 1) branded resale, 2) trade-in resale, 3) peer-2-peer resale, 4) resale boutiques and consignment shops and 5) resale shelf/square meter stores and flea markets. Together, the five business models represent a plurality of resale-promoting business model design elements that have been found to contribute to the circulation of pre-owned garments in various ways for different garments, users and businesses in Denmark. Hence, the provided typology points to the necessity of prioritizing several rather than single resale business model designs, services and initiatives for the resale market to help reconfigure the linear fashion model and create a circular-ish future. Conclusions: The article represents a twofold research ambition by 1) presenting an original, up-to-date eco-systemic typology of resale business models in Denmark and 2) using the typology and its eco-systemic traits as a tool to understand different business model design elements and possibilities to help fashion grow out of its linear growth model. By basing the typology on eco-systemic mechanisms and actual exemplars of resale business models, it becomes possible to envision the contours of a genuine alternative to business as usual that ultimately helps bend the linear fashion model towards circularity.
Keywords: circular business models, circular economy, fashion, resale, strategic design, sustainability
Procedia PDF Downloads 59
10121 Cluster Analysis of Retailers’ Benefits from Their Cooperation with Manufacturers: Business Models Perspective
Authors: M. K. Witek-Hajduk, T. M. Napiórkowski
Abstract:
A number of studies have discussed the benefits of retailer-manufacturer cooperation and coopetition. However, there are only a few publications focused on the benefits of cooperation and coopetition between retailers and their suppliers of durable consumer goods, especially in the context of the business models of the cooperating partners. This paper aims to provide a clustering approach to segment retailers selling consumer durables according to the benefits they obtain from their cooperation with key manufacturers, and to differentiate the said retailers in terms of the business models of the cooperating partners. For the purpose of the study, a survey (with a CATI method) collected data on 603 consumer durables retailers present on the Polish market. Retailers are clustered with both hierarchical and non-hierarchical methods. Five distinctive groups of consumer durables retailers are identified (based on the studied benefits) using the two-stage clustering approach. The clusters are then characterized with a set of exogenous variables, the key of which are the business models employed by the retailer and its partnering key manufacturer. The paper finds that a combination of a medium-sized retailer classified as an Integrator with chiefly domestic capital and a manufacturer categorized as a Market Player yields the highest benefits. On the other side of the spectrum is a medium-sized Distributor retailer with solely domestic capital; in this case, the business model of the cooperating manufacturer appears to be irrelevant. This paper is one of the first empirical studies using cluster analysis on primary data to define the types of cooperation between consumer durables retailers and manufacturers – their key suppliers. The analysis integrates the perspective of both retailers' and manufacturers' business models and matches them with individual and joint benefits.
Keywords: benefits of cooperation, business model, cluster analysis, retailer-manufacturer cooperation
Procedia PDF Downloads 256
10120 Risk Assessment of Contamination by Heavy Metals in Sarcheshmeh Copper Complex of Iran Using Topsis Method
Authors: Hossein Hassani, Ali Rezaei
Abstract:
In recent years, the study of soil contamination problems surrounding mines and smelting plants has attracted serious attention from environmental experts. Due to their non-chemical disintegration and nature, these elements are counted as environmentally stable and durable contaminants. The variability of these contaminants in the soil, together with the time and financial limitations on favorable environmental remediation, makes an appropriate grading of these contaminants necessary in order to reduce the risk of irreparable negative consequences for the environment and to support the success of risk management processes. In this study, we use contamination factor risk indices, average concentration, enrichment factor and geoaccumulation indices to evaluate contamination by the metals Pb, Ni, Se, Mo and Zn in the soil of the Sarcheshmeh copper mine area. For this purpose, 120 surface soil samples, up to a depth of 30 cm, were collected from the study area, and the metals were analyzed using the ICP-MS method. Comparison of the concentrations of heavy and potentially toxic elements in the soil samples with the world average for uncontaminated soil and the shale average indicates that the values of Zn, Pb, Ni, Se and Mo are higher than the world average, and only Ni shows a value lower than the shale average. Expert opinions on the relative importance of each indicator were used to assign final weightings to the metals, and the heavy metals were ranked using the TOPSIS approach. This allows efficient environmental proceedings to be carried out, leading to a reduction of the environmental risks from the contaminants. According to the results, Ni, Pb, Mo, Zn, and Se have the highest rate of risk contamination in the soil samples of the study area.
Keywords: contamination coefficient, geoaccumulation factor, TOPSIS techniques, Sarcheshmeh copper complex
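For illustration only (not the study's data or weights), the sketch below implements a standard TOPSIS ranking: rows of the decision matrix are the metals, columns are risk indicators such as contamination factor, enrichment factor and geoaccumulation index, and all indicator values and weights are assumed.

```python
import numpy as np

metals = ["Pb", "Ni", "Se", "Mo", "Zn"]
D = np.array([[3.1, 2.4, 1.8],       # hypothetical indicator values per metal
              [2.8, 2.1, 1.5],
              [2.2, 1.9, 1.2],
              [2.5, 2.0, 1.4],
              [2.6, 2.3, 1.6]])
w = np.array([0.5, 0.3, 0.2])        # expert-assigned weights (assumed)

R = D / np.sqrt((D ** 2).sum(axis=0))        # vector-normalized decision matrix
V = R * w                                    # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # all criteria treated here as "higher = riskier"
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)          # relative closeness to the riskiest (ideal) point

for m, c in sorted(zip(metals, closeness), key=lambda t: -t[1]):
    print(f"{m}: {c:.3f}")
```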
Procedia PDF Downloads 274
10119 A Framework on Data and Remote Sensing for Humanitarian Logistics
Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini
Abstract:
Effective humanitarian logistics operations are a cornerstone of the success of disaster relief operations. However, to be effective, they need to be demand-driven and supported by adequate data for prioritization. Without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in gaining an understanding of the nature and extent of the disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from UAVs (Unmanned Aerial Vehicles) or satellite imagery, which again come with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit by carrying out semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions by following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to have immediately available, accurate and reliable data to support data-driven decision making.
Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making
Procedia PDF Downloads 378
10118 Quality of Bali Beef and Broiler after Immersion in Liquid Smoke on Different Concentrations and Storage Times
Authors: E. Abustam, M. Yusuf, H. M. Ali, M. I. Said, F. N. Yuliati
Abstract:
The aim of this study was to improve the durability and quality of Bali beef (M. Longissimus dorsi) and broiler carcass through the addition of liquid smoke as a natural preservative. This study used the Longissimus dorsi muscle from male Bali cattle aged 3 years, and broiler breast and thigh aged 40 days. The three types of meat were marinated in liquid smoke at concentrations of 0, 5, and 10% for 30 minutes, at a level of 20% of the sample weight (w/w). The samples were stored at 2-5°C for 1 month. The study was designed as a 3 x 3 x 4 factorial experiment based on a completely randomized design with 5 replications: the first factor was meat type (beef, chicken breast and chicken thigh), the second factor was liquid smoke concentration (0, 5, and 10%), and the third factor was storage duration (1, 2, 3, and 4 weeks). The parameters measured were TBA value, total bacterial colonies, water holding capacity (WHC), shear force value both before and after cooking (80°C for 15 min), and cooking loss. The results showed that WHC, shear force value, cooking loss and TBA differed between the three types of meat. At higher concentrations of liquid smoke, the WHC, shear force value, TBA, and total bacterial colonies decreased; at a liquid smoke concentration of 10%, the total bacterial colonies decreased by 57.3% compared with samples untreated with liquid smoke. With longer storage, the total bacterial colonies and WHC increased, while the shear force value and cooking loss decreased. It can be concluded that a 10% concentration of liquid smoke was able to control fat oxidation and bacterial growth in Bali beef and in chicken breast and thigh.
Keywords: Bali beef, chicken meat, liquid smoke, meat quality
Procedia PDF Downloads 392
10117 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models
Authors: Bipasha Sen, Aditya Agarwal
Abstract:
A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages sharing a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), limiting computational parallelization in training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages for building a robust multilingual system. Complex architectural choices based on self-attention networks are made to improve the parallelization, thereby reducing the training time. In this work, we propose Reed, a simple system based on 1D convolutions which uses very short context to improve the training time. To improve the performance of our system, we use raw time-domain speech signals directly as input. This enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCC. We report improvements in training and inference times by at least factors of 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition
Procedia PDF Downloads 123
10116 Electricity Demand Modeling and Forecasting in Singapore
Authors: Xian Li, Qing-Guo Wang, Jiangshuai Huang, Jidong Liu, Ming Yu, Tan Kok Poh
Abstract:
In the power industry, accurate electricity demand forecasting over a certain lead time is important for system operation and control. In this paper, we investigate the modeling and forecasting of Singapore's electricity demand. Several standard models, such as the HWT exponential smoothing model, the ARMA model and ANN models, have been proposed based on historical demand data. We applied them to the Singapore electricity market and propose three refinements, based on simulation, to improve the modeling accuracy. Compared with existing models, our refined model produces better forecasting accuracy. It is demonstrated in the simulation that by adding the forecasting error into the forecasting equation, the modeling accuracy can be improved greatly.
Keywords: power industry, electricity demand, modeling, forecasting
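As a hedged sketch only (not the paper's refined model), the snippet below fits a Holt-Winters exponential smoothing model to half-hourly demand with daily seasonality and applies a simple error-feedback correction of the kind described above; the demand series and the feedback weight are assumptions.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
t = np.arange(48 * 30)                                          # 30 days of half-hourly observations
demand = 5000 + 800 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 60, t.size)

train, test = demand[:-48], demand[-48:]                        # hold out the last day
hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=48).fit()
base_fc = hw.forecast(48)

alpha, corrected, last_err = 0.3, [], 0.0                       # feedback weight (assumed)
for f, obs in zip(base_fc, test):
    corrected.append(f + alpha * last_err)                      # nudge the forecast by the latest error
    last_err = obs - corrected[-1]                              # update once the observation arrives

print("corrected one-step MAE:", round(float(np.mean(np.abs(test - np.array(corrected)))), 1))
```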
Procedia PDF Downloads 640
10115 Learn through AR (Augmented Reality)
Authors: Prajakta Musale, Bhargav Parlikar, Sakshi Parkhi, Anshu Parihar, Aryan Parikh, Diksha Parasharam, Parth Jadhav
Abstract:
AR technology is essentially a development of VR technology that harnesses the power of computers to read the surroundings and create projections of digital models in the real world for the purpose of visualization, demonstration, and education. It has been applied to education, prototyping in product design, the development of medical models, battle strategy in the military and many other fields. Our Engineering Design and Innovation (EDAI) project focuses on the use of augmented reality, visual mapping, and 3D visualization, along with animation and text boxes, to help students get a rough idea of concepts such as flow and mechanical movements that may be hard to visualize at first glance.
Keywords: spatial mapping, ARKit, depth sensing, real-time rendering
Procedia PDF Downloads 63
10114 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models
Authors: Anastasiia Yu. Timofeeva
Abstract:
Two new algorithms for the nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, using goodness-of-fit indices for smoothing parameter selection gives similar results and has an oversmoothing effect.
Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression
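For illustration only (not the paper's algorithms), the sketch below shows a single orthogonal-regression (errors-in-variables) fit of the kind estimated on each linear spline portion above, using scipy's ODR; the simulated data and noise levels are assumptions.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 100)
y_true = 1.5 * x_true + 2.0
x_obs = x_true + rng.normal(scale=0.5, size=x_true.size)    # measurement error in the regressor
y_obs = y_true + rng.normal(scale=0.5, size=x_true.size)    # measurement error in the response

linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])   # linear portion of the spline
data = odr.RealData(x_obs, y_obs, sx=0.5, sy=0.5)
fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
print("orthogonal-regression slope and intercept:", fit.beta.round(3))
```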
Procedia PDF Downloads 416