Search results for: simulation model
250 Green Lean TQM Human Resource Management Practices in Malaysian Automotive Companies
Authors: Noor Azlina Mohd Salleh, Salmiah Kasolang, Ahmed Jaffar
Abstract:
The Green Lean Total Quality Management (LTQM) Human Resource Management (HRM) System comprises HRM practices within an Environmental Management System (EMS) that are integrated with TQM and Lean Manufacturing (LM) principles. HRM is essential, especially in dealing with poorly motivated and less productive employees. The ultimate goal of this system is total human resource development, producing employees who are motivated and able to apply their creativity as part of a Green and Lean TQM organization. A survey questionnaire was developed, distributed to 30 highly active automotive vendors in Malaysia, and analyzed with Minitab v16 and SPSS v17. It was found that companies practicing Green LTQM HRM have generated more revenue and possess R&D capability. However, the number of years since a company's establishment does not affect its openness to adopting new initiatives that can help improve the effectiveness of its operations. The findings also highlight the importance of training, communication and rewards for employees. The Green LTQM HRM practices framework model established in this study is intended to give preliminary insight, especially to companies still looking for a system that can improve their productivity through human resource management. This preliminary study combined four award practices, ISO/TS 16949, the Toyota Production System SAE J4000, the MAJAICO Lean Production System and EMS, focusing on highly active companies that have been involved in the MAJAICO Program and the Proton Vendor Development Program. Future studies can examine the status of other industries as well as case studies pertaining to this system.
Keywords: Automotive Industry, Lean Manufacturing, Operational Engineering Management, Total Quality Management, Environmental Management System.
249 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises
Authors: Jiří F. Urbánek, David Král
Abstract:
Small and Middle Enterprises (SME) have a specific mission, characteristics, and behavior in global business competitive environments. They must respect policy, rules, requirements and standards in all their inherent and outer processes of supply-customer chains and networks. The aim of this paper is to introduce computational assistance that enables the use of the prevailing MS Office environment (SmartArt and similar tools) for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method. In the global environment of SMEs, the method provides the capability to meet commitments regarding the effectiveness of the quality management system in satisfying customer requirements, the continual improvement of the overall performance and efficiency of the organization's and SME's processes, and societal security through continual improvement of planning. The maps of the DYVELOP model, the Blazons, can express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cycling cases and their phases. The Blazons are best comprehended through a live PowerPoint presentation of this paper's mission, an added-value analysis. The crisis management of SMEs must use such cycles to cope successfully with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and of its advanced possibilities for sustainable development.
Keywords: Blazons, computational assistance, DYVELOP method, small and middle enterprises.
248 Automated Fact-Checking By Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study presents a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context from general-purpose, comprehensive and authoritative data; 2) developing a search function to automatically select relevant, new and credible references; 3) focusing on the parts of the representations of a claim and its references that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through a knowledge base, combined with the multi-head attention technique, improves fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches that use manually selected references. Future directions of this study include (i) exploring the knowledge graph in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and (ii) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: Fact checking, claim verification, Deep Learning, Natural Language Processing.
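As a rough illustration of the attention-based fusion step mentioned in the abstract, the sketch below lets a claim embedding attend over a set of reference embeddings with multi-head attention before a small veracity classifier; the embedding size, head count, random tensors and classifier shape are placeholders, not the SAC implementation.

```python
# Hedged sketch: a claim representation attends over reference representations
# with multi-head attention (PyTorch). Dimensions and random "embeddings" are
# illustrative assumptions, not the SAC model described in the abstract.
import torch
import torch.nn as nn

embed_dim, num_heads = 256, 8
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

claim = torch.randn(1, 1, embed_dim)        # (batch, 1 claim token, dim)
references = torch.randn(1, 10, embed_dim)  # (batch, 10 reference sentences, dim)

# The claim attends over the auto-selected references; the weighted summary is
# concatenated with the claim and fed to a small true/false classifier head.
fused, weights = attn(query=claim, key=references, value=references)
classifier = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 2))
logits = classifier(torch.cat([claim, fused], dim=-1).squeeze(1))
print(logits.shape)  # torch.Size([1, 2]) -> veracity scores
```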
247 Influence of the Seat Arrangement in Public Reading Spaces on Individual Subjective Perceptions
Authors: Jo-Han Chang, Chung-Jung Wu
Abstract:
This study involves a design proposal. The objective is to create a seat arrangement model for public reading spaces that enables free arrangement without disturbing the users. Through a subjective perception scale, this study explored whether the distance between seats and the direction of seats influence individual subjective perceptions in a public reading space. The study also analyzed users' subjective perceptions when reading in settings with three seat directions and five distances between seats. The results may be applied to public chair design. This study investigated (a) whether different seat directions and distances between seats influence individual subjective perceptions and (b) the acceptable personal space between two strangers in a public reading space. The results are as follows: (a) the directions of seats and the distances between seats influenced individual subjective perceptions. (b) Subjective evaluation scores were higher for back-to-back seat directions at distances A (10 cm) and B (62 cm) than for face-to-face and side-by-side directions; however, when the seat distance exceeded 114 cm (distance C), no difference existed among the seat directions. (c) For reading in public spaces, when the distance between seats is only 10 cm, we recommend arranging the seats back-to-back to increase user comfort, and face-to-face and side-by-side arrangements should be avoided. When the seat arrangement is limited to a face-to-face design, the distance between seats should be increased to at least 62 cm. Moreover, the distance between seats should be increased to at least 114 cm for side-by-side seats to improve user comfort.
Keywords: Individual Subjective Perceptions, Personal Space, Seat Arrangement.
246 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based On Li-ion Battery and Solar Energy Supply
Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan
Abstract:
Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications, as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for achieving long node and network lifetime. The first strategy is to save as much energy as possible: energy consumption is minimized by switching the node from active mode to sleep mode and by using a routing protocol with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before its batteries are changed.
In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution for extending node lifetime is implemented. Finally, a long-term sensor network test is used to exhibit the functionality of the solar-powered system.
Keywords: ZigBee, Li-ion battery, solar panel, CC2530.
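As a back-of-the-envelope illustration of why an accurate energy model matters for node lifetime, the sketch below estimates battery life for a duty-cycled node; only the 1200 mAh capacity comes from the abstract, while the current draws and duty cycle are assumed figures.

```python
# Rough lifetime estimate for a duty-cycled ZigBee node. Only the 1200 mAh
# battery comes from the abstract; current draws and duty cycle are assumptions.
BATTERY_MAH = 1200.0
I_ACTIVE_MA = 30.0      # assumed RX/TX current of the radio SoC
I_SLEEP_MA = 0.001      # assumed deep-sleep current
DUTY_CYCLE = 0.01       # assumed 1% active time

i_avg_ma = DUTY_CYCLE * I_ACTIVE_MA + (1 - DUTY_CYCLE) * I_SLEEP_MA
lifetime_h = BATTERY_MAH / i_avg_ma
print(f"average current: {i_avg_ma:.3f} mA, lifetime without harvesting: "
      f"{lifetime_h / 24:.0f} days")
```

With solar harvesting sized above the average draw, the battery only bridges nights and cloudy periods, which is the rationale for the hybrid storage scheme described in the abstract.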
245 The Study of Implications on Modern Businesses Performances by Digital Communities: Case of Data Leak
Authors: Asim Majeed, Anwar Ul Haq, Mike Lloyd-Williams, Arshad Jamal, Usman Butt
Abstract:
This study aims to investigate the impact of the M&S customer data leak on digital communities. Modern businesses use digital communities as an important public relations tool for marketing purposes. This form of communication helps companies build better relationships with their customers and also acts as another source of information. The communication between customers and organizations is not regulated, so users may post positive and negative comments. New platforms are being developed on a daily basis, and it is crucial for businesses not only to become familiar with them but also to know how to reach their existing and prospective consumers. Digital communities are the driving force of marketing and communication in modern businesses; they are continuously increasing and developing, and this phenomenon is changing the way marketing is conducted. The current research discusses the implications for M&S business performance after the data leak was exposed on digital communities; users contacted M&S and raised security concerns. M&S closed down its website for a few hours to try to resolve the issue, and the next day made a public apology about the incident. This information proliferated across various digital communities and negatively impacted the M&S brand name, sales and customers. A content analysis approach is used to collect qualitative data from 100 digital bloggers, including social media communities such as Facebook and Twitter. The results and findings provide useful new insights into the nature and form of the security concerns of digital users. The findings have theoretical and practical implications. This research showcases a large corporation utilizing various digital community platforms and can serve as a model for future organizations.
Keywords: Digital, communities, performance, dissemination, implications, data, exploitation.
244 Effect of High-Energy Ball Milling on the Electrical and Piezoelectric Properties of (K0.5Na0.5)(Nb0.9Ta0.1)O3 Lead-Free Piezoceramics
Authors: Chongtham Jiten, K. Chandramani Singh, Radhapiyari Laishram
Abstract:
Nanocrystalline powders of the lead-free piezoelectric material, tantalum-substituted potassium sodium niobate (K0.5Na0.5)(Nb0.9Ta0.1)O3 (KNNT), were produced using a Retsch PM100 planetary ball mill by setting the milling time to 15 h, 20 h, 25 h, 30 h, 35 h and 40 h, at a fixed speed of 250 rpm. The average particle size of the milled powders was found to decrease from 12 nm to 3 nm as the milling time increases from 15 h to 25 h, which is in agreement with the existing theoretical model. An anomalous increase to 98 nm and then a drop to 3 nm in the particle size were observed as the milling time further increases to 30 h and 40 h respectively. Various sizes of these starting KNNT powders were used to investigate the effect of milling time on the microstructure, dielectric properties, phase transitions and piezoelectric properties of the resulting KNNT ceramics. The particle size of the starting KNNT was somewhat proportional to the grain size. As the milling time increases from 15 h to 25 h, the resulting ceramics exhibit enhancement in the values of relative density from 94.8% to 95.8%, room temperature dielectric constant (εRT) from 878 to 1213, and piezoelectric charge coefficient (d33) from 108 pC/N to 128 pC/N. For this range of ceramic samples, grain size refinement suppresses the maximum dielectric constant (εmax), shifts the Curie temperature (Tc) to a lower temperature and the orthorhombic-tetragonal phase transition (Tot) to a higher temperature. Further increase of milling time from 25 h to 40 h produces a gradual degradation in the values of relative density, εRT, and d33 of the resulting ceramics.
Keywords: Ceramics, Dielectric, High-energy milling, Perovskite.
243 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media
Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding
Abstract:
A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e., it depends only on the bulk properties of the disordered granular sample: its sound wave velocity and hence its bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in the size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of the particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and remove the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
Keywords: Discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation.
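A minimal sketch of the kind of simulation described above, assuming a Hertzian contact law F = k·δ^(3/2), a Gaussian size distribution, and velocity-Verlet time stepping; the stiffness, masses, time step and initial pulse are illustrative values, not the paper's parameters.

```python
# Hedged sketch: 1-D polydisperse granular chain with Hertzian contacts,
# integrated with velocity Verlet (NumPy). All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 200
radii = 1.0 + 0.1 * rng.standard_normal(n)   # Gaussian polydispersity; the std is the disorder parameter
radii = np.clip(radii, 0.5, None)
mass = radii ** 3                             # mass ~ volume (unit density)
k_hertz = 1.0e6                               # assumed contact stiffness prefactor

# Particles initially just touching along a line; a small pulse is imparted at one end.
x = np.concatenate(([0.0], np.cumsum(radii[:-1] + radii[1:])))
v = np.zeros(n)
v[0] = 0.1

def forces(x):
    overlap = (radii[:-1] + radii[1:]) - np.diff(x)        # positive when compressed
    f_contact = k_hertz * np.clip(overlap, 0.0, None) ** 1.5
    f = np.zeros(n)
    f[:-1] -= f_contact       # contact pushes the left particle back
    f[1:] += f_contact        # and the right particle forward
    return f

dt, steps = 1e-4, 20000
f = forces(x)
for _ in range(steps):        # velocity Verlet
    v += 0.5 * dt * f / mass
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f / mass

print("pulse front (particle with max |v|):", int(np.argmax(np.abs(v))))
```

Repeating this for many random realizations of the radii and averaging the velocity signals is the ensemble averaging step that isolates the coherent wavefront from the coda.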
242 Analysis of Vortex-Induced Vibration Characteristics for a Three-Dimensional Flexible Tube
Authors: Zhipeng Feng, Huanhuan Qi, Pingchuan Shen, Fenggang Zang, Yixiong Zhang
Abstract:
Numerical simulations of the vortex-induced vibration of a three-dimensional flexible tube under uniform turbulent flow are carried out at a Reynolds number of 1.35×10⁴. In order to capture the vortex-induced vibration, the three-dimensional unsteady, viscous, incompressible Navier-Stokes equations and an LES turbulence model are solved with the finite volume approach, the tube is discretized according to finite element theory, and its dynamic equilibrium equations are solved by the Newmark method. The fluid-tube interaction is realized by utilizing the diffusion-based smooth dynamic mesh method. For the vortex-induced vibration system, the variation trends of the lift coefficient, drag coefficient, displacement, vortex shedding frequency, and phase difference angle of the tube are analyzed under different frequency ratios. The nonlinear phenomena of lock-in and phase switching are captured successfully. Meanwhile, the limit cycle and bifurcation of the lift coefficient and displacement are analyzed using trajectories, phase portraits, and Poincaré sections. The results reveal that when the drag coefficient reaches its minimum value, the transverse amplitude reaches its maximum and lock-in begins simultaneously. In the lock-in range, the amplitude decreases gradually with increasing frequency ratio. When the lift coefficient reaches its minimum value, the phase difference undergoes a sudden change from the out-of-phase to the in-phase mode.
Keywords: Vortex induced vibration, limit cycle, CFD, FEM.
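The Newmark time integration mentioned in the abstract can be illustrated on a single-degree-of-freedom oscillator standing in for one structural mode of the tube; the structural properties and the sinusoidal stand-in for the lift force below are assumptions, not the coupled CFD-FEM model.

```python
# Hedged sketch: Newmark-beta (average acceleration) integration of
# m*x'' + c*x' + k*x = F(t). All numerical values are assumed for illustration.
import numpy as np

m, c, k = 1.0, 0.5, 39.48           # mass, damping, stiffness (natural frequency ~1 Hz)
beta, gamma = 0.25, 0.5             # average-acceleration Newmark parameters
dt, nsteps = 0.01, 4000
f_shed = 0.95                       # assumed vortex-shedding frequency (Hz)
t = np.arange(nsteps + 1) * dt
F = 0.5 * np.sin(2 * np.pi * f_shed * t)    # sinusoidal stand-in for the lift force

x = np.zeros(nsteps + 1); v = np.zeros(nsteps + 1); a = np.zeros(nsteps + 1)
a[0] = (F[0] - c * v[0] - k * x[0]) / m
k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)

for n in range(nsteps):
    p_eff = (F[n + 1]
             + m * (x[n] / (beta * dt ** 2) + v[n] / (beta * dt)
                    + (0.5 / beta - 1.0) * a[n])
             + c * (gamma * x[n] / (beta * dt) + (gamma / beta - 1.0) * v[n]
                    + dt * (0.5 * gamma / beta - 1.0) * a[n]))
    x[n + 1] = p_eff / k_eff
    a[n + 1] = ((x[n + 1] - x[n]) / (beta * dt ** 2)
                - v[n] / (beta * dt) - (0.5 / beta - 1.0) * a[n])
    v[n + 1] = v[n] + dt * ((1.0 - gamma) * a[n] + gamma * a[n + 1])

print(f"peak response over the last 5 s: {np.max(np.abs(x[-500:])):.3f}")
```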
241 Quantifying the UK’s Future Thermal Electricity Generation Water Use: Regional Analysis
Authors: Daniel Murrant, Andrew Quinn, Lee Chapman
Abstract:
A growing population has led to increasing global water and energy demand. This demand, combined with the effects of climate change and an increasing need to maintain and protect the natural environment, represents a potentially severe threat to many national infrastructure systems. This has resulted in a considerable quantity of published material on the interdependencies that exist between the supply of water and the thermal generation of electricity, often known as the water-energy nexus. Focusing specifically on the UK, there is a growing concern that the future availability of water may at times constrain thermal electricity generation and therefore hinder the UK in meeting its increasing demand for a secure and affordable supply of low-carbon electricity. To provide further information on the threat the water-energy nexus may pose to the UK's energy system, this paper models the regional water demand of UK thermal electricity generation in 2030 and 2050. It uses the strategically important Energy Systems Modelling Environment developed by the Energy Technologies Institute. Unlike previous research, this paper was able to use abstraction and consumption factors specific to UK power stations. It finds that by 2050 the South East, Yorkshire and the Humber, the West Midlands and the North West are the regions with the greatest freshwater demand and therefore the most likely to suffer from a lack of resource. However, it also finds that by 2050 the East, South West and East Midlands are the regions with the greatest total water (fresh, estuarine and seawater) demand and the most likely to be constrained by environmental standards.
Keywords: Water-energy nexus, water resources, abstraction, climate change, power station cooling.
240 Dislocation Modelling of the 1997-2009 High-Precision Global Positioning System Displacements in Darjiling-Sikkim Himalaya, India
Authors: Kutubuddin Ansari, Malay Mukul, Sridevi Jade
Abstract:
We used high-precision Global Positioning System (GPS) measurements to geodetically constrain the motion of stations in the Darjiling-Sikkim Himalayan (DSH) wedge and examine the deformation at the Indian-Tibetan plate boundary using IGS (International GPS Service) fiducial stations. A high-precision GPS-based displacement and velocity field was measured in the DSH between 1997 and 2009. To obtain additional insight north of the Indo-Tibetan border and in the Darjiling-Sikkim-Tibet (DaSiT) wedge, published velocities from four stations, J037, XIGA, J029 and YADO, were also included in the analysis. India-fixed velocities, or the back-slip, were computed relative to the pole of rotation of the Indian Plate (latitude 52.97 ± 0.22°, longitude -0.30 ± 3.76°, and angular velocity 0.500 ± 0.008°/Myr) in the DaSiT wedge. Dislocation modelling was carried out with the back-slip, using a forward modelling approach based on dislocation theory, to find the finite rectangular dislocation, i.e. the causative fault, that best reproduces the observed back-slip. To find the best possible solution, three different models were attempted: slip along a single thrust fault, then along two thrust faults, and finally along three thrust faults was modelled to simulate the back-slip in the DaSiT wedge. The three-fault case best fits the measured displacements and is taken as the best possible solution.
Keywords: Global Positioning System, Darjiling-Sikkim Himalaya, Dislocation modelling.
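As a hedged illustration of how the quoted pole of rotation translates into a predicted plate velocity at a site, the sketch below evaluates v = Ω × r with the pole parameters from the abstract; the site coordinates (roughly Gangtok, Sikkim) are an assumption, and a spherical Earth is used for simplicity.

```python
# Hedged sketch: linear plate velocity at a site from an Euler pole, v = Omega x r.
# Pole values come from the abstract; the site location is an assumed example.
import numpy as np

R_EARTH_KM = 6371.0

def unit_vec(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

pole_lat, pole_lon, omega_deg_per_myr = 52.97, -0.30, 0.500
site_lat, site_lon = 27.33, 88.61          # assumed site (approx. Gangtok)

omega_vec = np.radians(omega_deg_per_myr) * unit_vec(pole_lat, pole_lon)  # rad/Myr
r_site = R_EARTH_KM * unit_vec(site_lat, site_lon)                        # km
v_xyz = np.cross(omega_vec, r_site)        # km/Myr, numerically equal to mm/yr

# Project onto the local east and north directions at the site.
lat, lon = np.radians([site_lat, site_lon])
east = np.array([-np.sin(lon), np.cos(lon), 0.0])
north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
print(f"predicted plate motion: {v_xyz @ east:.1f} mm/yr east, {v_xyz @ north:.1f} mm/yr north")
```

Subtracting this predicted rigid-plate motion from the observed station velocities is what yields the back-slip field used in the dislocation modelling.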
239 Conciliation Bodies as an Effective Tool for the Enforcement of Air Passenger Rights: Examination of an Exemplary Model in Germany
Authors: C. Hipp
Abstract:
EU Regulation (EC) No 261/2004, under which air passengers can claim compensation in the event of denied boarding, cancellation or long delay of flights, has to be regarded as substantial progress for consumer protection in the field of air transport since it entered into force in February 2005. Nevertheless, different reviews of its functioning demonstrate that most passengers affected by service disruptions do not enforce their complaints and claims against the airline. The main cause of this is not only the unclear legal situation, the regulation itself suffering from many undetermined terms and loopholes, but also the strategy of the airlines, which either do not handle passengers' complaints or exclude their duty to compensate them. From an economic perspective, reasons such as the long duration of a trial and the cost risk relative to the amount of compensation make it understandable that passengers are deterred from enforcing their rights by filing a lawsuit. The paper focuses on alternative dispute resolution, namely the recently established conciliation bodies that deal with air passenger rights. In this paper, the Conciliation Body for Public Transport in Germany (Schlichtungsstelle für den öffentlichen Personenverkehr – SÖP) is examined as a successful example of an independent consumer arbitration service. It was founded in 2009 and has dealt with complaints in the field of air passenger rights since November 2013. In the current situation, one has to admit that, owing to its structure and operation, it meets the needs of the airlines by giving them an efficient customer relationship management tool on the one hand, and contributes effectively to the enforcement of air passenger rights on the other.
Keywords: Air passenger rights, alternative dispute resolution (ADR), consumer protection, EU law regulation (EC) No 261/2004.
238 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel
Abstract:
In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional-order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images is relatively new and is of interest to clinicians.
Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.
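A minimal sketch of the arithmetic-to-geometric mean ratio field on a synthetic speckled image, with a square window approximating the circular neighborhood set and a simple quantile threshold standing in for the likelihood-based one; the window size, speckle statistics and threshold are illustrative assumptions.

```python
# Hedged sketch: AM/GM ratio kernel over a local window of a synthetic speckled
# image, followed by a naive threshold. Not the LRFM scheme of the abstract.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
image = rng.gamma(shape=4.0, scale=25.0, size=(128, 128))   # synthetic speckle
image[:, 64:] *= 3.0                                        # a brighter region creates an edge
eps = 1e-12

size = 7                                                    # square window approximating the circular set
arith_mean = uniform_filter(image, size=size)
geo_mean = np.exp(uniform_filter(np.log(image + eps), size=size))
ratio = arith_mean / (geo_mean + eps)                       # AM/GM >= 1, larger where the window is heterogeneous

edges = ratio > np.quantile(ratio, 0.95)                    # illustrative threshold on the ratio field
print("flagged pixels:", int(edges.sum()))
```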
237 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions
Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang
Abstract:
Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum-pressure fluidization. In particular, the fine chemical industry requires processing of thermolabile substances under safe conditions, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, to the authors' best knowledge, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. The slip flow model, implemented through Ansys Fluent User Defined Functions (UDF), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and its hydrodynamics. Furthermore, they also show that the existing Gorosko correlation that predicts bed expansion is not applicable under reduced-pressure conditions.
Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
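To illustrate why the Knudsen number separates the flow regimes at the pressures studied, the sketch below evaluates Kn = λ/d_p with the mean free path λ = k_B·T/(√2·π·d²·p); the molecular and particle diameters and the temperature are assumed values, not the paper's operating data.

```python
# Hedged sketch: Knudsen number of the interstitial gas at the studied pressures,
# using an assumed air molecule diameter and an assumed particle diameter.
import numpy as np

K_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # assumed gas temperature, K
D_MOLECULE = 3.7e-10      # assumed effective diameter of an air molecule, m
D_PARTICLE = 100e-6       # assumed characteristic particle diameter, m

pressures_bar = np.array([1.01, 0.5, 0.25, 0.1, 0.03])
p_pa = pressures_bar * 1e5

mean_free_path = K_B * T / (np.sqrt(2) * np.pi * D_MOLECULE**2 * p_pa)
knudsen = mean_free_path / D_PARTICLE
for p, kn in zip(pressures_bar, knudsen):
    regime = "continuum" if kn < 1e-3 else ("slip flow" if kn < 0.1 else "transitional")
    print(f"{p:5.2f} bar: Kn = {kn:.2e} ({regime})")
```

Under these assumptions the lowest pressures fall into the slip-flow regime, which is why a slip correction to the interphase momentum exchange is needed there.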
236 Fuzzy Control of Thermally Isolated Greenhouse Building by Utilizing Underground Heat Exchanger and Outside Weather Conditions
Authors: Raghad Alhusari, Farag Omar, Moustafa Fadel
Abstract:
A traditional greenhouse is a metal-frame agricultural building used for cultivating plants in a controlled environment isolated from external climatic changes. Using greenhouses in agriculture is an efficient way to reduce water consumption, as agriculture is considered the biggest water consumer worldwide. Controlling the greenhouse environment yields better plant productivity but demands an increase in electric power. Although various control approaches have been used for greenhouse automation, most of them are applied to traditional greenhouses with ventilation fans and/or evaporative cooling systems. Such approaches still demand high energy and water consumption. The aim of this research is to develop a fuzzy control system that minimizes water and energy consumption by utilizing outside weather conditions and an underground heat exchanger to maintain the optimum climate of the greenhouse. The proposed control system is implemented on an experimental model of a thermally isolated greenhouse structure with dimensions of 6 x 5 x 2.8 meters. It uses fans for extracting heat from the ground heat exchanger system, motors for automatically opening and closing the greenhouse windows, and LEDs as the lighting system. The controller is also integrated with environmental condition sensors. It was found that using an air-to-air horizontal ground heat exchanger with 90 mm diameter and 2 mm thickness, placed 2.5 m below the ground surface, results in a decrease in greenhouse temperature of 3.28 °C, which saves around 3 kW of consumed power. It also eliminates the water consumption needed in evaporative cooling systems, which are traditionally used for cooling the greenhouse environment.
Keywords: Automation, earth-to-air heat exchangers, fuzzy control, greenhouse, sustainable buildings.
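A minimal sketch of a fuzzy rule base of the kind described above, mapping inside and outside temperatures to a fan duty cycle with triangular memberships and Sugeno-style weighted-average defuzzification; the membership breakpoints and rules are illustrative assumptions, not the implemented controller.

```python
# Hedged sketch of a tiny fuzzy controller: inside/outside temperatures -> fan
# duty for the ground heat exchanger. All breakpoints and rules are assumed.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fan_duty(t_inside, t_outside):
    # Fuzzify inside temperature: cool / comfortable / hot
    cool = tri(t_inside, 10, 18, 24)
    comfortable = tri(t_inside, 20, 25, 30)
    hot = tri(t_inside, 27, 35, 45)
    # Fuzzify outside temperature: mild / harsh
    mild = tri(t_outside, 10, 20, 32)
    harsh = tri(t_outside, 28, 42, 55)

    # Rules (AND = min), each with a crisp fan-duty consequent (Sugeno style).
    rules = [
        (min(hot, harsh), 1.0),    # hot inside, harsh outside -> full ground cooling
        (min(hot, mild), 0.6),     # hot inside, mild outside  -> moderate fans
        (comfortable, 0.3),        # comfortable               -> low circulation
        (cool, 0.0),               # cool inside               -> fans off
    ]
    weights = np.array([w for w, _ in rules])
    outputs = np.array([o for _, o in rules])
    return float((weights * outputs).sum() / (weights.sum() + 1e-9))

print(f"fan duty at 33 C inside / 44 C outside: {fan_duty(33, 44):.2f}")
```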
235 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers' needs in this strong market. For customers who are looking to change their service providers, understanding their needs is especially important. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish it. Churn prediction has become a very important topic for machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors of customers' churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, the study compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results. The results showed that Gradient Boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
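The modelling pipeline described in the abstract (normalization, feature selection, gradient boosting, F1 and ROC-AUC evaluation) can be sketched as follows on synthetic, churn-like imbalanced data; the real study used the Orange dataset, which is not reproduced here, so the printed scores are not comparable to the reported 99%.

```python
# Hedged sketch of a churn-style pipeline on synthetic imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=3000, n_features=30, n_informative=10,
                           weights=[0.85, 0.15], random_state=0)  # imbalanced like churn
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                 # normalization
    ("select", SelectKBest(f_classif, k=15)),  # feature selection
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
proba = model.predict_proba(X_te)[:, 1]
print(f"F1 = {f1_score(y_te, pred):.3f}, ROC-AUC = {roc_auc_score(y_te, proba):.3f}")
```

Swapping the final estimator for LogisticRegression, RandomForestClassifier or DecisionTreeClassifier reproduces the four-algorithm comparison described in the abstract.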
234 Measurements of MRI R2* Relaxation Rate in Liver and Muscle: Animal Model
Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao
Abstract:
This study aimed to measure the effective transverse relaxation rates (R2*) in the liver and muscle of normal New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used in various hepatic diseases to assess iron overload by quantifying the iron content in the liver. The R2* relaxation rate is defined as the reciprocal of the T2* relaxation time and depends mainly on the constituents of the tissue; different tissues therefore have different R2* relaxation rates. The signal intensity decay in magnetic resonance imaging (MRI) may be characterized by R2* relaxation rates. In this study, a 1.5 T GE Signa HDxt whole-body MR scanner equipped with an 8-channel high-resolution knee coil was used to observe R2* values in NZW rabbit liver and muscle. Eight healthy NZW rabbits weighing 2-2.5 kg were recruited. After anesthesia using a Zoletil 50 and Rompun 2% mixture, the abdomen of each rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using a fast spoiled gradient echo (FSPGR) pulse sequence. Afterwards, multi-planar fast gradient echo (MFGR) scans were performed with 8 different echo times (TEs) to acquire images for R2* measurements. Regions of interest (ROIs) in the liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s⁻¹ and 37.4 ± 9.5 s⁻¹, respectively. This implies that the iron concentration of the liver is higher than that of muscle. In conclusion, the more iron a tissue contains, the higher its R2*. The correlations between R2* and iron content in NZW rabbits might be valuable for further exploration.
Keywords: Liver, MRI, multi-planar fast gradient echo, muscle, R2* relaxation rate.
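The R2* fit described above reduces to a linear regression of ln(SI) on TE, since SI(TE) = S0·exp(−R2*·TE); the sketch below applies it to synthetic eight-echo data, with the echo times and noise level chosen for illustration rather than taken from the scan protocol.

```python
# Hedged sketch: estimating R2* from multi-echo signal intensities by a linear
# fit of ln(SI) versus TE. Echo times and signals are synthetic placeholders.
import numpy as np

te_ms = np.array([2.3, 4.6, 6.9, 9.2, 11.5, 13.8, 16.1, 18.4])   # 8 echo times (ms)
te_s = te_ms / 1000.0
true_r2star, s0 = 45.0, 1000.0                                    # s^-1 and a.u., synthetic
rng = np.random.default_rng(0)
signal = s0 * np.exp(-true_r2star * te_s) * (1 + 0.01 * rng.standard_normal(te_s.size))

slope, intercept = np.polyfit(te_s, np.log(signal), 1)            # ln(SI) = ln(S0) - R2* * TE
print(f"fitted R2* = {-slope:.1f} s^-1 (T2* = {-1000.0 / slope:.1f} ms)")
```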
233 Quality Classification and Monitoring Using Adaptive Metric Distance and Neural Networks: Application in Pickling Process
Authors: S. Bouhouche, M. Lahreche, S. Ziani, J. Bast
Abstract:
Modern manufacturing facilities are large-scale, highly complex, and operate with a large number of variables under closed-loop control. Early and accurate fault detection and diagnosis for these plants can minimise downtime, increase the safety of plant operations, and reduce manufacturing costs. Fault detection and isolation is more complex, particularly in the case of faulty analog control systems, which are not equipped with a monitoring function in which the process parameters are continually visualised. In this situation, it is very difficult to find the relationship between the importance of a fault and its consequences for product failure. In this paper, we consider an approach to fault detection, and to the analysis of its effect on production quality, using adaptive centring and scaling in the pickling process in cold rolling. The fault appeared on one of the power units driving a rotary machine, which then could not track a reference speed given by another machine. The length of the metal loop therefore oscillates continuously, which affects product quality. Using a computerised data acquisition system, the main machine parameters have been monitored. The fault has been detected and isolated on the basis of an analysis of the monitored data. Normal and faulty situations have been obtained from an artificial neural network (ANN) model implemented to simulate the normal and faulty status of the rotary machine. The correlation between the product quality, defined by an index, and the residual is used for quality classification.
Keywords: Modeling, fault detection and diagnosis, parameters estimation, neural networks, Fault Detection and Diagnosis (FDD), pickling process.
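A hedged sketch of the residual idea described above: a neural network fitted to normal operating data (reference speed versus measured speed) yields residuals that grow when the drive fails to track the reference; all signals, thresholds and network settings below are synthetic stand-ins, not the plant data or the authors' ANN model.

```python
# Hedged sketch: residual-based fault flagging with a small neural network
# trained on (synthetic) normal operating data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 2000)
ref_speed = 2.0 + 0.3 * np.sin(0.2 * t)                    # reference speed profile (assumed)
normal_speed = ref_speed + 0.02 * rng.standard_normal(t.size)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(ref_speed.reshape(-1, 1), normal_speed)          # model of the healthy machine

# New data: the drive fails to track the reference (oscillating loop length).
faulty_speed = ref_speed + 0.25 * np.sin(3.0 * t) + 0.02 * rng.standard_normal(t.size)
residual = faulty_speed - model.predict(ref_speed.reshape(-1, 1))
threshold = 3 * np.std(normal_speed - model.predict(ref_speed.reshape(-1, 1)))
print(f"fault flagged on {np.mean(np.abs(residual) > threshold):.0%} of samples")
```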
232 A Qualitative Evidence of the Markedness of Code Switching during Commercial Bank Service Encounters in Ìbàdàn Metropolis
Authors: A. Robbin
Abstract:
In a multilingual setting like Nigeria, the success of service encounters is enhanced by the use of a language that meets the linguistic and persuasive demands of the interlocutors. This study examined motivations for code switching as a negotiation strategy in bank-hall desk service encounters in the Ìbàdàn metropolis, using Myers-Scotton's work on markedness in language use. The data consisted of transcribed audio recordings of bank-hall service encounters and direct observation of bank interactions in two purposively sampled commercial banks in the Ìbàdàn metropolis. The data were subjected to descriptive linguistic analysis using Myers-Scotton's Markedness Model. Findings reveal that code switching is frequently employed during different stages of the service encounter (greeting, transaction and closing) to fulfil relational, bargaining and referential functions. Bank staff and customers code switch to make unmarked, marked and explanatory choices: as a strategy to identify with a customer's cultural affiliation, close the status gap, and appeal to a begrudged customer, or as an explanatory choice with non-literate customers for ease of communication. Bankers select English to maintain customers' perceptions of prestige, which is retained or diverged from depending on their linguistic preference or ability. Yoruba is seen as an efficient negotiation strategy by both bankers and their customers, who make choices within the conversation to achieve the desired conversational and functional aims.
Keywords: Markedness, bilingualism, code switching, service encounter, banking.
231 Fuzzy Optimization in Metabolic Systems
Authors: Feng-Sheng Wang, Wu-Hsiung Wu, Kai-Cheng Hsu
Abstract:
The optimization of biological systems, which is a branch of metabolic engineering, has long generated considerable industrial and academic interest. In the last decade, metabolic engineering approaches based on mathematical optimization have been used extensively for the analysis and manipulation of metabolic networks. In the practical optimization of metabolic reaction networks, designers have to manage the uncertainty resulting from the qualitative characteristics of metabolic reactions, e.g., the possibility of enzyme effects. A deterministic approach does not give an adequate representation of metabolic reaction networks with uncertain characteristics. Fuzzy optimization formulations can be applied to cope with this problem. A fuzzy multi-objective optimization problem can be introduced for finding the optimal engineering interventions on metabolic network systems, considering the resilience phenomenon and cell viability constraints. The accuracy of the optimization results depends heavily on the development of essential kinetic models of metabolic networks. Kinetic models can quantitatively capture the experimentally observed regulation data of metabolic systems and are often used to find the optimal manipulation of external inputs. To address the issues of optimizing the regulatory structure of metabolic networks, it is necessary to consider qualitative effects, e.g., the resilience phenomena and cell viability constraints. Combining the qualitative and quantitative descriptions of metabolic networks makes it possible to design a viable strain and accurately predict the maximum possible flux rates of desired products. Considering the resilience phenomena in metabolic networks can improve the predictions of gene intervention and maximum synthesis rates in metabolic engineering. Two case studies will be presented at the conference to illustrate these phenomena.
Keywords: Fuzzy multi-objective optimization problem, kinetic model, metabolic engineering.
230 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia
Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden
Abstract:
The proposed stringent mitigation targets require an immediate start to a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal- and gas-fired plants dominating the electricity produced over the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, where wind and solar energy could play a key role in the decarbonization of Australia's future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system for ensuring the security of supply with increasing shares of such intermittent sources. One main flexibility dimension to facilitate the system integration of high shares of VRES is to increase cross-sectoral integration by coupling electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model for Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.
Keywords: Decarbonization, energy system modeling, sector coupling, variable renewable energies.
229 An Integrated Experimental and Numerical Approach to Develop an Electronic Instrument to Study Apple Bruise Damage
Authors: Paula Pascoal-Faria, Rúben Pereira, Elodie Pinto, Miguel Belbut, Ana Rosa, Inês Sousa, Nuno Alves
Abstract:
Apple bruise damage from harvesting, handling, transporting and sorting is considered to be the major source of reduced fruit quality, resulting in a loss of profits for the entire fruit industry. The three factors which can physically cause fruit bruising are vibration, compression load and impact, the latter being the most common source of bruise damage. Therefore, prediction of the level of damage, stress distribution and deformation of the fruits under external force has become a very important challenge. In this study, experimental and numerical methods were used to better understand the impact caused when an apple is dropped from different heights onto a plastic surface and a conveyor belt. Results showed that the extent of fruit damage is significantly higher for the plastic surface and depends on the drop height. In order to support the development of a biomimetic electronic device for the determination of fruit damage, the mechanical properties of the apple fruit were determined using mechanical tests. Preliminary results showed different values of Young's modulus according to the zone of the apple tested. Along with the mechanical characterization of the apple fruit, the development of the first two prototypes is discussed, and the integration of the results obtained to construct the finite element model of the apple is presented. This work will help to significantly reduce the bruise damage of fruits and vegetables during processing, which will allow new export destinations to be reached and, consequently, an increase in the economic profits of this sector.
Keywords: Apple, fruit damage, impact during crop and post-crop, mechanical characterization of the apple, numerical evaluation of fruit bruise damage, electronic device.
228 Toppling Failure Analysis of Anti-Dip Bedding Rock Slopes Subjected to Crest Loads
Authors: Chaoyi Sun, Congxin Chen, Yun Zheng, Kaizong Xia, Wei Zhang
Abstract:
Crest loads are often encountered in hydropower, highway, open-pit and other engineering rock slopes. Toppling failure is one of the most common deformation failure types of anti-dip bedding rock slopes. The analysis of such failure of anti-dip bedding rock slopes subjected to crest loads is therefore of importance in engineering practice. Based on the step-by-step analysis approach proposed by Goodman and Bray, a geo-mechanical model was developed, and the related analysis approach was proposed, for the toppling failure of anti-dip bedding rock slopes subjected to crest loads. Using the transfer coefficient method, a formulation was derived for calculating the residual thrust at the slope toe and the support force required to meet the slope stability requirements under crest loads, which provides a scientific reference for the design and support of such slopes. Through slope examples, the influence of crest loads on the residual thrust and the sliding ratio coefficient was investigated for cases of different block widths and slope cut angles. The results show that there exists a critical block width for such slopes. The influence of crest loads on the residual thrust is non-negligible when the block thickness is smaller than the critical value. Moreover, the influence of crest loads on the slope stability increases with the slope cut angle, and the sliding ratio coefficient of anti-dip bedding rock slopes increases with the crest loads. Finally, the theoretical solutions were compared with numerical simulations using the Universal Distinct Element Code (UDEC); the consistent results show the applicability of both approaches.
Keywords: Anti-dip slopes, crest loads, stability analysis, toppling failure.
227 Shear Strength of Reinforced Web Openings in Steel Beams
Authors: K. S. Sivakumaran, Bo Chen
Abstract:
The floor beams of steel buildings, cold-formed steel floor joists in particular, often require large web openings, which may affect their shear capacities. A cost-effective way to mitigate the detrimental effects of such openings is to weld or fasten reinforcements. A difficulty associated with an experimental investigation to establish suitable reinforcement schemes for openings in the shear zone is that moment always coexists with shear; thus, it is impossible to create a pure shear state in experiments, resulting in moment-influenced results. However, Finite Element Method (FEM) based analysis can conveniently be used to investigate the pure shear behaviour of webs, including webs with reinforced openings. This paper presents the details associated with the finite element analysis of thick/thin plates (representing the web of a hot-rolled steel beam and the web of a cold-formed steel member) having a large reinforced opening. The study considered simply supported rectangular plates subjected to in-plane shear loading until failure (including post-buckling behaviour). The plate was modelled using geometrically non-linear quadrilateral shell elements and a non-linear stress-strain relationship based on experiments. A Total Lagrangian formulation with large displacements and small strains was used for these analyses. The model also considered initial geometric imperfections. This study considered three reinforcement schemes, namely flat, lip, and angle reinforcements. This paper discusses the modelling considerations and presents the results associated with the various reinforcement schemes under consideration.
Keywords: Cold-formed steel, finite element analysis, opening, reinforcement, shear resistance.
226 Korea and Japan Economic Relations: An Analysis through the World Trade Organization
Authors: Caroline S. Dutra, Tatiana C. Squeff
Abstract:
It is well known that the history between South Korea and Japan influences their international relations and thus also their economic relations. In this sense, it is impossible to analyze the latter without understanding the development of the former, which is known for episodes of hostility, such as the Japanese colonization, but also had moments of cultural and trade exchange. Indeed, since 1965, with the establishment of diplomatic relations between both countries, their trade relations have improved, especially after both nations signed the General Agreement on Tariffs and Trade (GATT). Thereafter, with the establishment of the World Trade Organization (WTO) in 1995, another chapter of their diplomatic and economic relations was inaugurated. Hence, bearing in mind this history between both nations, this research intends to examine their relations through an analysis of the WTO panels they have engaged in against each other, which are, in chronological order, "DS323: Japan – Import Quotas on Dried Laver and Seasoned Laver", "DS336: Japan – Countervailing Duties on Dynamic Random Access Memories from Korea", "DS495: Korea – Import Bans, and Testing and Certification Requirements for Radionuclides", "DS553: Korea – Sunset Review of Anti-Dumping Duties on Stainless Steel Bars" and "DS571: Korea – Measures Affecting Trade in Commercial Vessels". The objective of this case analysis is to point out which areas are most conflictual between Japan and South Korea with regard to their economic relations, so that it is possible to draw conclusions about their future (economic) relations and other possible outcomes. To do so, bibliographic and documentary research will be conducted, particularly involving the WTO and the nations under consideration. Regarding the methods used, it is important to highlight that this is applied research in the field of international economic relations and international law, which follows a hypothetico-deductive model.
Keywords: International economic relations, Japan, South Korea, World Trade Organization.
225 Non-Linear Load-Deflection Response of Shape Memory Alloys-Reinforced Composite Cylindrical Shells under Uniform Radial Load
Authors: Behrang Tavousi Tehrani, Mohammad-Zaman Kabir
Abstract:
Shape memory alloys (SMA) are often implemented in smart structures as the active components. Their ability to recover large displacements has been used in many applications, including structural stability/response enhancement and active structural acoustic control. SMA wires or fibers can be embedded within composite cylinders to increase their critical buckling load, improve their load-deflection behavior, and reduce the radial deflections under various thermo-mechanical loadings. This paper presents a semi-analytical investigation of the non-linear load-deflection response of SMA-reinforced composite circular cylindrical shells. The cylindrical shells are under a uniform external pressure load. Based on first-order shear deformation shell theory (FSDT), the equilibrium equations of the structure are derived. The simplified one-dimensional Brinson model is used for determining the SMA recovery force due to its simplicity and accuracy. The Airy stress function and the Galerkin technique are used to obtain the non-linear load-deflection curves. The results are verified by comparison with those in the literature. Several parametric studies are conducted in order to investigate the effect of the SMA volume fraction, SMA pre-strain value, and SMA activation temperature on the response of the structure. It is shown that suitable usage of SMA wires results in a considerable enhancement of the load-deflection response of the shell due to the generation of the SMA tensile recovery force.
Keywords: Airy stress function, cylindrical shell, Galerkin technique, load-deflection curve, recovery stress, shape memory alloy.
224 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings
Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez
Abstract:
The optimal load factors (dead, live and seismic) used for the design of buildings may differ depending on the characteristics of the seismic ground motions to which the buildings are subjected, which are closely related to the type of soil conditions where the structures are located. The influence of the type of soil on those load factors is analyzed in the present study. A methodology useful for establishing optimal load factors that minimize the cost over the life cycle of the structure is employed; as a restriction, it is established that the probability of structural failure must be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings. One set (consisting of 4-, 7-, and 10-story buildings) is located on firm ground (with a dominant period Ts = 0.5 s) and the other (consisting of 6-, 12-, and 16-story buildings) on the soft soil (Ts = 1.5 s) of Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located in the firm zone are analyzed under the action of 10 strong seismic records, and those in the soft zone under 13 strong ground motions. All the motions correspond to seismic subduction events with magnitudes M = 6.9. Then, the structural damage and the expected total costs corresponding to each group of buildings are estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground is different from that for buildings located on soft soil.
Keywords: Life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil.
223 Statistical Analysis and Optimization of a Process for CO2 Capture
Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi
Abstract:
CO2 capture and storage technologies play a significant role in contributing to the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level, without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels and four variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5-1.5 L/min), reactor temperature (10-50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.
Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.
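The response-surface step can be sketched as fitting a second-order polynomial in the four operating variables and searching it for the optimum; the design points and response values below are synthetic placeholders generated from an assumed smooth function, not the measured capture efficiencies.

```python
# Hedged sketch: second-order response-surface fit over the four operating
# variables and a bounded search for the predicted optimum. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 30
# Columns: gas flow (L/min), temperature (C), buffer (%), salinity (g NaCl/L)
X = np.column_stack([rng.uniform(0.5, 1.5, n), rng.uniform(10, 50, n),
                     rng.uniform(0.2, 2.6, n), rng.uniform(25, 197, n)])
# Placeholder response: a smooth synthetic "capture efficiency" with noise.
y = (95 - 8 * (X[:, 0] - 0.8) ** 2 - 0.005 * (X[:, 1] - 25) ** 2
     - 2 * (X[:, 2] - 1.8) ** 2 - 0.0002 * (X[:, 3] - 100) ** 2
     + rng.normal(0, 1, n))

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Search the design space for the operating point maximizing the predicted response.
bounds = [(0.5, 1.5), (10, 50), (0.2, 2.6), (25, 197)]
res = minimize(lambda x: -model.predict(poly.transform([x]))[0],
               x0=[1.0, 30, 1.4, 111], bounds=bounds)
print("predicted optimum (flow, T, buffer, salinity):", np.round(res.x, 2))
```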
222 An Overall Approach to the Communication of Organizations in Conventional and Virtual Offices
Authors: Mehmet Altınöz
Abstract:
Organizational communication is an administrative function crucial especially for executives in the implementation of organizational and administrative functions. Executives spend a significant part of their time on communicative activities. Doing his or her daily routine, arranging meeting schedules, speaking on the telephone, reading or replying to business correspondence, or fulfilling the control functions within the organization, an executive typically engages in communication processes. Efficient communication is the principal device for the adequate implementation of administrative and organizational activities. For this purpose, management needs to specify the kind of communication system to be set up and the kind of communication devices to be used. Communication is vital for any organization. In conventional offices, communication takes place within the hierarchical pyramid called the organizational structure, and is known as formal or informal communication. Formal communication is the type that works in specified structures within the organizational rules and towards the organizational goals. Informal communication, on the other hand, is the unofficial type taking place among staff as face-to-face or telephone interaction. Communication in virtual as well as conventional offices is essential for obtaining the right information in administrative activities and decision-making. Virtual communication technologies increase the efficiency of communication especially in virtual teams. Group communication is strengthened through an inter-group central channel. Further, ease of information transmission makes it possible to reach the information at the source, allowing efficient and correct decisions. Virtual offices can present as a whole the elements of information which conventional offices produce in different environments. At present, virtual work has become a reality with its pros and cons, and will probably spread very rapidly in coming years, in line with the growth in information technologies.
Keywords: Organization, conventional office, virtual office, communication, communication model, communication functions, communication methods, vertical communication, linear communication, diagonal communication.
221 Data Privacy and Safety with Large Language Models
Authors: Ashly Joseph, Jithu Paulose
Abstract:
Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods, including post-processing detection algorithms, content filtering, and reinforcement learning from human and AI feedback, as well as the difficulties of maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and best practices for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence by employing case studies and real-life instances. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also ensuring their responsible and ethical implementation.
Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
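As a hedged illustration of the post-processing content filtering discussed above, the sketch below redacts common PII patterns from a model's output before display; the regular expressions are illustrative and far from exhaustive, and no specific product or API is implied.

```python
# Hedged sketch: regex-based detection and redaction of PII in model output.
# Patterns are simple examples, not a complete or production-grade filter.
import re

PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

llm_output = "Contact Jane at jane.doe@example.com or +1 415 555 0100."
print(redact(llm_output))
```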