Search results for: Business Process Reengineering
3753 The Analysis of Defects Prediction in Injection Molding
Authors: Mehdi Moayyedian, Kazem Abhary, Romeo Marian
Abstract:
This paper presents an evaluation of a plastic defect in injection molding, known as the short shot defect, before it occurs in the process. The aim of this paper is to evaluate the different parameters which affect the possibility of the short shot defect. The analysis of short shot possibility is conducted via SolidWorks Plastics and the Taguchi method to determine the most significant parameters. The Finite Element Method (FEM) is employed to analyze two circular flat polypropylene plates of 1 mm thickness. Filling time, part cooling time, pressure holding time, and melt temperature are chosen as process parameters, and gate type as the geometric parameter. A methodology is presented herein to predict the possibility of short shot occurrence. The analysis determined that melt temperature is the most influential parameter affecting the possibility of short shot, with a contribution of 74.25%, followed by filling time with a contribution of 22% and gate type with a contribution of 3.69%. The optimum level of each parameter leading to a reduction in the possibility of short shot was also determined: gate type at level 1, filling time at level 3, and melt temperature at level 3. Finally, the most significant parameters affecting the possibility of short shot were determined to be melt temperature, filling time, and gate type.
Keywords: Injection molding, plastic defects, short shot, Taguchi method.
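The percentage contributions quoted above come from a Taguchi-style analysis of variance on the experimental response. As a rough, self-contained illustration of how such contributions can be computed (the response values, the L9 layout and the factor order below are placeholders, not the paper's data), a minimal Python sketch:

```python
import numpy as np

# Hypothetical responses (e.g., S/N ratios) for an L9 Taguchi array with three
# 3-level factors; values are illustrative only, not the data reported above.
sn = np.array([-12.1, -11.8, -10.9,
               -11.5, -10.6, -10.2,
               -10.8, -10.1,  -9.6])
levels = np.array([  # factor level (0..2) of each run, one column per factor
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1]])

grand_mean = sn.mean()
total_ss = ((sn - grand_mean) ** 2).sum()

for f, name in enumerate(["gate type", "filling time", "melt temperature"]):
    # factor sum of squares: n_per_level * sum over levels of
    # (level mean - grand mean)^2
    ss = sum(
        (sn[levels[:, f] == lv].mean() - grand_mean) ** 2 * (levels[:, f] == lv).sum()
        for lv in range(3))
    print(f"{name}: {100 * ss / total_ss:.2f}% contribution")
```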
3752 Effect of Crude Oil Particle Elasticity on the Separation Efficiency of a Hydrocyclone
Authors: M. H. Narasingha, K. Pana-Suppamassadu, P. Narataruksa
Abstract:
The separation efficiency of a hydrocyclone has extensively been considered under the rigid particle assumption. A number of experimental studies have demonstrated discrepancies from the modeling and simulation results. These discrepancies, caused by the actual particle elasticity, have generally led to a larger amount of energy consumption in the separation process. In this paper, the influence of particle elasticity on the separation efficiency of a hydrocyclone system was investigated through Finite Element (FE) simulations using crude oil droplets as the elastic particles. A Rietema's design hydrocyclone with a diameter of 8 mm was employed to investigate the separation mechanism of the crude oil droplets from water. The cut-size diameter of the crude oil was 10 μm in order to fit the operating range of the adopted hydrocyclone model. Typical parameters influencing the performance of the hydrocyclone were varied, with the feed pressure in the range of 0.3–0.6 MPa and the feed concentration between 0.05–0.1 w%. In the simulation, the Finite Element scheme was applied to investigate the particle-flow interaction occurring in the crude oil system during the process. The interaction of a single oil droplet of 10 μm with the flow field was observed. The feed concentration fell in the dilute flow regime, so the particle-particle interaction was ignored in the study. The results exhibited a higher power requirement for the separation of the elastic particulate system when compared with the rigid particulate system.
Keywords: Hydrocyclone, separation efficiency, strain energy density, strain rate.
3751 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology
Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad
Abstract:
This paper is intended to introduce a technology to solve some of the deficiencies of direct digital radiology. Nowadays, digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology comprises two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, sometimes they need to readjust the sensor in the mouth of the patient and take the X-ray image again due to its low quality. Another problem is that the position of the sensor may move in the mouth of the patient, which produces an inappropriate image for the dentist. This makes it a time-consuming process for dentists and dental nurses. On the other hand, taking several X-ray images brings problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor on the jaw. The authors provide a technology to solve the above-mentioned issues, called “Self-Recognition Direct Digital Radiology” (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of diagnosing the location of the sensor in the mouth of the patient automatically. In addition to solving the aforementioned problems, SDDR technology brings fewer environmental impacts in comparison to the previous version.
Keywords: Dental direct digital imaging, digital image receptor, digital x-ray machine, and environmental impacts.
3750 JEWEL: A Cosmological Model Due to the Geometrical Displacement of Galactic Object Like Black, White and Worm Holes
Authors: Francesco Pia
Abstract:
Stellar objects such as black, white and worm holes can be the subject of speculative reasoning if represented in a simplified, geometric form that allows them to be moved; the cosmological model is one of the most important contexts for such speculations, which can then open the way to aspects that are not strictly speculative but practical in the Universe as we represent it. In this work, thanks to the hypothesis of a very large number of black, white and worm holes present in our Universe, we imagine that they can be moved. They are therefore aligned on a plane and redistributed, and the boundaries of this plane are ideally joined, giving rise to a sphere on which the stellar objects examined are radially distributed. Thanks to geometrical displacements that do not make any of these stellar objects lose their functionality in the region in which they are located, at the end of the speculative process it is possible to highlight a spherical layer that allows a flow from the outside to the inside of this spherical shell, relating it to other external and internal spherical layers; this aspect seems useful to describe the universe we live in, for example inside one of the spherical shells just described. The name "Jewel" was chosen because, following the speculative process presented in this work to the end of its steps, the cosmological model tends to be "luminous". This cosmological model includes, for each internal part of a generic layer, different and numerous moments of our universe thanks to an eternal flow inward. There are many aspects to explore, one of which is the connection between the outermost and innermost spherical layers.
Keywords: Black hole, cosmological model, cosmology, white hole.
3749 Optimal Prices under Revenue Sharing Contract in a Supply Chain with Direct Channel
Authors: Aussadavut Dumrongsiri
Abstract:
We study a dual-channel supply chain in a decentralized setting in which the manufacturer sells to a retailer and to customers directly using an online channel. A customer chooses the purchase channel based on price and service quality. Also, to buy the product from the retail store, the customer incurs a transportation cost influenced by the fluctuating gasoline cost. Both companies are under a revenue sharing contract. In this contract, the retailer shares a portion of the revenue with the manufacturer, while the manufacturer charges a lower wholesale price. The numerical results show that the gasoline cost, the revenue sharing ratio and the wholesale price play an important role in determining optimal prices. The results show that when the gasoline price fluctuates, the optimal online price is relatively stable while the optimal retail price moves in the opposite direction of the gasoline price.
Keywords: Direct-channel, e-business, pricing model, dual-channel supply chain, gasoline cost, revenue sharing.
3748 Supply Chain Management and E-Commerce Technology Adoption among Logistics Service Providers in Malaysia
Authors: Mohd Iskandar bin Illyas Tan, Iziati Saadah bt Ibrahim
Abstract:
Logistics is the part of the supply chain processes that plans, implements, and controls the efficient and effective forward and reverse flow and storage of goods, services, and related information between the point of origin and the point of consumption in order to meet customer requirements. This research aims to investigate the current status and future direction of the use of Information Technology (IT) for logistics, focusing on Supply Chain Management (SCM) and E-Commerce adoption in Malaysia. Therefore, this research focuses on the types of technology being adopted and on the factors, benefits and barriers affecting innovation in SCM and E-Commerce technology adoption among Logistics Service Providers (LSP). A mailed questionnaire survey was conducted to collect data from 265 logistics companies in Johor. The research revealed a high level of SCM technology adoption among LSPs, as they had adopted SCM technology in various business processes, and they perceived a high level of benefits from SCM adoption.
Keywords: E-Commerce, Logistics Service Providers, Malaysia, Supply Chain Management.
3747 Electronic Transactions: Jurisdictional Issues in the European Union
Authors: Faeze Razmpa
Abstract:
One of the main consequences of the ubiquitous use of the Internet as a means to conduct business has been the progressive internationalization of the contracts created to support such transactions. As electronic commerce becomes international commerce, the reality is that commercial disputes will occur, raising such questions as: "In which country do I bring proceedings?" and "Which law is to be applied to solve disputes?" The decentralized, global structure of the Internet and its decentralized operation have given e-commerce a transnational element that affects two questions essential to any transaction: applicable law and jurisdiction in the event of a dispute. The allocation of applicable law and jurisdiction among States in respect of international transactions has traditionally been based on the use of contact factors, generally of a territorial nature (the place where real estate is located, habitual residence, principal establishment, place of shipping goods). The characteristics of the Internet as a new space sometimes make it difficult to apply these rules, and may render them inoperative or lead to results that are surprising or totally foreign to the contracting parties and the other elements and circumstances of the case.
Keywords: Electronic, European Union, Jurisdiction, Internet
3746 Awareness Level of Green Computing among Computer Users in Kebbi State, Nigeria
Authors: A. Mubarak, A. I. Augie
Abstract:
This study investigated the level of awareness of green computing possessed by computer users in Kebbi State. A survey method was employed to carry out the study. The study involved computer users from ICT business/training centers around the Argungu and Birnin Kebbi areas of Kebbi State. A purposive sampling method was used to draw 156 respondents who volunteered to answer the questionnaire administered for gathering the data of the study. Out of the 156 questionnaires distributed, 121 were used for data analysis. In all, 79 respondents were from Argungu, while 42 were from Birnin Kebbi. The two research questions of the study were answered with descriptive statistics (percentages) and inferential statistics (ANOVA). The findings showed that most of the computer users do not possess adequate awareness of the conscious use of computing systems. Also, the study showed that there is no significant difference in the green computing awareness possessed by computer users in Argungu and Birnin Kebbi. Based on these findings, the study suggested, among other things, an aggressive campaign on green computing practice among computer users in Kebbi State.
Keywords: Green computing, awareness, information technology, Energy Star.
3745 A Modular On-line Profit Sharing Approach in Multiagent Domains
Authors: Pucheng Zhou, Bingrong Hong
Abstract:
How to coordinate the behaviors of agents through learning is a challenging problem within multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents, without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other existing agents. However, as the number of agents in a multi-agent environment increases, the state space of each agent grows exponentially, which causes a combinatorial explosion problem. Profit sharing is one of the reinforcement learning methods that allow agents to learn effective behaviors from their experiences even within non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs a large amount of memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. Then, we integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm, and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated using the pursuit problem.
Keywords: Multi-agent learning, reinforcement learning, rational profit sharing, modular architecture.
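For readers unfamiliar with profit sharing, the basic episodic credit-assignment step can be sketched as follows. This is a generic sketch, not the authors' modular on-line variant: every state-action pair visited before a reward receives a geometrically decreasing share of it, with a ratio below 1/|A| as a commonly cited sufficient condition for rational credit assignment. The toy environment interface and constants are assumptions.

```python
import random
from collections import defaultdict

N_ACTIONS = 4
GAMMA = 1.0 / (N_ACTIONS + 1)   # credit ratio below 1/|A| (assumed condition)

weights = defaultdict(lambda: [1.0] * N_ACTIONS)  # rule strengths w(s, a)

def select_action(state):
    # Roulette-wheel selection proportional to rule strengths
    w = weights[state]
    r = random.uniform(0.0, sum(w))
    acc = 0.0
    for a, wa in enumerate(w):
        acc += wa
        if r <= acc:
            return a
    return N_ACTIONS - 1

def profit_sharing_update(episode, reward):
    # episode: list of (state, action) pairs ending when the reward was obtained
    credit = reward
    for state, action in reversed(episode):
        weights[state][action] += credit
        credit *= GAMMA          # earlier pairs receive geometrically less credit

# Usage with a hypothetical environment exposing reset()/step():
# state = env.reset(); episode = []
# while True:
#     a = select_action(state); episode.append((state, a))
#     state, reward, done = env.step(a)
#     if done:
#         profit_sharing_update(episode, reward); break
```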
3744 Investigation into the Role of Leadership in the Management of Digital Transformation for Small and Medium Enterprises
Authors: Francesco Coraci, Abdul-Hadi G. Abulrub
Abstract:
Digital technology is transforming the landscape of the industrial sector at an unprecedented pace by connecting people, processes, and machines in real time. It represents the means for a new pathway to achieve innovative, dynamic competitive advantages, deliver unique customer value, and sustain critical relationships. Thus, success in a constantly changing environment is governed by the ability of an organization to revolutionize its business models, deliver innovative solutions, and capture value from big data analytics and insights. Businesses need to re-strategize operations and develop extra capabilities to cope with the necessity for additional flexibility and agility. The traditional “command and control” leadership style is structurally and operationally incompatible with the digital era. In this paper, the authors discuss how transformational leaders can act as a glue in the social, organizational context, which is crucial to enable the workforce and develop a psychological attachment to the digital vision.
Keywords: Internet of things, strategy, change leadership, dynamic competitive advantage, digital transformation.
3743 Removal of Volatile Organic Compounds from Contaminated Surfactant Solution Using Co-Current Vacuum Stripping
Authors: Pornchai Suriya-Amrit, Suratsawadee Kungsanant, Boonyarach Kitiyanan
Abstract:
There has been a growing interest in utilizing surfactants in remediation processes to separate hydrophobic volatile organic compounds (HVOCs) from aqueous solution. One attractive process is cloud point extraction (CPE), which utilizes nonionic surfactants as a separating agent. Since the surfactant cost is a key determinant of the economic viability of the process, it is important that the surfactants are recycled and reused. This work aims to study the performance of co-current vacuum stripping using a packed column for HVOC removal from contaminated surfactant solution. Six types of HVOCs are selected as contaminants. The studied surfactant is a branched secondary alcohol ethoxylate (AE), Tergitol TMN-6 (C14H30O2). The volatility and the solubility of the HVOCs in the surfactant system are determined in terms of an apparent Henry’s law constant and a solubilization constant, respectively. Moreover, the HVOC removal efficiency of the vacuum stripping column is assessed in terms of the percentage of HVOC removal and the overall liquid-phase volumetric mass transfer coefficient. The apparent Henry’s law constants of benzene, toluene, and ethyl benzene were 7.00×10⁻⁵, 5.38×10⁻⁵, and 3.35×10⁻⁵, respectively. The solubilization constants of benzene, toluene, and ethyl benzene were 1.71, 2.68, and 7.54, respectively. The HVOC removal for all solutes was around 90 percent.
Keywords: Apparent Henry’s law constant, Branched secondary alcohol ethoxylates, Vacuum Stripping.
3742 Topic Modeling Using Latent Dirichlet Allocation and Latent Semantic Indexing on South African Telco Twitter Data
Authors: Phumelele P. Kubheka, Pius A. Owolawi, Gbolahan Aiyetoro
Abstract:
Twitter is one of the most popular social media platforms where users share their opinions on different subjects. Twitter can be considered a great source for text mining due to the high volumes of data generated through the platform daily. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. Results from this study show that LSI is much faster than LDA. However, LDA yields better results, with topic coherence higher by 8% for the best-performing model in this experiment. A higher topic coherence score indicates better performance of the model.
Keywords: Big data, latent Dirichlet allocation, latent semantic indexing, Telco, topic modeling, Twitter.
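LDA/LSI comparisons of this kind are commonly run with the gensim library. A minimal sketch, assuming a pre-tokenized set of tweets; the tweets, topic count and hyperparameters below are placeholders, not the study's dataset or settings:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, LsiModel, CoherenceModel

# Hypothetical pre-tokenized tweets about telcos (illustrative only)
tweets = [
    ["network", "down", "again", "no", "signal"],
    ["great", "fibre", "speed", "upgrade"],
    ["data", "bundle", "expired", "too", "soon"],
    ["customer", "service", "slow", "response"],
]

dictionary = Dictionary(tweets)
corpus = [dictionary.doc2bow(doc) for doc in tweets]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10,
               random_state=42)
lsi = LsiModel(corpus=corpus, id2word=dictionary, num_topics=2)

# c_v topic coherence, the usual basis for the kind of comparison reported above
for name, model in [("LDA", lda), ("LSI", lsi)]:
    cm = CoherenceModel(model=model, texts=tweets, dictionary=dictionary,
                        coherence="c_v")
    print(name, "coherence:", round(cm.get_coherence(), 3))
```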
3741 Adjustment of a PET Scanner for PEPT
Authors: Alireza Sadrmomtaz
Abstract:
Positron emission particle tracking (PEPT) is a technique in which a single radioactive tracer particle can be accurately tracked as it moves. A limitation of PET is that, in order to reconstruct a tomographic image, it is necessary to acquire a large volume of data (millions of events), so it is difficult to study rapidly changing systems. PEPT, by contrast, is a very fast process compared with PET. In PEPT, detection of both photons defines a line, and the annihilation is assumed to have occurred somewhere along this line. The location of the tracer can be determined to within a few mm from the coincident detection of a small number of pairs of back-to-back gamma rays, using triangulation. This can be achieved many times per second, and the track of a moving particle can be reliably followed. This technique was invented at the University of Birmingham [1]. The aim in PEPT is not to form an image of the tracer particle but simply to determine its location over time. If this tracer is followed for a long enough period within a closed, circulating system, it explores all possible types of motion. The application of PEPT to industrial process systems carried out at the University of Birmingham falls into two areas: the behaviour of granular materials and of viscous fluids. Granular materials are processed in industry, for example in the manufacture of pharmaceuticals, ceramics, food and polymers, and PEPT has been used in a number of ways to study the behaviour of these systems [2]. PEPT allows the possibility of tracking a single particle within the bed [3]. PEPT has also been used for studying systems such as fluid flow and viscous fluids in mixers [4], using a neutrally buoyant tracer particle [5].
Keywords: PET, BGO, Particle Tracking, ECAT 931, List mode, PEPT.
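The triangulation step described above can be illustrated as a least-squares intersection of a few lines of response: the tracer estimate is the point minimizing the summed squared perpendicular distance to all lines. This is a minimal sketch; the detector geometry and numbers are made up, and the iterative discarding of corrupt events used in practice is omitted.

```python
import numpy as np

def locate_tracer(points, directions):
    """points[i], directions[i]: a point on line i and its (unit) direction."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Three fake lines of response that nearly intersect near (10, 20, 5) mm
points = np.array([[0.0, 20.0, 5.0], [10.0, 0.0, 5.2], [9.8, 20.3, 0.0]])
directions = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print("estimated tracer position:", locate_tracer(points, directions))
```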
3740 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which includes a difficult conformal mapping procedure, can be applied to simple and symmetrical problems only. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on numerical data simulated with the SEEP/W software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
Keywords: Artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis.
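The linked simulation-optimization idea, an ANN surrogate standing in for the seepage simulator inside a GA search, can be sketched as follows. Everything below (the stand-in "simulator", the design variables, costs and safety limit) is an illustrative assumption, not the authors' SEEP/W model or formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# --- 1) Surrogate: ANN trained on (hypothetical) numerical seepage results ---
# Design variables: cutoff-wall depth d [m] and floor length L [m].
def fake_seepage_sim(d, L):
    return 0.9 / (1.0 + 0.15 * d + 0.02 * L)      # exit-gradient proxy (assumed)

X = rng.uniform([2, 20], [15, 80], size=(200, 2))
y = np.array([fake_seepage_sim(d, L) for d, L in X])
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

# --- 2) GA: minimize construction cost with a penalty on unsafe designs -----
COST = np.array([40.0, 12.0])          # cost per metre of cutoff and floor (assumed)
I_SAFE = 0.25                          # allowable exit gradient (assumed)

def fitness(pop):
    cost = pop @ COST
    grad = ann.predict(pop)
    return cost + 1e4 * np.maximum(0.0, grad - I_SAFE)   # penalized cost

pop = rng.uniform([2, 20], [15, 80], size=(40, 2))
for _ in range(60):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:20]]                       # truncation selection
    children = (parents[rng.integers(0, 20, 20)] +
                parents[rng.integers(0, 20, 20)]) / 2.0     # arithmetic crossover
    children += rng.normal(0, [0.3, 1.0], children.shape)   # mutation
    children = np.clip(children, [2, 20], [15, 80])
    pop = np.vstack([parents, children])

best = pop[np.argmin(fitness(pop))]
print("optimum cutoff depth, floor length:", best.round(2))
```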
3739 The SOCI Strategy as a Method to Meet the Innovation Challenges of COVID-19
Authors: Victoria Wolf, Renata Dobrucka, Robert Prezkop, Stephan Haubold
Abstract:
COVID-19 has caused a worldwide crisis and has an impact on every dimension of the economy. Organizations with the ability to adapt to new developments and to innovate solutions for the disrupted world during and after the Corona crisis have the opportunity not only to survive the crisis but to use new trends to implement new business models and gain advantage. In this context, startups seem to have better opportunities to manage the Corona crisis through their innovation-based nature. The main result of this paper is the understanding that, by applying a startup-oriented innovation (SOCI) strategy, established companies can be motivated to meet the challenge of COVID-19 in a similar way to startups. This result is achieved by describing the role of innovation and of a SOCI strategy as helpful methods for organizations to meet the coming challenges during and after the COVID-19 epidemic. In addition, this paper presents a practical application of SOCI through the PANDA approach of the Fresenius University of Applied Sciences in Germany and discusses it in the context of COVID-19 as an exemplary successful real-world implementation of a SOCI strategy.
Keywords: COVID-19, innovation, open innovation, startup, SOCI framework.
3738 Visualization and Indexing of Spectral Databases
Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi
Abstract:
On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra using nearest neighbor algorithms and distance-based searching methods. Searching for nearest neighbors in the spectral space is computationally expensive; the complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirement of estimation algorithms by providing a two-dimensional indexing that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques, but also enables the utilization of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction does not have to use the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.
Keywords: Indexing high-dimensional databases, dimensionality reduction, clustering, similarity, k-NN algorithm.
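The combination of a 2D mapping, a Delaunay tessellation of the mapped points, and prediction in the mapped space can be sketched as follows. PCA is used here only as a simple stand-in for the paper's mapping, and the spectra and property values are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# Hypothetical spectral database: 300 spectra, 100 channels, one measured
# product property each (random stand-in data)
spectra = rng.normal(size=(300, 100))
prop = spectra[:, :10].mean(axis=1)

# 2D mapping of the database, then a Delaunay tessellation as the 2D index
pca = PCA(n_components=2).fit(spectra)
xy = pca.transform(spectra)
tri = Delaunay(xy)

def predict_property(new_spectrum):
    """Interpolate the property from the vertices of the containing triangle."""
    q = pca.transform(new_spectrum[None, :])[0]
    s = int(tri.find_simplex(q))
    if s == -1:                      # outside the hull: flag as a possible outlier
        return None
    verts = tri.simplices[s]
    b = tri.transform[s, :2] @ (q - tri.transform[s, 2])
    bary = np.append(b, 1.0 - b.sum())   # barycentric weights of the 3 vertices
    return float(bary @ prop[verts])

print(predict_property(spectra[0] + 0.01 * rng.normal(size=100)))
```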
3737 Interpreting Chopin’s Music Today: Mythologization of Art: Kitsch
Authors: Ilona Bala
Abstract:
The subject of this abstract is related to the notion of 'popular music', a notion that should be treated with extreme care, particularly when applied to Frederic Chopin, one of the greatest composers of Romanticism. By 'popular music', we mean a category of everyday music, set against the more intellectual kind referred to as 'classical'. We only need to look back to the culture of the nineteenth century to realize that this 'popular music' refers to the 'music of the low'. It can be studied from a sociological viewpoint, or as sociological aesthetics. However, we cannot ignore the fact that this music very quickly spread to the wealthiest strata of nineteenth-century European society, while the lowest classes likewise often listened to intellectual classical music, which is so pleasant to hear. Further, we can observe that a sort of 'sacralisation of kitsch' occurs at the intersection between classical and popular music. This process is the topic of this contribution. We will start by investigating the notion of kitsch through the study of Chopin's popular compositions. However, before considering the popularisation of this music in today's culture, we will have to focus on the use of the word kitsch in Chopin's times, through his own musical aesthetics. Finally, the objective here will be to negate the theory that art is simply the intellectual definition of aesthetics. Kitsch can, obviously, only work on the emotivity of the masses, as it represents one of the features of culture-language (the words with which the masses identify). All art is transformed, becoming something outdated or even outmoded. Here, we are truly within a process of mythologization of art, through the study of the aesthetic reception of the musical work.
Keywords: F. Chopin, musical work, popular music, romantic music, mythologization of art, kitsch.
3736 Land Layout and Urban Design of New Cities in Underdeveloped Areas of China: A Case Study of Xixian New Area
Authors: Libin Ouyang
Abstract:
China has experienced a very fast urbanization process in the past two decades. Due to the uncoordinated character of regional development in China, a large number of people from rural areas and small towns have flooded into regional central cities, which are building new cities around them due to the shortage of construction land or the need for urban development. However, the construction of some of these new cities has not achieved the expected effect: their capacity to absorb industry and population is limited, and the waste of capital and land is obvious. This paper takes Xixian New Area in Shaanxi Province, an inland area in Northwest China, as an example and tries to analyze the reasons for the lack of vitality in Xixian New Area from the perspectives of land use layout and urban design. The paper also selects the Energy-Finance-Trade Start-up Area in Xixian New Area as an important research site and studies how to optimize the land use layout and urban design to relieve the population pressure on big cities, effectively solve big-city problems, improve the vitality and attractiveness of the new city, and promote its sustainable development. The study can provide a reference for urban planning practitioners and policy makers, theoretical help for the construction of new cities in other underdeveloped regions of China, and case references for the construction of cities in other developing countries undergoing rapid urbanization.
Keywords: New city, land use layout, urban design, urban planning.
3735 A Robust Optimization Method for Service Quality Improvement in Health Care Systems under Budget Uncertainty
Authors: H. Ashrafi, S. Ebrahimi, H. Kamalzadeh
Abstract:
With the development of business competition, it is important for healthcare providers to improve their service quality. In order to improve the service quality of a clinic, four important dimensions are defined: tangibles, responsiveness, empathy, and reliability. Moreover, there are several service stages in hospitals, such as financial screening and examination. One of the most challenging limitations on improving service quality is the budget, which strongly affects the service quality. In this paper, we present an approach to address budget uncertainty and provide guidelines for service resource allocation. A service quality improvement approach is proposed which can be applied to multistage service processes to improve service quality while controlling costs. A multi-objective function based on the importance of each area and dimension is defined to link operational variables to service quality dimensions. The results demonstrate that our approach is not ultra-conservative and that it reflects the actual conditions very well. Moreover, it is shown that different strategies can affect the number of employees in different stages.
Keywords: Service quality assessment, healthcare resource allocation, robust optimization, budget uncertainty.
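Budget uncertainty of the kind discussed above is often handled with a budget-of-uncertainty (Bertsimas-Sim style) formulation. The following is a generic staffing sketch under that formulation, not the authors' exact model; the stages, weights, costs and bounds are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Choose staff levels x_j per service stage to maximize a weighted quality
# score while the per-staff cost c_j may deviate by up to d_j, with at most
# Gamma costs deviating simultaneously (robust budget constraint).
w = np.array([4.0, 3.0, 5.0])      # quality weight of each stage (assumed)
c = np.array([50.0, 40.0, 60.0])   # nominal cost per staff member (assumed)
d = np.array([10.0, 8.0, 15.0])    # maximum cost deviation (assumed)
u = np.array([10.0, 10.0, 10.0])   # staffing upper bounds
B, Gamma = 600.0, 2                # budget and budget of uncertainty

n = len(w)
# Decision vector: [x (n), p (n), z (1)]
obj = np.concatenate([-w, np.zeros(n), [0.0]])      # maximize w.x

A_ub = np.zeros((1 + n, 2 * n + 1))
b_ub = np.zeros(1 + n)
A_ub[0, :n] = c                     # c.x + sum(p) + Gamma*z <= B
A_ub[0, n:2 * n] = 1.0
A_ub[0, -1] = Gamma
b_ub[0] = B
for j in range(n):                  # d_j*x_j - p_j - z <= 0
    A_ub[1 + j, j] = d[j]
    A_ub[1 + j, n + j] = -1.0
    A_ub[1 + j, -1] = -1.0

bounds = [(0, uj) for uj in u] + [(0, None)] * n + [(0, None)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("robust staffing per stage:", np.round(res.x[:n], 2))
```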
3734 Sensitivity of the SHARC Model to Variations of Manning Coefficient and Effect of “n“ on the Sediment Materials Entry into the Eastern Water Intake: A Case in the Dez Diversion Weir in Iran
Authors: M.R.Mansoujian, A.Rohani, N.Hedayat , M.Qamari, M. Osroosh
Abstract:
Permanent rivers are the main sources of renewable water supply for croplands under irrigation and drainage schemes. They are also the major source of the sediment loads transported into the storage reservoirs of hydro-electric dams, diversion weirs and regulating dams. The sedimentation process results from soil erosion, which is related to poor watershed management and human intervention in the hydraulic regime of the rivers. These can change the hydraulic behavior and, as such, lead to riverbed and river bank scouring, the consequence of which is sediment load transport into the dams and therefore a reduced flow discharge at the water intakes. The present paper investigates the sedimentation process by varying the Manning coefficient "n" using the SHARC software along a watercourse of the Dez River. Results indicated that the optimum "n" within that river range is 0.0315, at which minimum sediment loads are transported into the Eastern intake. Comparison of the model results with those obtained from the SSIIM software within the same river reach showed a very close proximity between them. This suggests the relative accuracy with which the model can simulate the hydraulic flow characteristics and therefore its suitability as a powerful analytical tool for project feasibility studies and project implementation.
Keywords: Sediment transport, Manning coefficient, Eastern Intake, SHARC, Dez River.
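The sensitivity of the computed flow to "n" follows directly from Manning's equation, V = (1/n)·R^(2/3)·S^(1/2): for a fixed geometry and slope, discharge scales with 1/n, which in turn shifts the sediment-carrying capacity. A small illustration (the channel dimensions and slope are assumed, not Dez River data):

```python
def manning_discharge(n, width, depth, slope):
    """Discharge Q [m^3/s] for a rectangular channel (SI units)."""
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    hydraulic_radius = area / wetted_perimeter
    velocity = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
    return area * velocity

for n in (0.025, 0.0315, 0.040):
    q = manning_discharge(n, width=120.0, depth=3.0, slope=0.0004)
    print(f"n = {n:.4f}  ->  Q = {q:7.1f} m^3/s")
```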
3733 Establishing a Change Management Model for Precision Machinery Industry in Taiwan
Authors: Feng-Tsung Cheng, Shu-Li Wang, Mei-Fang Wu, Hui-Yu Chuang
Abstract:
Rapidly developing technology and the widespread Internet are changing the business environment considerably. In order to stand in the global market and to survive, "change" is an unspoken rule of a company's survival. The purpose of this paper is to build up a change model by using SWOT analysis, strategy maps, KPIs and change management theory. The research findings indicate that the company needs to deal with employees' resistance first, before building up the change model. Providing performance appraisal rewards and consulting and counseling mechanisms will greatly help to reduce staff's negative emotions and motivate their efficiency. Based on the change model, the strategy map is revised, the corporate culture modified, and internal operational processes improved. Through the change model, the increasing growth rate of net income helps the company to achieve its goals and become a leading brand of the precision machinery industry.
Keywords: Organizational change, SWOT analysis, strategy maps, performance indicators.
3732 A Comparative Analysis of Multiple Criteria Decision Making Analysis Methods for Strategic, Tactical, and Operational Decisions in Military Fighter Aircraft Selection
Authors: C. Ardil
Abstract:
This paper considers a comparative analysis of multiple criteria decision making analysis methods for strategic, tactical, and operational decisions in military fighter aircraft selection for air force fleet planning. The evaluation criteria governing the decision analysis process are determined from the literature for three existing military combat aircraft. The military fighter aircraft selection problem is structured using the "preference analysis for reference ideal solution (PARIS)" approach in multiple criteria decision making analysis (MCDMA). Systematic comparisons were made with existing MCDMA methods (PARIS and TOPSIS) to verify the stability and accuracy of the results obtained. The proposed integrated MCDMA systematic approach is expected to address the issues encountered in the aircraft selection process. The comparative analysis results show that the proposed method is an effective and accurate tool that can help analysts make better strategic, tactical, and operational decisions.
Keywords: aircraft, military fighter aircraft selection, multiple criteria decision making, multiple criteria decision making analysis, mean weight, entropy weight, MCDMA, PARIS, TOPSIS, Saab Gripen, Dassault Rafale, Eurofighter Typhoon
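The TOPSIS step used in the comparison above can be illustrated with a compact sketch; the alternatives, criteria, values and weights below are placeholders, not the paper's evaluation data.

```python
import numpy as np

alternatives = ["Aircraft A", "Aircraft B", "Aircraft C"]
# criteria: [combat radius km, payload kg, unit cost M$, maintenance hrs/flight-hr]
X = np.array([[1500.0, 6500.0,  85.0, 10.0],
              [1850.0, 9500.0, 115.0, 12.0],
              [1300.0, 7500.0,  95.0,  9.0]])
weights = np.array([0.3, 0.3, 0.25, 0.15])
benefit = np.array([True, True, False, False])   # cost criteria are minimized

# 1) vector normalization, 2) weighting
V = (X / np.linalg.norm(X, axis=0)) * weights

# 3) ideal and anti-ideal solutions
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4) distances and closeness coefficient (higher is better)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: closeness = {c:.3f}")
```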
3731 Inferring User Preference Using Distance Dependent Chinese Restaurant Process and Weighted Distribution for a Content Based Recommender System
Authors: Bagher Rahimpour Cami, Hamid Hassanpour, Hoda Mashayekhi
Abstract:
Nowadays, websites provide a vast number of resources for users. Recommender systems have been developed as an essential element of these websites to provide a personalized environment for users. They help users to retrieve the resources they are interested in from large sets of available resources. Due to the dynamic nature of user preference, constructing an appropriate model to estimate the user preference is the major task of recommender systems. Profile matching and latent factors are two main approaches to identify user preference. In this paper, we employed latent factors and profile matching to cluster the user profile and identify user preference, respectively. The method uses the Distance Dependent Chinese Restaurant Process as a Bayesian nonparametric framework to extract the latent factors from the user profile. These latent factors are mapped to user interests, and a weighted distribution is used to identify user preferences. We evaluate the proposed method using a real-world dataset that contains news tweets of a news agency (BBC). The experimental results and comparisons show the superior recommendation accuracy of the proposed approach relative to existing methods, and its ability to effectively evolve over time.
Keywords: Content-based recommender systems, dynamic user modeling, extracting user interests, predicting user preference.
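The distance-dependent Chinese Restaurant Process prior underlying the clustering step can be sketched as follows: each item links to another item with probability proportional to a decay of their distance (or to itself with mass alpha), and clusters are the connected components of the link graph. This shows only a prior draw with an assumed exponential decay and toy distances, not the authors' full inference or weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

def ddcrp_assignments(D, alpha=1.0, decay=lambda d: np.exp(-d)):
    n = D.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        p = decay(D[i]).astype(float)
        p[i] = alpha                      # self-link starts a new table
        links[i] = rng.choice(n, p=p / p.sum())

    # clusters = connected components of the (undirected) link graph
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        while stack:
            j = stack.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            stack.append(links[j])                    # follow the outgoing link
            stack.extend(np.where(links == j)[0])     # and incoming links
        cluster += 1
    return labels

# Toy distance matrix between 6 user-profile items
pts = rng.normal(size=(6, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
print("cluster labels:", ddcrp_assignments(D))
```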
3730 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests
Authors: Julius Onyancha, Valentina Plekhanova
Abstract:
One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information in relation to their dynamic interests. Current research works consider noise as any data that do not form part of the main web page and propose noise web data reduction tools which mainly focus on eliminating noise in relation to the content and layout of web data. This paper argues that not all data that form part of the main web page are of interest to a user, and that not all noise data are actually noise to a given user. Therefore, learning of the noise web data allocated to the user requests ensures not only a reduction of the noisiness level in a web user profile, but also a decrease in the loss of useful information, and hence improves the quality of a web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design setup is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Keywords: Web log data, web user profile, user interest, noise web data learning, machine learning.
3729 Multidimensional Compromise Programming Evaluation of Digital Commerce Websites
Authors: C. Ardil
Abstract:
Multidimensional compromise programming evaluation of digital commerce websites is essential not only to obtain recommendations for improvement, but also to make comparisons with global business competitors. This research provides a multidimensional decision making model that prioritizes the objective criteria weights of various commerce websites using a multidimensional compromise solution. Evaluation of digital commerce website quality can be considered a complex information system problem including qualitative and quantitative factors for multicriteria decision making. The proposed multicriteria decision making approach mainly consists of three sequential steps for the selection problem. In the first step, three major evaluation criteria are characterized for the website ranking problem. In the second step, the identified critical criteria are weighted using the standard deviation procedure. In the third step, multidimensional compromise programming is applied to rank the digital commerce websites.
Keywords: Standard deviation, commerce website, website evaluation, multicriteria decision making, multicriteria compromise programming, website quality, multidimensional decision analysis.
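The three steps listed above (criteria, standard-deviation weighting, compromise ranking) can be sketched end to end with made-up numbers; the websites, criteria values and the choice of p are illustrative, not the study's data.

```python
import numpy as np

websites = ["Site A", "Site B", "Site C", "Site D"]
# criteria: [usability, fulfilment, security] scores (benefit-type, assumed)
X = np.array([[0.82, 0.75, 0.90],
              [0.78, 0.88, 0.70],
              [0.91, 0.66, 0.85],
              [0.70, 0.80, 0.80]])

# 1) min-max normalization of each criterion
N = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 2) standard-deviation weighting: criteria that discriminate more get more weight
sd = N.std(axis=0)
w = sd / sd.sum()

# 3) Lp compromise-programming distance to the ideal point (1.0 after normalization)
p = 2
L = (np.sum((w * (1.0 - N)) ** p, axis=1)) ** (1.0 / p)

for name, dist in sorted(zip(websites, L), key=lambda t: t[1]):
    print(f"{name}: compromise distance = {dist:.3f}  (smaller is better)")
```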
3728 Assessment Power and Frequency Oscillation Damping Using POD Controller and Proposed FOD Controller
Authors: Yahya Naderi, Tohid Rahimi, Babak Yousefi, Seyed Hossein Hosseini
Abstract:
Today’s modern interconnected power system is highly complex in nature, and among the most important requirements during its operation are reliability and security. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) against major faults such as a three-phase short circuit, FACTS devices, which can control the network condition very quickly, are becoming popular. However, the capability of FACTS devices in the presence of a major fault can only be seen when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, the model of a multi-machine power system with a FACTS controller is developed in MATLAB/SIMULINK using the Sim Power Systems (SPS) blockset. Among FACTS devices, the Static Synchronous Series Compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. Tuning of the controller parameters can be performed using different methods, but the capability of the Genetic Algorithm (GA) makes it attractive for the controller parameter tuning process. In this paper, a POD controller is first used for power oscillation damping. In this configuration, however, the frequency oscillation is not properly damped, so an FOD controller tuned using GA is applied, which damps out the frequency oscillation properly while the power oscillation damping remains satisfactory.
Keywords: Power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA).
3727 Relationship between Personality Traits and Postural Stability among Czech Military Combat Troops
Authors: K. Rusnakova, D. Gerych, M. Stehlik
Abstract:
Postural stability is a complex process involving the actions of biomechanical, motor, sensory and central nervous system components. The numerous joint systems and muscles involved and the complexity of sporting movements and situations require perfect coordination of the body's movement patterns. To adapt to a constantly changing situation in such a dynamic environment as physical performance, optimal input of information from visual, vestibular and somatosensory sensors is needed. Combat soldiers are required to perform physically and mentally demanding tasks in adverse conditions, and poor postural stability has been identified as a risk factor for lower extremity musculoskeletal injury. The aim of this study is to investigate whether certain personality traits are related to static postural stability performance among soldiers of combat troops. The NEO personality inventory (NEO-PI-R) was used to identify personality traits, and the Nintendo Wii Balance Board was used to assess the static postural stability of soldiers. Postural stability performance was assessed by changes in the center of pressure (CoP) and center of gravity (CoG). A posturographic test was performed for 60 s with eyes open during quiet upright standing. The results showed that facets of the neuroticism and conscientiousness personality traits were significantly correlated with the measured CoP and CoG parameters. This study can help in better understanding the relationship between personality traits and static postural stability. The results can be used to optimize the training process at the individual level.
Keywords: Neuroticism, conscientiousness, postural stability, combat troops.
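An analysis of this kind typically reduces each CoP trace to a sway metric and correlates it with trait scores. A minimal sketch with synthetic signals (the sampling rate, metric choice and all data below are assumptions, not the study's recordings):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
fs, duration, n_subjects = 100, 60, 25          # Hz, seconds, participants

def cop_path_length(cop_xy):
    """Total CoP path length over the trial (same units as the input)."""
    steps = np.diff(cop_xy, axis=0)
    return np.sum(np.linalg.norm(steps, axis=1))

neuroticism = rng.normal(50, 10, n_subjects)    # synthetic T-scores
path_lengths = []
for score in neuroticism:
    # synthetic CoP trace whose sway loosely scales with the trait score
    noise = rng.normal(0, 0.05 + 0.001 * score, size=(fs * duration, 2))
    cop = np.cumsum(noise, axis=0)
    path_lengths.append(cop_path_length(cop))

r, p = pearsonr(neuroticism, path_lengths)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```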
3726 Optimal Construction Using Multi-Criteria Decision-Making Methods
Authors: Masood Karamoozian, Zhang Hong
Abstract:
The necessity and complexity of the decision-making process, the interference of various factors in making decisions, and the need to consider all relevant factors in a problem are very obvious nowadays. Hence, researchers show increasing interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria decision-making methods have been used to solve the problem of selecting an optimal construction system. The systems evaluated in this problem include Light Steel Frames (LSF), a case study of designs by the Zhang Hong studio at the Southeast University of Nanjing; Insulating Concrete Forms (ICF); the Ordinary Construction System (OCS); and the Precast Concrete System (PRCS), another case study of designs from the Zhang Hong studio at the Southeast University of Nanjing. Crowdsourcing was done using a questionnaire at the sample level (200 people). Questionnaires were distributed among experts in university centers and at conferences. According to the results of the research, the use of the different decision-making methods led to largely the same results. With all three multi-criteria decision-making methods mentioned above, the PRCS was ranked first and the LSF system second. Also, the PRCS was ranked first in terms of performance standards and economics, while the LSF system was ranked first in terms of environmental standards.
Keywords: Multi-criteria decision making, AHP, SAW, TOPSIS.
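The AHP part of such a comparison can be illustrated by deriving criteria weights from a pairwise comparison matrix via the principal eigenvector and checking the consistency ratio; the judgements below are illustrative, not the survey results.

```python
import numpy as np

criteria = ["cost", "performance", "environment"]
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])   # hypothetical pairwise judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority vector (criteria weights)

n = A.shape[0]
lam_max = eigvals[k].real
CI = (lam_max - n) / (n - 1)                   # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
CR = CI / RI                                   # acceptable if below ~0.10

print("weights:", dict(zip(criteria, np.round(w, 3))))
print(f"consistency ratio CR = {CR:.3f}")
```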
3725 Application of Powder Metallurgy Technologies for Gas Turbine Engine Wheel Production
Authors: Liubov Magerramova, Eugene Kratt, Pavel Presniakov
Abstract:
A detailed analysis has been performed for several schemes of gas turbine wheel production based on additive and powder technologies, including metal, ceramic, and stereolithography 3D printing. During the process of development and debugging of gas turbine engine components, different versions of these components must be manufactured and tested. Cooled turbine blades are among these components. They are usually produced by traditional casting methods, which require the long and costly design and manufacture of casting molds. Moreover, traditional manufacturing methods limit the design possibilities for complex critical parts of the engine, so the capabilities of Powder Metallurgy Techniques (PMT) were analyzed to manufacture the turbine wheel with air-cooled blades. PMT dramatically reduces the time needed for such production and allows creating new complex design solutions aimed at improving the technical characteristics of the engine: improving fuel efficiency and environmental performance, increasing reliability, and reducing weight. To accelerate and simplify the blade manufacturing process, several options based on additive technologies were used. The options were implemented in the form of various casting equipment for the manufacturing of the blades. Methods of powder metallurgy were applied for connecting the blades with the disc. The optimal production scheme and a set of technologies for the manufacturing of the blades, the turbine wheel and other parts of the engine can be selected on the basis of the options considered.
Keywords: Additive technologies, gas turbine engine, powder technology, turbine wheel.
3724 A Real-Time Simulation Environment for Avionics Software Development and Qualification
Authors: U. Tancredi, D. Accardo, M. Grassi, G. Fasano, A. E. Tirri, A. Vitale, N. Genito, F. Montemari, L. Garbarino
Abstract:
The development of guidance, navigation and control algorithms and avionic procedures requires the availability of suitable analysis and verification tools, such as simulation environments, which support the design process and allow detecting potential problems prior to the flight test, in order to make new technologies available at reduced cost, time and risk. This paper presents a simulation environment for avionic software development and qualification, especially aimed at equipment for general aviation aircraft and unmanned aerial systems. The simulation environment includes models for short and medium-range radio-navigation aids, flight assistance systems, and ground control stations. All the software modules are able to simulate the modeled systems in both fast-time and real-time tests, and were implemented following component-oriented modeling techniques and a requirement-based approach. The paper describes the specific features of the models, the architectures of the implemented software systems and their validation process. The validation tests performed highlighted the capability of the simulation environment to guarantee in real time the required functionalities and performance of the simulated avionics systems, as well as to reproduce the interaction between these systems, thus permitting a realistic and reliable simulation of a complete mission scenario.
Keywords: ADS-B, avionics, NAVAIDs, real time simulation, TCAS, UAS ground control station.
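The fast-time versus real-time execution mentioned above comes down to fixed-step scheduling against wall-clock time. A minimal soft real-time loop sketch (the model, update rate and frame count are placeholders, not the environment's actual modules or rates):

```python
import time

DT = 0.02          # 50 Hz frame, an assumed update rate

class NavAidModel:
    def __init__(self):
        self.bearing_deg = 0.0
    def step(self, dt):
        # stand-in dynamics: a slowly rotating bearing to some station
        self.bearing_deg = (self.bearing_deg + 3.0 * dt) % 360.0

def run(model, frames=250, real_time=True):
    next_deadline = time.monotonic()
    for _ in range(frames):
        model.step(DT)
        next_deadline += DT
        if real_time:
            delay = next_deadline - time.monotonic()
            if delay > 0:
                time.sleep(delay)          # pace the loop to wall-clock time
        # with real_time=False the same models run in fast time

run(NavAidModel(), frames=50)
```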