1188 Disturbances of the Normal Operation of Kosovo Power System Regarding Atmospheric Discharges
Authors: B. Prebreza, I. Krasniqi, G. Kabashi, G. Pula, N. Avdiu
Abstract:
This paper discusses outages in the electric transmission network of the Kosovo Power System caused by atmospheric discharges. The frequency and location of atmospheric discharges over the territory of Kosovo are provided by the ALARM (Automated Lightning Alert and Risk Management) lightning location system and by data from the Meteorological Department at Prishtina International Airport. These data are compared with the actual outages registered in the Kosovo Power System by the Kosovo Transmission, System and Market Operator (KOSTT) over a specific time period. The lines with the worst performance with respect to atmospheric discharges are selected for further discussion of the overvoltages caused by direct or indirect lightning strokes. Recommendations for protection in terms of insulation coordination and surge arresters are given at the end, supported at this stage by dynamic simulations.
Keywords: Atmospheric discharges, dynamic simulations, Kosovo Power System, surge arresters.
1187 GA Based Optimal Feature Extraction Method for Functional Data Classification
Authors: Jun Wan, Zehua Chen, Yingwu Chen, Zhidong Bai
Abstract:
Classification is an interesting problem in functional data analysis (FDA), because many science and application problems end up as classification problems, such as recognition, prediction, control, decision making, and management. Owing to the high dimensionality and high correlation of functional data (FD), extracting features from FD while keeping its global characteristics is a key problem, one that strongly determines classification efficiency and precision. In this paper, a novel automatic method that combines a Genetic Algorithm (GA) with a classification algorithm to extract classification features is proposed. In this method, the optimal features and the classification model are approached step by step through evolutionary search. Theoretical analysis and experimental tests show that this method improves classification efficiency, precision, and robustness while using fewer features, and that the dimension of the extracted classification features can be controlled.
Keywords: Classification, functional data, feature extraction, genetic algorithm, wavelet.
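The abstract leaves the GA details open; as a hedged illustration only (the binary-mask encoding, k-NN fitness, operator settings, and synthetic data below are assumptions, not the authors' implementation), GA-driven feature extraction for classification can be sketched as:

```python
# Illustrative GA feature selection: evolve binary masks over the features,
# score each mask by cross-validated accuracy minus a size penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum()          # penalize large feature sets

pop = rng.integers(0, 2, size=(20, X.shape[1]))       # random binary masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    cuts = rng.integers(1, X.shape[1], size=10)
    kids = np.array([np.r_[parents[i % 10][:c], parents[(i + 1) % 10][c:]]
                     for i, c in enumerate(cuts)])    # one-point crossover
    flip = rng.random(kids.shape) < 0.02              # bit-flip mutation
    kids[flip] ^= 1
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```

The size penalty in the fitness mirrors the paper's goal of using fewer features while controlling the dimension of the extracted feature set.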
1186 Optimal Route Policy in Air Traffic Control with Competing Airlines
Authors: Siliang Wang, Minghui Wang
Abstract:
This work proposes a novel market-based air traffic flow control model that considers competing airlines in an air traffic network. In the flow model, an agent-based framework for pricing resources (link/time pairs) is described, and a resource agent and an auctioneer for each group of resources are introduced to simulate flow management in Air Traffic Control (ATC). Secondly, a distributed group pricing algorithm is introduced, which efficiently reflects the competitive nature of the airline industry. Resources in the system are grouped according to their degree of interaction, and, as the demand for and supply of resources in the system change, each auctioneer adjusts the price of its group of resources until the excess demand for those resources becomes zero. Numerical simulation results show the feasibility of solving the air traffic flow control problem using market mechanisms and pricing algorithms on the air traffic network.
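The price-adjustment loop described above follows the classical tatonnement idea: raise the price of a resource group while its excess demand is positive, lower it while negative, and stop at zero. A minimal sketch (the linear demand curve, capacity, and step size are assumptions for illustration, not the paper's algorithm):

```python
# Tatonnement-style price adjustment for one resource group (illustrative).
def excess_demand(price, capacity=100.0, a=200.0, b=2.0):
    """Assumed linear demand a - b*price, minus a fixed capacity."""
    return (a - b * price) - capacity

def auctioneer(price=0.0, step=0.1, tol=1e-6, max_iter=10000):
    for _ in range(max_iter):
        z = excess_demand(price)
        if abs(z) < tol:
            break
        price += step * z        # raise price on excess demand, lower otherwise
    return price

print(auctioneer())  # converges to the market-clearing price, here 50.0
```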
Keywords: Air traffic control, nonlinear programming, market mechanism, route policy.
1185 Organic Agriculture Harmony in Nutrition, Environment and Health: Case Study in Iran
Authors: Sara Jelodarian
Abstract:
Organic agriculture is a kind of living and dynamic agriculture that was introduced in the early 20th century. The fundamental basis of organic agriculture is harmony with nature. This form of farming emphasizes removing growth hormones, chemical fertilizers, toxins, radiation, and genetic manipulation, and instead integrates modern scientific techniques (such as biological and microbial control) that lead to the production of healthy food and the preservation of the environment, together with the use of agricultural by-products such as forage and manure. Government support for markets producing organic products, and taking advantage of the experience of other societies that have succeeded in this field, can help advance the positive and effective aspects of this technology, especially in developing countries. This research projects that by 2030, 25% of global agricultural land will be covered by organic farming. Consequently, Iran, owing to its rich genetic resources and varied climates, can be a pioneer in promoting organic products. In addition, sustainable farming requires a blend of organic and other innovative systems; important limitations to the acceptance of these systems exist, and a diversity of policy instruments will be required to support their development and implementation. The paper is based on a compilation of reports, books, and articles related to the subject, gathered through library study and research, combined with experimental and survey data.
Keywords: Development, production markets, progress, strategic role, technology.
1184 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran
Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh
Abstract:
Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use existing data and the results of comparisons among the units under investigation to obtain an estimate of their performance. The Malmquist Productivity Index (MPI) is an important index in the evaluation of overall productivity, as it considers technological development and technical efficiency at the same time. This article proposes a model based on a multistage MPI that accounts for limited data using Grey's system theory. The model can evaluate the performance of units using limited and uncertain data in a multistage process. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which involves uncertain data, to evaluate the performance of its actors. Results from solving the model showed improved accuracy in estimating the future performance of the units under investigation when Grey's system theory is used. The model can be applied to any case study in which MPI is used and the data are limited or uncertain.
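For reference, the standard Malmquist Productivity Index between periods t and t+1, on which the multistage model builds, is the geometric mean

```latex
MPI = \left[
\frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
\cdot
\frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
\right]^{1/2}
```

where D^t(x, y) is the distance function (an efficiency score, obtainable for example from CCR-type DEA models) of the input-output pair (x, y) measured against the period-t frontier, and MPI > 1 indicates productivity growth. The paper's contribution lies in evaluating these distance functions in a multistage network setting under Grey (interval) data, which the standard formula above does not show.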
Keywords: Malmquist Index, Grey's Theory, Charnes Cooper & Rhodes (CCR) Model, network data envelopment analysis, Iran electricity power chain.
1183 Unsteady Flow of an Incompressible Elastico-Viscous Fluid of Second Order Type in a Tube of Ellipsoidal Cross Section on a Porous Boundary
Authors: Sanjay Baburao Kulkarni
Abstract:
An exact solution for the unsteady flow of an elastico-viscous fluid through a porous medium in a tube of ellipsoidal cross section under a constant pressure gradient is obtained in this paper. Initially, the flow is generated by a constant pressure gradient. After the steady state is attained, the pressure gradient is suddenly withdrawn, and the resulting fluid motion in the tube, taking into account the porosity of the bounding surface, is investigated. The problem is solved in two stages: the first stage is the steady motion in the tube under a constant pressure gradient; the second stage concerns the unsteady motion. The problem is solved using the separation of variables technique. The results are expressed in terms of a non-dimensional porosity parameter (K) and an elastico-viscosity parameter (β), which depends on the non-Newtonian coefficient. The flow parameters are found to be identical with those of the Newtonian case as the elastico-viscosity parameter tends to zero and the porosity tends to infinity. The elastico-viscosity parameter and the porosity parameter of the bounding surface are seen to have a significant effect on the velocity.
Keywords: Elastico-viscous fluid, Ellipsoidal cross-section, Porous media, Second order fluids.
1182 A Method for 3D Mesh Adaptation in FEA
Authors: S. Sfarni, E. Bellenger, J. Fortin, M. Guessasma
Abstract:
The use of mechanical simulation (in particular, finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions before the computations can be carried out and results obtained: building the physical model and building the simulation model. The simplification assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions, and loads) and mesh discretization errors. This paper proposes a mesh adaptation method based on an h-adaptive scheme in combination with an error estimator for choosing the mesh of the simulation model, allowing both the cost and the quality of the finite element analysis to be controlled.
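As a rough, hedged illustration of such an h-adaptive scheme (the solver, error indicator, and refine-worst-elements criterion below are placeholders, not the authors' estimator), the driving loop typically looks like this:

```python
# Skeleton of an h-adaptive loop (illustrative placeholders): solve,
# estimate the error per element, refine the worst elements, repeat.
def adaptive_solve(mesh, solve, estimate_error, refine, target, max_passes=10):
    for _ in range(max_passes):
        solution = solve(mesh)                   # FEA on the current mesh
        errors = estimate_error(mesh, solution)  # one indicator per element
        if max(errors) < target:
            break                                # quality goal reached
        flagged = [i for i, e in enumerate(errors) if e > 0.5 * max(errors)]
        mesh = refine(mesh, flagged)             # h-refine the flagged elements
    return mesh, solution

# Toy 1-D demo: elements are interval lengths, "error" ~ element size squared,
# and refinement halves each flagged element.
mesh, _ = adaptive_solve(
    mesh=[1.0, 1.0, 2.0],
    solve=lambda m: None,
    estimate_error=lambda m, s: [h * h for h in m],
    refine=lambda m, idx: [h / 2 for i, h in enumerate(m) if i in idx
                           for _ in (0, 1)]
                          + [h for i, h in enumerate(m) if i not in idx],
    target=0.05,
)
print(mesh)  # refined until every element's error indicator is below target
```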
Keywords: Finite element, discretization errors, adaptivity.
1181 Performance Evaluation of a Limited Round-Robin System
Authors: Yoshiaki Shikata
Abstract:
The performance of a limited Round-Robin (RR) rule is studied in order to clarify the characteristics of a realistic processor-sharing model. Under the limited RR rule, the processor allocates to each request a fixed amount of time, called a quantum, in a fixed order. The number of requests being allocated quanta is kept below a fixed value. Arriving requests that cannot be allocated quanta because of this restriction are queued or rejected. Practical performance measures, such as the relationships between the mean sojourn time, the mean number of requests, or the loss probability and the quantum size, are evaluated via simulation. In the evaluation, the requested service time of an arriving request is converted into a number of quanta. One of these quanta is served in each RR cycle, i.e., the series of quanta allocated to the requests in service in a fixed order. The service time of an arriving request can thus be evaluated from the number of RR cycles required to complete the service, the number of requests receiving service, and the quantum size. The number of quanta still required before a service completes is re-evaluated at each arrival or departure of other requests. Tracking these events and calculations enables us to analyze the performance of the limited RR rule. In particular, we obtain the most suitable quantum size, which minimizes the mean sojourn time, for the case in which the switching time of each quantum is taken into account.
Keywords: Limited RR rule, quantum, processor sharing, sojourn time, performance measures, simulation, loss probability.
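A minimal simulation sketch of the limited RR rule (the Poisson arrivals, exponential service times, and parameter values are illustrative assumptions; quantum switching time is ignored here) shows the quantum bookkeeping described above:

```python
# Event-stepped simulation of a limited Round-Robin processor (illustrative;
# pure-loss variant in which arrivals beyond the limit are rejected).
import random
random.seed(1)

QUANTUM, LIMIT = 0.1, 5          # quantum size, max requests sharing the CPU
LAMBDA, MEAN_SERVICE = 0.8, 1.0  # arrival rate, mean requested service time

in_service = []                  # [remaining quanta, arrival time] per request
clock, next_arrival = 0.0, random.expovariate(LAMBDA)
done = lost = 0
total_sojourn = 0.0

while done + lost < 100_000:
    if next_arrival <= clock:                        # handle a pending arrival
        quanta = max(1, round(random.expovariate(1 / MEAN_SERVICE) / QUANTUM))
        if len(in_service) < LIMIT:
            in_service.append([quanta, next_arrival])
        else:
            lost += 1                                # restriction: reject
        next_arrival += random.expovariate(LAMBDA)
    elif in_service:
        req = in_service.pop(0)                      # head of the RR cycle
        clock += QUANTUM                             # allocate one quantum
        req[0] -= 1
        if req[0] == 0:
            done += 1
            total_sojourn += clock - req[1]
        else:
            in_service.append(req)                   # rejoin at the tail
    else:
        clock = next_arrival                         # idle: jump to next arrival

print(f"mean sojourn time: {total_sojourn / done:.3f}")
print(f"loss probability:  {lost / (done + lost):.4f}")
```

Sweeping QUANTUM in such a simulation, with a switching overhead added per quantum, reproduces the kind of trade-off from which the most suitable quantum size is obtained.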
1180 A Tool for Rational Assessment of Dynamic Trust in Networked Organizations
Authors: Simon Samwel Msanjila
Abstract:
Networked environments that provide platforms for business organizations are configured in different forms depending on many factors, including lifetime, member characteristics, communication structure, and business objectives, among others. With continuing advances in digital technologies, distance has become less of a barrier to business-minded collaboration among organizations. With today's need for, and ease of, business collaboration, organizations are sometimes forced to work with others that are unknown or little known to them in terms of history and performance. A promising approach for sustaining established collaboration is the establishment of trust relationships among organizations, based on the assessed trustworthiness of each participating organization. Research has stated that trust in organizations is dynamic, and thus the assessment of trust levels must address this dynamic nature. This paper assesses the relevant aspects of trust and applies the assessed concepts to propose a semi-automated system for managing the sustainability and evolution of trust among organizations participating in a specific objective in a networked-organizations environment.
Keywords: Trust evolution, trust sustainability, networked organizations, dynamic trust.
1179 Optimum Replacement Policies for Kuwait Passenger Transport Company Buses: Case Study
Authors: Hilal A. Abdelwali, Elsayed E.M. Ellaimony, Ahmad E.M. Murad, Jasem M.S. Al-Rajhi
Abstract:
Over a vehicle's operating life, some of its elements may fail or deteriorate with time, which leads to maintenance, repair, tune-ups, or a full overhaul. After a certain period, the deterioration of vehicle elements increases with time, causing a steep increase in maintenance operations and their costs. The logical decision at this point is to replace the current vehicle with a new one offering minimum failures and maximum income. The importance of studying vehicle replacement problems comes from the increase in idle days due to the many deteriorations of vehicle parts, which grow year after year, increasing operating costs and decreasing vehicle income. Vehicle replacement aims to determine the optimum time to keep, maintain, overhaul, renew, or replace a vehicle. This leads to improvements in vehicle income, total operating costs, maintenance costs, fuel and oil costs, ton-kilometers, vehicle and engine performance, noise, vibration, and pollution. The aim of this paper is to find optimal replacement policies for the Kuwait Passenger Transport Company (KPTCP) fleet of buses, with the objective of maximizing the buses' pure profit. The dynamic programming (DP) technique is used to generate the optimal replacement policies.
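The equipment-replacement recurrence behind such policies compares, in each year, the profit from keeping the current bus against replacing it. A hedged sketch (the revenue, upkeep, salvage, and price figures are made-up placeholders, not KPTCP data):

```python
# DP sketch of the bus replacement decision (illustrative numbers only):
# maximize total profit over a finite planning horizon.
from functools import lru_cache

HORIZON, MAX_AGE, PRICE_NEW = 10, 6, 100.0
revenue = lambda age: 60.0 - 6.0 * age              # income falls with age
upkeep  = lambda age: 10.0 + 5.0 * age              # maintenance rises with age
salvage = lambda age: max(0.0, 70.0 - 10.0 * age)   # resale value of an old bus

@lru_cache(maxsize=None)
def best(year, age):
    """Maximum profit from `year` to the horizon, operating a bus of `age`."""
    if year == HORIZON:
        return salvage(age)
    replace = salvage(age) - PRICE_NEW + revenue(0) - upkeep(0) + best(year + 1, 1)
    if age >= MAX_AGE:                              # worn out: replacement forced
        return replace
    keep = revenue(age) - upkeep(age) + best(year + 1, age + 1)
    return max(keep, replace)

# Optimal first decision for a 3-year-old bus:
keep0 = revenue(3) - upkeep(3) + best(1, 4)
repl0 = salvage(3) - PRICE_NEW + revenue(0) - upkeep(0) + best(1, 1)
print("max profit:", best(0, 3), "| year-0 decision:",
      "keep" if keep0 >= repl0 else "replace")
```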
Keywords: Replacement Problem, Automotive Replacement, Dynamic Programming, Equipment Replacement, K.P.T.C.
1178 Resilience Assessment for Power Distribution Systems
Authors: Berna Eren Tokgoz, Mahdi Safa, Seokyon Hwang
Abstract:
Power distribution systems are essential and crucial infrastructures for the development and maintenance of a sustainable society. These systems are extremely vulnerable to various types of natural and man-made disasters. The assessment of resilience focuses on preparedness and mitigation actions under pre-disaster conditions. It also concentrates on response and recovery actions under post-disaster situations. The aim of this study is to present a methodology to assess the resilience of electric power distribution poles against wind-related events. The proposed methodology can improve the accuracy and rapidity of the evaluation of the conditions and the assessment of the resilience of poles. The methodology provides a metric for the evaluation of the resilience of poles under pre-disaster and post-disaster conditions. The metric was developed using mathematical expressions for physical forces that involve various variables, such as physical dimensions of the pole, the inclination of the pole, and wind speed. A three-dimensional imaging technology (photogrammetry) was used to determine the inclination of poles. Based on expert opinion, the proposed metric was used to define zones to visualize resilience. Visual representation of resilience is helpful for decision makers to prioritize their resources before and after experiencing a wind-related disaster. Multiple electric poles in the City of Beaumont, TX were used in a case study to evaluate the proposed methodology.
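The abstract does not reproduce the metric itself; the physical-force expressions it refers to typically start from the wind drag load on the pole,

```latex
F_d = \tfrac{1}{2}\,\rho\, C_d\, A\, v^{2}
```

where ρ is the air density, C_d the drag coefficient, A the projected area of the pole (a function of its physical dimensions and inclination), and v the wind speed. Comparing the resulting bending moment against the pole's capacity is one plausible way such a metric maps poles into the resilience zones mentioned above; the exact expressions and zone thresholds used in the study are not given in the abstract.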
Keywords: Photogrammetry, power distribution systems, resilience metric, system resilience, wind-related disasters.
1177 Modeling "Web of Trust" with Web 2.0
Authors: Omer Mahmood, Selvakennedy Selvadurai
Abstract:
"Web of Trust" is one of the recognized goals of Web 2.0. It aims to make it possible for people to take responsibility for what they publish on the web, including organizations, businesses, and individual users. These objectives, among others, drive most of the technologies and protocols recently standardized by the governing bodies. One of the great advantages of the Web infrastructure is the decentralization of publication. The primary motivation behind Web 2.0 is to help people add content for Collective Intelligence (CI) while providing mechanisms to link content with people for the evaluation and accountability of information. Such a structure interconnects users and contents, so that users can use contents to find participants and vice versa. This paper proposes a conceptual information storage and linking model, based on a decentralized information structure, that links contents and people together. The model uses FOAF, Atom, RDF, and RDFS, and can be used as a blueprint to develop Web 2.0 applications for any e-domain, although the primary target of this paper is the online trust evaluation domain. The proposed model aims to assist individuals in establishing a "Web of Trust" in the online trust domain.
Keywords: Web of Trust, Semantic Web, electronic social networks, information management.
1176 Vermicomposting of Waste Corn Pulp Blended with Cow Dung Manure Using Eisenia fetida
Authors: Musaida M. M. Manyuchi, Anthony Phiri, Ngoni Chirinda, Perkins Muredzi, Joseph Govha, Thamary Sengudzwa
Abstract:
Waste corn pulp was investigated as a potential feedstock for vermicomposting using Eisenia fetida. Corn pulp is the major staple food in Southern Africa and constitutes about 25% of total organic waste. Waste cooked corn pulp was blended with cow dung in a 6:1 ratio to optimize the vermicomposting process. The feedstock was allowed to vermicompost for 30 days in a 3-tray plastic worm bin. Moisture content, temperature, pH, and electrical conductivity were monitored daily, and the NPK content was determined on day 30. During vermicomposting, the moisture content increased from 27.68% to 52.41%, the temperature ranged between 19 and 25 °C, the pH increased from 5.5 to 7.7, and the electrical conductivity decreased from 80,000 μS/cm to 60,000 μS/cm. The ash content increased from 11.40% to 28.15%; additionally, the volatile matter increased from 1.45% to 10.02%. An odorless, dark brown vermicompost was obtained. The N, P, and K contents of the vermicompost were 4.19%, 1.15%, and 6.18%, respectively.
Keywords: Corn pulp, Eisenia fetida, vermicomposting, waste management.
1175 Identifying Quality Islamic Content in Community Question Answering Sites
Authors: Rabia Bibi, Muhammad Shahzad Faisal, Khalid Iqbal, Atif Inayat
Abstract:
The Internet is growing rapidly, and new community-based content is added by people every second. With this fast-growing community-based content, a user who requires answers to particular questions needs reviews from experts or from the community; however, it is difficult to get quality answers. The Muslim community all over the world seeks help in getting its questions and issues discussed and answered. Online web portals of religious schools and community-based question answering (CQA) sites are two big platforms for resolving users' issues. In the case of religious schools, there are experts and qualified religious scholars (muftis) who can give expert opinions. The quality of community-based content, however, cannot be guaranteed, as an answer may not satisfy the user's question, and users on CQA sites may include spammers or individuals criticizing the questioner instead of providing useful answers. In this paper, we investigate strategies to automatically distinguish quality content. As an experiment, we concentrate on Yahoo! Answers and Quora, popular online QA sites where questions are asked, answered, edited, and organized by a large community of users. We present a classification of the data that categorizes answers as relevant or irrelevant. Specifically, we demonstrate that the proposed framework can isolate quality answers from the rest with an accuracy close to that of humans.
Keywords: Community-based question and answering, evaluation and prediction of quality answer, answer classification, Islamic content, answer ranking.
1174 Customer Need Type Classification Model Using Data Mining Techniques for Recommender Systems
Authors: Kyoung-jae Kim
Abstract:
Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation, including user context, such as location, time, and interest, for the personalization of mobile users. Information about location and time is easy to collect, because mobile devices communicate with the base stations of the service provider. However, information about user interest cannot be easily collected, because user interest cannot be captured automatically without the user's approval. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research, and investigate the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models in classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences in classification results; the test results also show that CHAID performs better than the other models with statistical significance.
Keywords: Customer need type, data mining techniques, recommender system, personalization, mobile user.
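The McNemar comparison used here has a simple closed form based on the discordant pairs, i.e., the cases one classifier gets right and the other wrong. A sketch with the continuity-corrected chi-square statistic (the counts are illustrative, not the study's):

```python
# McNemar's test (with continuity correction) for comparing two classifiers
# on the same test set: b = cases only model A got right, c = only model B.
from scipy.stats import chi2

def mcnemar(b, c):
    stat = (abs(b - c) - 1) ** 2 / (b + c)      # chi-square with 1 d.o.f.
    return stat, chi2.sf(stat, df=1)            # statistic, p-value

# e.g., CHAID correct / ANN wrong on 40 cases, the reverse on 18:
stat, p = mcnemar(40, 18)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")        # p < 0.05 -> significant
```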
1173 Investigation of the Space in Response to the Conditions Caused by the Pandemics and Presenting Five-Scale Design Guidelines to Adapt and Prepare to Face the Pandemics
Authors: Sara Ramezanzadeh, Nashid Nabian
Abstract:
Historically, pandemics in different periods have forced changes in human life. In the case of COVID-19, given the limitations and the care instructions established, aligning space with these conditions is important. Following the outbreak of COVID-19, the question raised in this study is how spatial design at five scales, namely object, space, architecture, city, and infrastructure, should respond to the consequences created in the realms under study. From the beginning of the pandemic until now, some changes in the spatial realm have been made spontaneously or by space users. These transformations have mostly been applied to modifiable elements such as furniture arrangement, especially in work-related spaces. Implementing other, more comprehensive requirements demands flexibility and adaptation of spatial design to the conditions resulting from pandemics, both during and after an outbreak. Studying the effects of pandemics from the past to the present, this research covers eight major realms grouped into three categories (ramifications, solutions, and paradigm shifts) and draws analytical conclusions about the solutions that have been created in response to them. Finally, considering epidemiology as a modern discipline influencing design, spatial solutions at the five scales mentioned, addressing the effects in the eight realms for spatial adaptation in the face of pandemics and the conditions that follow them, are presented as a series of guidelines. Given the unpredictability of possible future pandemics, provision is made for changing and updating the guidelines provided.
Keywords: Pandemics, COVID-19, spatial design, ramifications, paradigm shifts, guidelines.
1172 The Critical Success Factors for Effective ICT Governance in Malaysian Public Sector: A Delphi Study
Authors: Rosida Ab. Razak, Mohamad Shanudin Zakaria
Abstract:
The fundamental issue in ICT Governance (ICTG) implementation for the Malaysian Public Sector (MPS) is how ICT can be applied to support improvements in productivity, management effectiveness, and the quality of services offered to citizens. Our main concern is to develop and adopt a common definition and framework to illustrate how ICTG can be used to better align ICT with the government's operations and strategic focus. In particular, we want to identify and categorize the factors that drive a successful ICTG process. This paper presents the results of an exploratory study to identify, validate, and refine such Critical Success Factors (CSFs); seven CSFs and nineteen sub-factors were confirmed as influential factors that fit the MPS after validation and refinement. The Delphi method was applied in the validation and refinement process before the factors were endorsed as appropriate for the MPS. The identified CSFs reflect the focus areas that need to be considered strategically to strengthen ICT governance implementation and ensure business success.
Keywords: IT Governance, Critical Success Factors.
1171 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study in which the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are cooling time, melt temperature, holding time, and metering stroke; the noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability indices improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability indices of the injection molding process.
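The capability indices mentioned above have standard definitions: Cp measures the spread of the process against the specification width, and Cpk additionally penalizes off-center processes. A small sketch (the specification limits and shrinkage sample below are made-up, not the study's measurements):

```python
# Process capability indices for a shrinkage characteristic (illustrative
# specification limits and sample data).
import statistics

def capability(data, lsl, usl):
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)                     # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)        # centering-aware
    return cp, cpk

shrinkage = [1.48, 1.52, 1.50, 1.49, 1.51, 1.50, 1.53, 1.47]  # e.g., percent
print("Cp = %.2f, Cpk = %.2f" % capability(shrinkage, lsl=1.40, usl=1.60))
```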
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
1170 Determination of the Best Fit Probability Distribution for Annual Rainfall in Karkheh River in Iran
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
This study was designed to find the best-fit probability distribution of annual rainfall, based on a 50-year sample (1966-2015), in the Karkheh river basin in Iran, using six probability distributions: Normal, 2-parameter Log Normal, 3-parameter Log Normal, Pearson Type 3, Log Pearson Type 3, and Gumbel. The best-fit probability distribution was selected using the Stormwater Management and Design Aid (SMADA) software, based on the Residual Sum of Squares (RSS) between observed and estimated values. Based on the RSS values of the fit tests, the Log Pearson Type 3 and Pearson Type 3 distributions were found to be the best-fit probability distributions at the Jelogir Majin and Pole Zal rainfall gauging stations. The annual values of expected rainfall were calculated using the best-fit probability distributions and can be used by hydrologists and design engineers in future research in the studied region and in other regions of the world.
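The same selection logic can be sketched with scipy for a subset of the candidate distributions (the rainfall sample below is synthetic, and Log Pearson Type 3 is omitted because scipy has no direct implementation): fit each distribution, generate its quantiles at the plotting positions of the ordered sample, and rank by RSS:

```python
# Best-fit selection by residual sum of squares between the observed annual
# rainfall (sorted) and each fitted distribution's quantiles (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rainfall = np.sort(rng.gamma(shape=4.0, scale=120.0, size=50))  # synthetic 50-yr sample

# Weibull plotting positions for the ordered sample
p = np.arange(1, rainfall.size + 1) / (rainfall.size + 1)

candidates = {
    "Normal": stats.norm,
    "2-par Log Normal": stats.lognorm,
    "Pearson Type 3": stats.pearson3,
    "Gumbel": stats.gumbel_r,
}
for name, dist in candidates.items():
    params = dist.fit(rainfall)                 # maximum-likelihood fit
    estimated = dist.ppf(p, *params)            # quantiles at plotting positions
    rss = np.sum((rainfall - estimated) ** 2)   # residual sum of squares
    print(f"{name:18s} RSS = {rss:,.0f}")       # smallest RSS = best fit
```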
Keywords: Log Pearson Type 3, SMADA, rainfall, Karkheh River.
1169 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions, and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions, and many methods have already been proposed. For in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy; hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases accuracy by more than 10% in comparison with the original negative pair selection method.
Keywords: Interaction graph, negative training data, protein-protein interaction, support vector machine.
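The graph-based selection rule can be sketched with networkx (the toy interaction list and the distance threshold are assumptions for illustration, not the paper's data or cut-off):

```python
# Select negative protein pairs as those far apart in the interaction graph:
# the longer the shortest path between two proteins, the less likely they interact.
import itertools
import networkx as nx

interactions = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
                ("P4", "P5"), ("P1", "P6"), ("P6", "P7")]
G = nx.Graph(interactions)

def negative_pairs(G, min_distance=4):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u, v in itertools.combinations(G.nodes, 2):
        d = dist[u].get(v)               # None if in different components
        if d is None or d >= min_distance:
            yield u, v, d

for u, v, d in negative_pairs(G):
    print(u, v, "distance:", d)
```

Pairs in different components, or at a distance of at least the threshold, become the negative training examples for the downstream classifier (an SVM in the paper).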
1168 Analysis of Risk-Based Disaster Planning in Local Communities
Authors: R. A. Temah, L. A. Nkengla-Asi
Abstract:
Planning for future disasters sets the stage for a variety of activities that may trigger multiple recurring operations and expose the community to opportunities to minimize risks. Local communities increasingly embrace the necessity of planning based on local risks, but they are also significantly challenged to plan for and respond to disasters effectively. This research examines a basic risk-based disaster planning model and compares it with advanced risk-based planning, which introduces the identification and alignment of a variety of local capabilities, within and outside the local community, that can be pivotal in managing local risks and cascading effects prior to a disaster. A critical review shows that the identification and alignment of capabilities can potentially enhance risk-based disaster planning. A tailored, holistic approach to risk-based disaster planning is pivotal to enhancing collective action and reducing the collective cost of disasters.
Keywords: Capabilities, disaster planning, hazards, local community, risk-based.
1167 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space
Authors: Nanjiang Chen
Abstract:
Against the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper presents a continuous visibility algorithm, providing a potentially valuable approach to measuring urban spaces from a human-centric perspective. The study's methodological contribution lies in applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, from urban developers to public policymakers, aiding the creation of urban spaces that prioritize visual openness and quality of life. This advance in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
Keywords: Visual openness, spatial continuity, ray-tracing algorithms, urban computation.
1166 Diversity for Safety and Security of Autonomous Vehicles against Accidental and Deliberate Faults
Authors: Anil Ranjitbhai Patel, Clement John Shaji, Peter Liggesmeyer
Abstract:
The safety and security of Autonomous Vehicles (AVs) is a growing concern: first, due to the increased number of safety-critical functions taken over by automotive embedded systems; second, due to the increased exposure of these software-intensive systems to potential attackers; and third, due to dynamic interaction with an uncertain and unknown environment at runtime, which changes the functional and non-functional properties of the system. Frequently occurring environmental uncertainties, random component failures, and compromised security might result in hazardous events, sometimes even in accidents, if left undetected. Beyond these technical issues, we argue that the safety and security of AVs against accidental and deliberate faults are poorly understood and rarely implemented. One possible way to overcome this is the well-known diversity approach. As an effective way to increase safety and security, diversity has been widely used in the aviation, railway, and aerospace industries. This paper therefore proposes a fault-tolerance-by-diversity model that addresses the mitigation of accidental and deliberate faults through the application of structural and variant redundancy. The model can be used to design AVs as multi-version systems with various types of diversity in hardware and software. The paper evaluates the presented approach on an example from adaptive cruise control, followed by a discussion of the case study and initial findings.
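Variant redundancy of the kind proposed is commonly realized as N-version programming with majority voting; a minimal sketch (the three "variants" below are stand-ins for diverse distance estimators in an adaptive cruise control function, not the paper's design):

```python
# 2-out-of-3 majority voting over diverse software variants (illustrative).
# Diverse implementations make a common-mode fault (accidental or deliberate)
# less likely to corrupt all channels at once.
from statistics import median

def variant_a(gap_m): return gap_m * 1.00          # e.g., radar-based estimate
def variant_b(gap_m): return gap_m * 1.01          # e.g., camera-based estimate
def variant_c(gap_m): return -1.0                  # faulty/compromised channel

def voted_distance(gap_m, tolerance=0.5):
    readings = [v(gap_m) for v in (variant_a, variant_b, variant_c)]
    m = median(readings)                           # robust against one outlier
    agreeing = [r for r in readings if abs(r - m) <= tolerance]
    if len(agreeing) < 2:
        raise RuntimeError("no 2-out-of-3 agreement; fail safe")
    return sum(agreeing) / len(agreeing)

print(voted_distance(30.0))   # ~30.15 despite one faulty channel
```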
Keywords: Autonomous vehicles, diversity, fault-tolerance, adaptive cruise control, safety, security.
1165 Error Rate Probability for Coded MQAM with MRC Diversity in the Presence of Cochannel Interferers over Nakagami-Fading Channels
Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal
Abstract:
Exact expressions for the bit-error probability (BEP) of coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in an interference-limited flat-fading system in a Nakagami-m fading environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the (24,12) extended Golay code and compared with uncoded MQAM by plotting the error probabilities versus the average signal-to-interference ratio (SIR) for various values of the diversity order N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form to show how the system performs in the presence of interferers as the diversity order increases. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems with space diversity in wireless fading channels.
Keywords: Cochannel interference, maximal ratio combining, Nakagami-m fading, wireless digital communications.
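The exact interference-limited expressions are lengthy; for orientation, the well-known approximation for coherent square-detected MQAM over an AWGN channel, to which such analyses reduce in the no-interference, no-fading limit, is

```latex
P_b \approx \frac{4}{\log_2 M}\left(1 - \frac{1}{\sqrt{M}}\right)
Q\!\left(\sqrt{\frac{3\,\log_2 M}{M-1}\,\gamma_b}\right)
```

where γ_b is the signal-to-noise ratio per bit and Q(·) is the Gaussian tail function. In the paper, the corresponding role is played by the average SIR, with the BEP further averaged over the Nakagami-m fading statistics of the desired signal, the interferers, and the MRC combiner output.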
1164 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel
Authors: Pankaj Chandna, Dinesh Kumar
Abstract:
The present work analyses different end milling parameters to minimize surface roughness for AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices determining the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality through optimization of machining parameters is a challenging job. For mating components, surface roughness becomes even more essential; these quality characteristics are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interactive effects (i.e., the process environment). In this work, the effects of the selected process parameters on surface roughness, and the subsequent setting of the parameter levels, have been accomplished using Taguchi's parameter design approach. The experiments were performed according to the combinations of process parameter levels suggested by an L9 orthogonal array. Experimental investigation of the end milling of AISI D2 steel with a carbide tool was carried out by varying feed, speed, and depth of cut, and the surface roughness was measured using a surface roughness tester. Analysis of variance was performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.
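Since surface roughness is a smaller-the-better characteristic, the signal-to-noise ratio analyzed above is computed per trial as S/N = -10·log10 of the mean squared response; a small sketch (the Ra replicate values are illustrative, not the experimental data):

```python
# Smaller-the-better signal-to-noise ratio used in Taguchi analysis:
# S/N = -10 * log10(mean of squared responses); a larger S/N is better.
import math

def sn_smaller_the_better(replicates):
    return -10.0 * math.log10(sum(r * r for r in replicates) / len(replicates))

# Ra (um) replicates for two of the nine L9 trials (illustrative values):
trial_1 = [0.82, 0.85, 0.80]
trial_2 = [1.10, 1.18, 1.05]
print(f"S/N trial 1: {sn_smaller_the_better(trial_1):.2f} dB")
print(f"S/N trial 2: {sn_smaller_the_better(trial_2):.2f} dB")  # rougher -> lower S/N
```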
Keywords: D2 Steel, Orthogonal Array, Optimization, Surface Roughness, Taguchi Methodology.
1163 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions' Scenarios
Authors: Revoti Prasad Bora, Nikita Katyal
Abstract:
Promotion is a key element in the retail business. Thus, analyzing promotions to quantify their effectiveness in terms of revenue and/or margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift is based on estimation, as the actual sales/revenue without the promotion is not observable. Further, the presence of halo and cannibalization in a multiple-parallel-promotion scenario complicates the problem. Calculating the baseline by considering inter-brand/competitor items, or estimating the halo and cannibalization impacts on revenue with the baseline interpreted from an item's unit sales in neighboring non-promotional weeks individually, may not capture the overall revenue uplift when multiple promotions run in parallel. Hence, this paper proposes a machine learning based method for calculating the revenue uplift that accounts for the halo and cannibalization impacts on both the baseline and the revenue. In the first part of the proposed methodology, the baseline of an item is calculated by incorporating the impact of promotions on its related items. In the later part, the revenue of an item is calculated by considering both halo and cannibalization impacts. This methodology thus enables correct calculation of the overall revenue uplift due to a given promotion.
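A hedged sketch of the first step, estimating a promotion-aware baseline by training only on non-promotional weeks while feeding in related items' promotion flags (the feature set, model choice, and synthetic data are assumptions, not the paper's pipeline):

```python
# Baseline estimation sketch: learn unit sales from seasonality and the
# promotion flags of related (halo/cannibalization) items on non-promo weeks,
# then predict the counterfactual baseline for the promoted weeks.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
weeks = pd.DataFrame({
    "week_of_year": np.arange(1, 105) % 52,
    "own_promo": rng.integers(0, 2, 104),
    "related_promo": rng.integers(0, 2, 104),   # promo flag of a related item
})
weeks["units"] = (100 + 3 * np.sin(weeks.week_of_year / 8.0)
                  + 25 * weeks.own_promo - 8 * weeks.related_promo
                  + rng.normal(0, 2, 104))

train = weeks[weeks.own_promo == 0]             # fit on non-promotional weeks
features = ["week_of_year", "related_promo"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features], train["units"])

promo = weeks[weeks.own_promo == 1]
baseline = model.predict(promo[features])       # counterfactual, promo-aware
uplift = (promo["units"] - baseline).sum()
print(f"estimated unit uplift over promo weeks: {uplift:.0f}")
```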
Keywords: Halo, cannibalization, promotion, baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression.
1162 The Use of Artificial Neural Network in Option Pricing: The Case of S and P 100 Index Options
Authors: Zeynep İltüzer Samur, Gül Tekin Temur
Abstract:
Due to the increasing and varying risks that economic units face, derivative instruments have gained substantial importance, and trading volumes of derivatives have reached very significant levels. In parallel with these high trading volumes, researchers have developed many different models, some parametric, some nonparametric. In this study, the aim is to analyse the success of artificial neural networks (ANN) in pricing options, using S&P 100 index options data. Previous studies generally cover data on European-style call options; this study includes not only European calls but also American calls and puts and European puts. Three data sets are used to build three different ANN models. One includes only data directly observed from the economic environment, i.e., strike price, spot price, interest rate, maturity, and type of contract. The others include an extra input that is not observable but is a parameter, i.e., volatility. With these detailed data, the performance of the ANN along the put/call, American/European, and moneyness dimensions is analyzed, and whether using volatility as an input improves prediction performance is examined. The most striking result of the study is that the ANN performs better when pricing call options than put options, and that using the volatility parameter as an input does not improve performance.
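A minimal version of the third kind of model (observable inputs plus volatility) can be sketched as follows; synthetic Black-Scholes call prices stand in for the S&P 100 options data, so this only illustrates the input-output setup, not the study's results:

```python
# ANN option-pricing sketch: train an MLP on (S, K, r, T, sigma) -> call price.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
S, K = rng.uniform(80, 120, n), rng.uniform(80, 120, n)
r, T, sig = 0.03, rng.uniform(0.05, 1.0, n), rng.uniform(0.1, 0.4, n)

# Black-Scholes European call prices as synthetic training targets
d1 = (np.log(S / K) + (r + sig**2 / 2) * T) / (sig * np.sqrt(T))
d2 = d1 - sig * np.sqrt(T)
call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

X = np.column_stack([S, K, np.full(n, r), T, sig])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:4000], call[:4000])
print("test R^2:", round(model.score(X[4000:], call[4000:]), 3))
```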
Keywords: Option pricing, neural network, S&P 100 index, American/European options.
1161 Utilization of Whey for the Production of β-Galactosidase Using Yeast and Fungal Culture
Authors: Rupinder Kaur, Parmjit S. Panesar, Ram S. Singh
Abstract:
Whey is the lactose-rich by-product of the dairy industry and a good reservoir of nutrients, the most abundant being lactose, soluble proteins, lipids, and mineral salts. Disposal of whey by the many milk plants that lack proper pre-treatment systems is a major issue, and can mean a significant loss of a potential food and energy source. Whey has therefore been explored as a substrate for the synthesis of different value-added products such as enzymes. β-galactosidase is one such important enzyme and has become a major focus of research due to its ability to catalyze both hydrolytic and transgalactosylation reactions simultaneously. The enzyme is widely used in the dairy industry, as it catalyzes the transformation of lactose into glucose and galactose, making products suitable for lactose-intolerant people. The enzyme is intracellular in both bacteria and yeast, whereas in molds it has an extracellular location. The present work was carried out to utilize whey for the production of β-galactosidase using both yeast and fungal cultures. The yeast isolate Kluyveromyces marxianus WIG2 and various fungal strains were used in the present study. Different disruption techniques were also investigated for the extraction of the enzyme produced intracellularly in yeast cells. Among the disruption methods tested, SDS-chloroform showed the maximum β-galactosidase activity. Among the tested fungal cultures, Aureobasidium pullulans NCIM 1050 was observed to be the maximum extracellular enzyme producer.
Keywords: β-galactosidase, fungus, yeast, whey.
1160 A Framework for Product Development Process Including HW and SW Components
Authors: Namchul Do, Gyeongseok Chae
Abstract:
This paper proposes a framework for product development that includes hardware and software components. It provides separation of hardware-dependent software, modifications to the current product development process, and integration of software modules with existing product configuration models and assembly product structures. In order to identify the dependent software, the framework considers product configuration modules and engineering changes of the associated software and hardware components. To support efficient integration of the two different hardware and software development processes, a modified product development process is proposed, which integrates the dependent software development into product development through the interchange of specific product information. By using existing product data models in Product Data Management (PDM), the framework represents software as modules for product configurations and as software parts for product structures. The framework is applied to the development of a robot system in order to show its effectiveness.
Keywords: HW and SW development integration, product development with software.
1159 Oracle JDE Enterprise One ERP Implementation: A Case Study
Authors: Abhimanyu Pati, Krishna Kumar Veluri
Abstract:
The paper presents a real-life experience encountered during the actual implementation of a large-scale Tier-1 Enterprise Resource Planning (ERP) system in a multi-location, discrete manufacturing organization in India, involved in the manufacture of auto components and aggregates. The business complexities prior to the implementation of ERP included multiple products with hierarchical product structures, geographically distributed plant locations with disparate business practices, a lack of inter-plant broadband connectivity, disparate legacy applications for different business functions, and non-standardized codification of products, machines, employees, and accounts, among others. The manufacturing environment consisted of processes such as Assemble-to-Order (ATO), Make-to-Stock (MTS), and Engineer-to-Order (ETO), with a mix of discrete and process operations. The paper highlights various business planning areas and concerns prior to the implementation, with a specific focus on strategic issues and objectives. Subsequently, it covers the complete process of ERP implementation, from strategic planning and project planning through resource mobilization to program execution. This step-by-step account provides a very good learning opportunity regarding implementation methodology. At the end, various organizational challenges and lessons are presented, which can act as guidelines and a checklist for organizations seeking to successfully align and implement ERP and achieve their business objectives.
Keywords: ERP, ATO, MTS, ETO, discrete manufacturing, strategic planning.