Search results for: binary decision diagram
3150 Parameters Influencing Human Machine Interaction in Hospitals
Authors: Hind Bouami
Abstract:
Handling the complexity of life-critical systems requires appropriate technology and the right human agents' capabilities, such as knowledge, experience, and competence in problem prevention and solving. Human agents are involved in managing and controlling the performance of human-machine systems. Documenting human agents' situation awareness is crucial to support human-machine designers' decision-making. Knowledge about risks, critical parameters, and factors that can impact and threaten automation system performance should be collected using preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of automated organizations' feedback. The analysis of feedback from automated hospital pharmacies helps to identify and control critical parameters influencing human-machine interaction in order to enhance system performance and security. Our human-machine system evaluation approach has been deployed in the Macon hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2015. The automation specifications relate to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance in the Macon hospital center has shown that the performance of the automated activity depends on the performance of the automated solution chosen and also on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The performance of the chosen automated solution is involved in 28.38% of the automation specification performance in the Macon hospital center. The remaining systemic parameters involved in automation specification performance need to be controlled.
Keywords: life-critical systems, situation awareness, human-machine interaction, decision-making
Procedia PDF Downloads 181
3149 Models to Calculate Lattice Spacing, Melting Point and Lattice Thermal Expansion of Ga₂Se₃ Nanoparticles
Authors: Mustafa Saeed Omar
Abstract:
The formula which contains the maximum increase of mean bond length, melting entropy, and critical particle radius is used to calculate the lattice volume of nanoscale crystals of Ga₂Se₃. This compound belongs to the binary group III₂VI₃. The critical radius is calculated from the height of the first surface atomic layer, which is equal to 0.336 nm. The size-dependent mean bond length is calculated by using an equation free from fitting parameters. The size-dependent lattice parameter is then used accordingly to calculate the size-dependent lattice volume. The lattice volume in the nanoscale region increases to about 77.6 Å³, which is up to four times its bulk value of 19.97 Å³. From the values of the nanoscale size dependence of lattice volume, the nanoscale size dependence of the melting temperature is calculated. The melting temperature decreases as the nanoparticle size is reduced, and it becomes zero when the radius reaches its critical value. The bulk melting temperature of Ga₂Se₃, for example, is 1293 K. From the size-dependent melting temperature and mean bond length, the size-dependent lattice thermal expansion is calculated. Lattice thermal expansion decreases with decreasing nanoparticle size and reaches its minimum value as the radius drops to about 5 nm.
Keywords: Ga₂Se₃, lattice volume, lattice thermal expansion, melting point, nanoparticles
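The abstract does not reproduce the formula itself; a widely used relation of this kind, written in terms of the bulk melting entropy S_m, the gas constant R, and the critical radius r_c, is shown below for context. It reproduces the behaviour described above (T_m(r) tending to zero as r approaches r_c) but is not necessarily the exact expression employed by the author.

\[
\frac{T_m(r)}{T_m(\infty)} \;=\; \exp\!\left[-\,\frac{2S_m}{3R}\cdot\frac{1}{r/r_c - 1}\right], \qquad r > r_c
\]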
Procedia PDF Downloads 169
3148 Destination Decision Model for Cruising Taxis Based on Embedding Model
Authors: Kazuki Kamada, Haruka Yamashita
Abstract:
In Japan, taxis are a popular mode of transportation, and the taxi industry is a large business. However, in recent years, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, three main methods of finding passengers are used. The first is "cruising", in which drivers pick up passengers while driving along a road. The second is "waiting", in which drivers wait for passengers near places with high demand for taxis, such as hospital entrances and train stations. The third is "dispatching", in which rides are allocated based on requests received by the taxi company. Among these, cruising taxi drivers need experience and intuition to find passengers, and it is difficult to decide the destination for cruising. A strong recommendation system for cruising taxis supports new drivers in finding passengers and can be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method of recommending a destination for cruising taxi drivers. As a machine learning technique, embedding models, which map high-dimensional data to a low-dimensional space, are widely used in data analysis to represent semantic relationships between data points clearly. Taxi drivers have favorite courses based on their experience, and the courses differ for each driver. We assume that cruising courses carry meaning, such as courses for finding business passengers (circling the business district of the city or heading to main stations) and courses for finding tourist passengers (circling sightseeing spots or large hotels), and we extract the meaning of their destinations. We analyze taxi cruising history data based on the embedding model and propose the recommendation system for finding passengers. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers based on real-world data analysis using the proposed method.
Keywords: taxi industry, decision making, recommendation system, embedding model
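The abstract does not give implementation details; as a rough illustration of the embedding idea, each cruising trip can be treated as a "sentence" of visited zones and fed to a skip-gram model, after which destinations are recommended by embedding similarity. The zone identifiers, toy histories, and hyperparameters below are hypothetical, not the authors' data or settings.

```python
# Sketch: embed cruising histories with a skip-gram model (gensim Word2Vec),
# then suggest candidate destination zones by cosine similarity.
from gensim.models import Word2Vec

# Each list is one cruising trip expressed as a sequence of visited zone IDs.
cruising_histories = [
    ["zone_station", "zone_office_a", "zone_office_b", "zone_station"],
    ["zone_hotel", "zone_sightseeing", "zone_hotel", "zone_station"],
    ["zone_office_a", "zone_station", "zone_office_b"],
]

model = Word2Vec(
    sentences=cruising_histories,
    vector_size=16,   # dimension of the embedding space
    window=2,         # context size along the route
    min_count=1,
    sg=1,             # skip-gram
    epochs=50,
)

# Recommend zones whose embeddings are closest to the driver's current zone.
print(model.wv.most_similar("zone_station", topn=3))
```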
Procedia PDF Downloads 138
3147 The Development of an Agent-Based Model to Support a Science-Based Evacuation and Shelter-in-Place Planning Process within the United States
Authors: Kyle Burke Pfeiffer, Carmella Burdi, Karen Marsh
Abstract:
The evacuation and shelter-in-place planning process employed by most jurisdictions within the United States is not informed by a scientifically-derived framework that is inclusive of the behavioral and policy-related indicators of public compliance with evacuation orders. While a significant body of work exists to define these indicators, the research findings have neither been well integrated nor translated into usable planning factors for public safety officials. Additionally, refinement of the planning factors alone is insufficient to support science-based evacuation planning, as the behavioral elements of evacuees—even with consideration of policy-related indicators—must be examined in the context of specific regional transportation and shelter networks. To address this problem, the Federal Emergency Management Agency and Argonne National Laboratory developed an agent-based model to support regional analysis of zone-based evacuation in southeastern Georgia. In particular, this model allows public safety officials to analyze the consequences that a range of hazards may have upon a community, assess evacuation and shelter-in-place decisions in the context of specified evacuation and response plans, and predict outcomes based on community compliance with orders and the capacity of the regional (to include extra-jurisdictional) transportation and shelter networks. The intention is to use this model to aid evacuation planning and decision-making. Applications for the model include developing a science-driven risk communication strategy and, ultimately, in the case of evacuation, the shortest possible travel distance and clearance times for evacuees within the regional boundary conditions.
Keywords: agent-based modeling for evacuation, decision-support for evacuation planning, evacuation planning, human behavior in evacuation
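As a highly simplified, hypothetical sketch of the kind of agent logic such a model might contain (not the FEMA/Argonne implementation), each household agent could decide whether to comply with an evacuation order, with aggregate clearance time then constrained by network capacity. All parameters below are illustrative.

```python
# Minimal agent-based sketch: households decide to evacuate with some
# compliance probability; clearance time grows with load on the road network.
import random

class Household:
    def __init__(self, compliance_prob):
        self.compliance_prob = compliance_prob
        self.evacuates = False

    def decide(self):
        self.evacuates = random.random() < self.compliance_prob

def simulate(n_households=10_000, compliance_prob=0.7,
             road_capacity_per_hour=2_000):
    agents = [Household(compliance_prob) for _ in range(n_households)]
    for agent in agents:
        agent.decide()
    evacuating = sum(agent.evacuates for agent in agents)
    clearance_hours = evacuating / road_capacity_per_hour
    return evacuating, clearance_hours

if __name__ == "__main__":
    random.seed(1)
    evacuating, hours = simulate()
    print(f"{evacuating} households evacuate; "
          f"approximate clearance time {hours:.1f} h")
```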
Procedia PDF Downloads 235
3146 Challenges of Implementing Participatory Irrigation Management for Food Security in Semi Arid Areas of Tanzania
Authors: Pilly Joseph Kagosi
Abstract:
The study aims at assessing the challenges observed during the implementation of the participatory irrigation management (PIM) approach for food security in semi-arid areas of Tanzania. Data were collected through questionnaires, PRA tools, key informant discussions, focus group discussions (FGDs), participant observation, and literature review. Data collected from the questionnaire were analysed using SPSS, while PRA data were analysed with the help of local communities during the PRA exercise. Data from the other methods were analysed using content analysis. The study revealed that the PIM approach contributed to improved food security at the household level due to the involvement of communities in water management activities and decision-making, which enhanced the availability of water for irrigation and increased crop production. However, challenges were observed during the implementation of the approach, including: minimal participation of beneficiaries in decision-making during the planning and designing stages, meaning inadequate devolution of power among scheme owners; inadequate transparency, or a lack of it, on income and expenditure in Water Utilization Associations (WUAs); water conflicts among WUA members; conflicts between farmers and livestock keepers; conflicts between WUA leaders and village governments regarding training opportunities and status; WUA rules and regulations not being legally recognized by the national court; and few farmers being involved in planting trees around water sources. However, it was realized that some of the mentioned challenges were rectified by the farmers themselves, facilitated by government officials. The study recommends that the identified challenges be rectified for farmers to realize the importance of the PIM approach, as has been realized in other Asian countries.
Keywords: challenges, participatory approach, irrigation management, food security, semi arid areas
Procedia PDF Downloads 324
3145 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem
Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi
Abstract:
The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece cryptosystem used an irreducible binary Goppa code, which is considered unbreakable until now, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use practically. In this work, irreducible and separable Goppa codes have been introduced. The irreducible and separable Goppa codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, to obtain a better basis for comparison, two types of testing were chosen: in the first, the random message is constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix is higher for the separable than for the irreducible McEliece cryptosystem, which is an expected result because an extra parity check matrix for g2(z) must be calculated in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation was done in Visual Studio using C#.
Keywords: McEliece cryptosystem, Goppa code, separable, irreducible
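For readers unfamiliar with the scheme, the sketch below illustrates only the McEliece encryption step, c = m·G_pub + e (mod 2), with a toy [7,4] Hamming code generator standing in for a binary Goppa code and the scrambling and permutation matrices omitted; it is not the authors' C# implementation.

```python
# Toy sketch of McEliece encryption: ciphertext = message * G_pub + error (mod 2).
# A [7,4] Hamming generator stands in for a Goppa code, so this is illustrative only.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)  # toy public generator matrix

t = 1  # number of errors this toy code can correct

def encrypt(message_bits, rng):
    codeword = message_bits @ G % 2
    error = np.zeros(G.shape[1], dtype=int)
    error[rng.choice(G.shape[1], size=t, replace=False)] = 1  # random weight-t error
    return (codeword + error) % 2

rng = np.random.default_rng(0)
m = np.array([1, 0, 1, 1])
print("ciphertext:", encrypt(m, rng))
```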
Procedia PDF Downloads 266
3144 Maori Primary Industries Responses to Climate Change and Freshwater Policy Reforms in Aotearoa New Zealand
Authors: Tanira Kingi, Oscar Montes Oca, Reina Tamepo
Abstract:
The introduction of the Climate Change Response (Zero Carbon) Amendment Act (2019) and the National Policy Statement for Freshwater Management (2020) both contain underpinning statements that refer to the principles of the Treaty of Waitangi and cultural concepts of stewardship and environmental protection. Maori interests in New Zealand's agricultural, forestry, fishing and horticultural sectors are significant. The organizations that manage these investments do so on behalf of extended family groups that hold inherited interests based on genealogical connections (whakapapa) to particular tribal units (iwi and hapu) and areas of land (whenua) and freshwater bodies (wai). This paper draws on the findings of current research programmes funded by the New Zealand Agricultural Greenhouse Gas Research Centre (NZAGRC) and the Our Land & Water National Science Challenge (OLW NSC) to understand the impact of cultural knowledge and imperatives on agricultural GHG and freshwater mitigation and land-use change decisions. In particular, the research outlines mitigation and land-use change scenario decision support frameworks that model changes in emissions profiles (reductions in biogenic methane, nitrous oxide and nutrient emissions to freshwater) of agricultural and forestry production systems along with impacts on key economic indicators and socio-cultural factors. The paper also assesses the effectiveness of newly introduced partnership arrangements between Maori groups/organizations and key government agencies on policy co-design and implementation, and in particular, decisions to adopt mitigation practices and to diversify land use.
Keywords: co-design and implementation of environmental policy, indigenous environmental knowledge, Māori land tenure and agribusiness, mitigation and land use change decision support frameworks
Procedia PDF Downloads 215
3143 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has been traditionally very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
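As a rough, framework-agnostic illustration of the modelling step (using scikit-learn's gradient boosting in place of the Azure ML Boosted Decision Tree and Fast Forest Quantile modules), the sketch below trains a quantile regression forecaster on synthetic hourly data; the features, values, and settings are invented for illustration.

```python
# Sketch of short-term load forecasting with gradient-boosted trees and
# weather-style features. Data are synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly records
df = pd.DataFrame({
    "hour": hours % 24,
    "day_of_week": (hours // 24) % 7,
    "temperature": 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size),
    "humidity": rng.uniform(40, 90, hours.size),
})
df["load_kwh"] = 50 + 2 * df["temperature"] + 5 * (df["hour"] > 17) + rng.normal(0, 3, hours.size)

train, test = df.iloc[:-168], df.iloc[-168:]  # hold out the final week
features = ["hour", "day_of_week", "temperature", "humidity"]

# Median (alpha = 0.5) forecast; other alpha values would give prediction intervals.
model = GradientBoostingRegressor(loss="quantile", alpha=0.5, n_estimators=200)
model.fit(train[features], train["load_kwh"])
print("hold-out R^2:", round(model.score(test[features], test["load_kwh"]), 3))
```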
Procedia PDF Downloads 297
3142 The Role of Metaphor in Communication
Authors: Fleura Shkëmbi, Valbona Treska
Abstract:
In elementary school, we discover that a metaphor is a decorative linguistic device just for poets. But we now know it is also a crucial tactic that individuals employ to understand the universe, from fundamental ideas like time and causation to the most pressing societal challenges today. Metaphor is the use of language to refer to something other than what it was originally intended for, or what it "literally" means, in order to suggest a similarity or establish a connection between the two. According to a study on metaphor and its effect on decision-making, people do not identify metaphors as relevant in their decisions; instead, they refer to more "substantive" (typically numerical) facts as the basis for their problem-solving decisions. Every day, metaphors saturate our lives via language, cognition, and action. Scholars in this tradition argue that our conceptions shape our views and interactions with others and that concepts define our reality. Metaphor is thus a highly helpful tool for both describing our experiences to others and forming notions for ourselves. In therapeutic contexts, its use appears to serve a twofold goal. The cognitivist approach to metaphor regards it as one of the fundamental foundations of human communication. The benefits and disadvantages of utilizing a metaphor differ depending on the target domain that the metaphor portrays. The challenge of creating messages and surroundings that affect customers' notions of abstract ideas in a variety of industries, including health, hospitality, romance, and money, has been studied for decades in marketing and consumer psychology. The aim of this study is to examine, through a systematic literature review, the role of metaphor in communication and in advertising. This study offers a selected analysis of this literature, concentrating on research on customer attitudes and product appraisal. The analysis of the data identifies potential research questions. With theoretical and applied implications for marketing, design, and persuasion, this study sheds light on how, when, and for whom metaphoric communications are powerful.
Keywords: metaphor, communication, advertising, cognition, action
Procedia PDF Downloads 99
3141 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario
Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos
Abstract:
Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market. To do so, it is necessary to maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that the overall business performance is based on different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. It is essential that companies also evaluate and prioritize strategic projects that need to be implemented to ensure they are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposition involves the incorporation of the SAPEVO-M multicriteria method to indicate the degree of relevance between different perspectives. Thus, the strategic objectives linked to these perspectives have greater weight in the classification of structural projects. Additionally, it is proposed to apply the concept of the Impact & Probability Matrix (I&PM) to structure and ensure that strategic projects are evaluated according to their relevance and impact on the business. By structuring the business's strategic management in this way, alignment and prioritization of projects and actions related to strategic planning are ensured. This ensures that resources are directed towards the most relevant and impactful initiatives. Therefore, the objective of this article is to present the proposal for integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and obtain coherence in defining strategic projects aligned with the business vision. This ensures a robust decision-making support process.
Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method
Procedia PDF Downloads 81
3140 Multiple Version of Roman Domination in Graphs
Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera
Abstract:
In 2004, the concept of Roman domination in graphs was introduced. This concept was initially inspired by and related to the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are stationed, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, satisfying the condition that any 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is a variant of classic domination. Clearly, the main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, having minimum weight (the sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we start the study of a generalization of RDFs in which we consider that any undefended place should be defended from a sudden attack by at least k legions. These legions can be deployed in the city or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) represents the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function and is denoted by [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted by γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs, a problem that had been resolved for other variants and that we wanted to generalize. We know the difficulty of calculating the exact value of the [k]-MRD number, even for particular families of graphs. Here, we present several upper and lower bounds for the [k]-MRD number that permit us to estimate it with as much precision as possible. Finally, some graphs for which the exact value of this parameter is known are characterized.
Keywords: multiple Roman domination function, NP-complete decision problem, bounds, exact values
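The defining condition above translates directly into a simple verification routine; the sketch below (with a hypothetical 4-cycle example, not taken from the paper) checks whether a labeling is a [k]-MRDF and computes its weight.

```python
# Sketch: verify the [k]-multiple Roman domination condition from the abstract,
#   f(N[u]) >= k + |AN(u)| for every vertex u with f(u) < k,
# where N[u] is the closed neighbourhood of u and AN(u) its active
# (positively labeled) neighbours. The graph and labeling are illustrative.
def is_k_mrdf(adj, f, k):
    for u, neighbours in adj.items():
        if f[u] < k:
            closed_nbhd_weight = f[u] + sum(f[v] for v in neighbours)
            active_neighbours = sum(1 for v in neighbours if f[v] > 0)
            if closed_nbhd_weight < k + active_neighbours:
                return False
    return True

def weight(f):
    return sum(f.values())

# A 4-cycle a-b-c-d with labels drawn from {0, ..., k+1} for k = 2.
adj = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
f = {"a": 3, "b": 0, "c": 3, "d": 0}
print(is_k_mrdf(adj, f, k=2), "weight =", weight(f))
```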
Procedia PDF Downloads 108
3139 Association Between Hip Internal and External Rotation Range of Motion and Low Back Pain in Table Tennis Players
Authors: Kaili Wang, Botao Zhang, Enming Zhang
Abstract:
Background: Low back pain (LBP) is a common problem affecting athletes' training and competition. Although the association between a limited hip range of motion and the prevalence of low back pain has been studied extensively, it has not been studied in table tennis. Aim: The main purposes of this study in table tennis players were (1) to investigate whether there is a difference in hip internal rotation (HIR) and external rotation (HER) range of motion (ROM) between players with LBP and players without LBP and (2) to analyze the association between HIR and HER ROM and LBP. Methods: Forty-six table tennis players from the Chinese table tennis team were evaluated for passive maximum HIR and HER ROM. LBP was retrospectively recorded by a physical therapist for the 12 months before the date of the ROM assessment. The difference in HIR and HER ROM between players with LBP and players without LBP was analyzed by the Mann-Whitney U test, and the association between HIR and HER ROM and LBP was analyzed via binary logistic regression. Results: 54% of players had developed LBP during the retrospective study period. A significant difference between the LBP group and the asymptomatic group was observed for HIR ROM (z = 4.007, p < 0.001). The difference between the LBP group and the asymptomatic group for HER ROM (z = 1.117, p = 0.264) was not significant. Players who had an HIR ROM deficit had an increased risk of LBP compared with players without an HIR ROM deficit (OR = 5.344, 95% CI: 1.006-28.395, p = 0.049). Conclusion: The HIR ROM of table tennis players with LBP was less than that of players without LBP. Compared with players whose HIR ROM was normal, players with an HIR ROM deficit appeared to have a higher risk of LBP.
Keywords: assessment, injury prevention, low back pain, table tennis players
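The two analyses described (a Mann-Whitney U test on ROM and a binary logistic regression for the deficit/LBP association) can be reproduced with standard Python statistics libraries; the sketch below runs on invented data, not the study's measurements.

```python
# Sketch of the statistical workflow: Mann-Whitney U test on HIR ROM and a
# binary logistic regression of LBP on an HIR-deficit indicator (toy data).
import numpy as np
from scipy.stats import mannwhitneyu
import statsmodels.api as sm

rng = np.random.default_rng(0)
hir_rom_lbp = rng.normal(30, 5, 25)     # HIR ROM (degrees), players with LBP
hir_rom_no_lbp = rng.normal(38, 5, 21)  # HIR ROM (degrees), players without LBP

u_stat, p_value = mannwhitneyu(hir_rom_lbp, hir_rom_no_lbp)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Logistic regression: LBP (1/0) against an HIR ROM deficit indicator (1/0).
hir_deficit = rng.integers(0, 2, 46)
lbp = np.concatenate([np.ones(25), np.zeros(21)])
fit = sm.Logit(lbp, sm.add_constant(hir_deficit)).fit(disp=False)
print("odds ratio for HIR deficit:", float(np.exp(fit.params[1])))
```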
Procedia PDF Downloads 111
3138 Examining Motivational Strategies of Foreign Manufacturing Firms in Ghana
Authors: Samuel Ato Dadzie
Abstract:
The objective of this study is to examine the influence of the eclectic paradigm on the motivational strategy of foreign subsidiaries in Ghana. This study uses a binary regression model, and the analysis is based on 75 manufacturing investments made by MNEs from different countries in 1994–2008. The results indicate that perceived market size increases the probability of foreign firms undertaking a market-seeking (MS) foreign direct investment (FDI) in Ghana, while perceived cultural distance between Ghana and the foreign firms' home countries decreases the probability of foreign firms undertaking a market-seeking FDI in Ghana. Furthermore, extensive international experience decreases the probability of foreign firms undertaking a market-seeking FDI in Ghana. Most of the studies done by earlier researchers were based on advanced and emerging countries and offered support for the theory, which was used in generalizing the conclusion that multinational corporations (MNCs) normally rely on the theory for their investment strategy outside their home country. When the same theory is used in the context of Ghana, the results do not offer strong support for it. This means that MNCs that come to Sub-Saharan Africa cannot rely much on the eclectic paradigm for their motivational strategies because prevailing economic conditions in Ghana are different from those of the advanced and emerging economies where the institutional structures work.
Keywords: foreign subsidiary, motives, Ghana, foreign direct investment
Procedia PDF Downloads 433
3137 J-Integral Method for Assessment of Structural Integrity of a Pressure Vessel
Authors: Karthik K. R, Viswanath V, Asraff A. K
Abstract:
The first stage of a new-generation launch vehicle of ISRO makes use of large pressure vessels made of the aluminium alloy AA2219 to store fuel and oxidizer. These vessels have many weld joints that may contain cracks or crack-like defects introduced during their fabrication. These defects may propagate across the vessel during pressure testing or while in service under the influence of tensile stresses, leading to catastrophe. Though ductile materials exhibit significant stable crack growth prior to failure, this is not generally acceptable for an aerospace component. There is a need to predict the initiation of stable crack growth. The structural integrity of the vessel from fracture considerations can be studied by constructing the Failure Assessment Diagram (FAD), which accounts for both brittle fracture and plastic collapse. Critical crack sizes of the pressure vessel may be highly conservative if they are predicted from the FAD alone. If the J-R curve for the material under consideration is available a priori, the critical crack sizes can be predicted to a certain degree of accuracy. In this paper, a novel approach is proposed to predict the integrity of a weld in a pressure vessel made of AA2219 material. The fracture parameter 'J-integral' at the crack front, evaluated through finite element analyses, is used in the new procedure. Based on the simulation of tension tests carried out on SCT specimens by NASA, a cut-off value of the J-integral (J_cut-off) is finalised. For the pressure vessel, the J-integral at the crack front is evaluated through FE simulations incorporating different surface cracks at the long seam weld in the cylinder and in the dome petal welds. The J-integral obtained at the vessel level is compared with the value of J_cut-off, and the integrity of the vessel weld in the presence of the surface crack is firmed up. The advantage of this methodology is that if SCT test data for any metal are available, the critical crack size in hardware fabricated using that material can be predicted to a better level of accuracy.
Keywords: FAD, J-integral, fracture, surface crack
Procedia PDF Downloads 187
3136 Urban Growth and Its Impact on Natural Environment: A Geospatial Analysis of North Part of the UAE
Authors: Mohamed Bualhamam
Abstract:
Due to the complex nature of the tourism resources of the northern part of the United Arab Emirates (UAE), the potential of Geographical Information Systems (GIS) and Remote Sensing (RS) in resolving these issues was used. The study was an attempt to use existing GIS data layers to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth and to give some specific recommendations to protect the area. By identifying sensitive natural environment and archaeological heritage resources, public agencies and citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas. The paper concludes that applications of GIS and RS in the study of urban growth impact on tourism resources are a strong and effective tool that can aid in tourism planning and decision-making. The study area is one of the fastest growing regions in the country. The increase in population across the region, as well as the rapid growth of towns, has increased the threat to natural resources and archaeological sites. Satellite remote sensing data have proven useful in assessing natural resources and in monitoring the changes. The study used GIS and RS to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth. The results of the GIS analyses show that the northern part of the UAE has a variety of tourism resources, which can be used for future tourism development. Rapid urban development in the form of small towns and different economic activities is appearing in different places in the study area. Urban development has extended beyond the old towns and has negatively affected sensitive tourism resources in some areas. The tourism resources of the northern part of the UAE are highly complex and thus require tools that aid effective decision-making to come to terms with the competing economic, social, and environmental demands of sustainable development. The UAE government should prepare tourism databases and a GIS system so that planners can access archaeological heritage information as part of development planning processes. Applications of GIS in urban planning, tourism, and recreation planning illustrate that GIS is a strong and effective tool that can aid in tourism planning and decision-making. The power of GIS lies not only in the ability to visualize spatial relationships but also beyond the space, in a holistic view of the world with its many interconnected components and complex relationships. The worst of the damage could have been avoided by recognizing suitable limits, and adhering to some simple environmental guidelines and standards will allow tourism to be developed successfully in a sustainable manner. By identifying sensitive natural environment and archaeological heritage resources of the northern part of the UAE, public agencies and private citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas.
Keywords: GIS, natural environment, UAE, urban growth
Procedia PDF Downloads 262
3135 The Adoption of Leagility in Healthcare Services
Authors: Ana L. Martins, Luis Orfão
Abstract:
Healthcare systems have been subject to various research efforts aiming at process improvement under a lean approach. Another perspective, agility, has also been used, though on a smaller scale, in order to analyse the ability of different hospital services to adapt to demand uncertainties. Both perspectives have a common denominator: the improvement of the effectiveness and efficiency of the services in a healthcare setting. Mixing the two approaches allows, on the one hand, streamlining the processes and, on the other hand, providing the flexibility required to deal with demand uncertainty in terms of both volume and variety. The present research aims to analyse the impacts of combining both perspectives on the effectiveness and efficiency of a hospital service. The adopted methodology is based on a case study approach applied to the process of the ambulatory surgery service of Hospital de Lamego. Data were collected from direct observations, formal interviews, and informal conversations. The analyzed process was selected according to three criteria: relevance of the process to the hospital, presence of human resources, and presence of waste. The customer of the process was identified, as well as their perception of value. The process was mapped using a flow chart, from a process modeling perspective, as well as through the use of Value Stream Mapping (VSM) and Process Activity Mapping. The Spaghetti Diagram was also used to assess flow intensity. The use of the lean tools enabled the identification of three main types of waste: movement, resource inefficiencies, and process inefficiencies. From the use of the lean tools, improvement suggestions were produced. The results point out that leagility cannot be applied to the process as a whole, but the application of lean and agility in specific areas of the process would bring benefits in both efficiency and effectiveness and contribute to value creation if improvements are introduced in the hospital's human resources and facilities management.
Keywords: case study, healthcare systems, leagility, lean management
Procedia PDF Downloads 200
3134 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation
Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick
Abstract:
Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent a pasture-based beef cattle finishing system in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle, and 10% of rearers and finishers in NZ. The cattle agents consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truck load) on average had a live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand was higher than the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder were sent before their third winter. The model finished a higher number of bulls than heifers and steers, although it was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey-crossbred cattle dominated the dairy-origin beef finishing system. Jersey cattle account for less than 5% of total processed beef cattle. Further studies to include retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers
Procedia PDF Downloads 99
3133 From Pink to Ink: Understanding the Decision-Making Process of Post-mastectomy Women Who Have Covered Their Scars with Decorative Tattoos
Authors: Fernanda Rodriguez
Abstract:
Breast cancer is pervasive among women, and an increasing number of women are opting for a mastectomy: a medical operation in which one or both breasts are removed with the intention of treating or averting breast cancer. However, there is an emerging population of cancer survivors in European nations who, rather than attempting to reconstruct their breasts to resemble 'normal' breasts as much as possible, have turned to dressing their scars with decorative tattoos. At a practical level, this study hopes to improve the support systems of these women by possibly providing professionals in the medical field, tattoo artists, and family members of cancer survivors with a deeper understanding of their motivations and decision-making processes for choosing an alternative restorative route - such as decorative tattoos - after their mastectomy. At an intellectual level, however, this study aims to narrow a gap in the academic field concerning the relationship between mastectomies and alternative methods of healing, such as decorative tattoos, as well as to broaden the understanding regarding meaning-making and the 'normal' feminine body. Thus, by means of semi-structured interviews and a phenomenological standpoint, this research set itself the goal of understanding why women who have undergone a mastectomy choose to dress their scars with decorative tattoos instead of attempting to regain 'normalcy' through breast reconstruction or 3D areola tattoos. The results obtained from the interviews with fifteen women showed that disillusionment with one or the other of the breast restoration techniques had led these women to find an alternative form of healing that allows them not only to close a painful chapter of their life but also to regain control over their bodies after a period of time in which agency was taken away from them. Decorative post-mastectomy tattoos allow these women to grant their bodies new meanings and produce their own interpretation of their feminine body and identity.
Keywords: alternative femininity, decorative mastectomy tattoos, gender embodiment, social stigmatization
Procedia PDF Downloads 120
3132 Application of Environmental Justice Concept in Urban Planning, The Peri-Urban Environment of Tehran as the Case Study
Authors: Zahra Khodaee
Abstract:
The Environmental Justice (EJ) concept consists of the multifaceted movements, community struggles, and discourses in contemporary societies that seek to reduce environmental risks, increase environmental protections, and generally reduce the environmental inequalities suffered by minority and poor communities; it is a term that incorporates 'environmental racism' and 'environmental classism' and captures the idea that different racial and socioeconomic groups experience differential access to environmental quality. This article explores environmental justice as an urban phenomenon in urban planning and applies it to the peri-urban environment of a metropolis. Tehran's peri-urban environments, which are the result of the meeting of the city, village, and nature systems, or the «city-village junction», have gradually faced effects such as accelerated environmental decline, changes without a land-use plan, and severe service deficiencies. These problems are instances of environmental injustice, which require planners to address them and to apply appropriate strategies and policies by looking for solutions and resorting to theories, techniques, and methods related to environmental justice. In order to achieve this goal, the study tries to define environmental justice through the notion of justice and to determine environmental justice indices to analyse environmental injustice in the case study. Then, an effort is made to introduce some criteria to select the case study at both macro and micro levels. Qiyamdasht town, as a peri-urban environment of the Tehran metropolis, is chosen and examined to show the existence of environmental injustice through questionnaire analysis and SPSS software. Finally, the AIDA technique is used to design a strategic plan and reduce environmental injustice in the case study by introducing the better scenario to be used in policy- and decision-making areas.
Keywords: environmental justice, metropolis of Tehran, Qiyamdasht peri-urban settlement, analysis of interconnected decision areas (AIDA)
Procedia PDF Downloads 491
3131 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial neural networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is presently still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These enable the interpretation of the obtained prediction errors, the drawing of conclusions about the state of the structure, and thus support decision-making regarding its maintenance.
Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
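A minimal sketch of this model-free idea is given below, assuming a single acceleration channel: a neural network is trained on data from a reference (healthy) period, and a shift in its prediction-error distribution is tested statistically. The signals, network size, and threshold are invented for illustration and are not the Lidingö Bridge data.

```python
# Sketch: prediction-error-based damage indication with an autoregressive
# neural network and a hypothesis test on the error populations (toy data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.stats import ttest_ind

def make_windows(signal, lag=10):
    X = np.array([signal[i:i + lag] for i in range(len(signal) - lag)])
    return X, signal[lag:]

rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 60, 3000)) + 0.05 * rng.normal(size=3000)
current = np.sin(np.linspace(0, 60, 3000)) + 0.15 * rng.normal(size=3000)

X_ref, y_ref = make_windows(reference)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X_ref, y_ref)

X_now, y_now = make_windows(current)
reference_err = np.abs(model.predict(X_ref) - y_ref)
current_err = np.abs(model.predict(X_now) - y_now)

# A significant shift in prediction errors suggests a change in structural
# behaviour that merits closer inspection.
t_stat, p_value = ttest_ind(reference_err, current_err, equal_var=False)
print(f"p-value = {p_value:.3g}; flag for inspection: {p_value < 0.01}")
```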
Procedia PDF Downloads 209
3130 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the ResNet50 pre-trained model. CNNs are well-suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
Keywords: CNN, random forest, decision tree, machine learning, deep learning
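As a rough sketch of the transfer-learning branch described above (a ResNet50 backbone pre-trained on ImageNet with a small classification head), the snippet below shows one possible Keras setup; the number of classes, image size, and dataset locations are hypothetical, not the authors' configuration.

```python
# Sketch: ResNet50 transfer learning for chest X-ray classification.
import tensorflow as tf

NUM_CLASSES = 3  # e.g. normal / pneumonia / other (illustrative)

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from labelled
# X-ray images, e.g. via tf.keras.utils.image_dataset_from_directory(...).
# model.fit(train_ds, validation_data=val_ds, epochs=10)
model.summary()
```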
Procedia PDF Downloads 73
3129 Simultaneous Targeting of MYD88 and Nur77 as an Effective Approach for the Treatment of Inflammatory Diseases
Authors: Uzma Saqib, Mirza S. Baig
Abstract:
Myeloid differentiation primary response protein 88 (MYD88) has long been considered a central player in the inflammatory pathway. Recent studies clearly suggest that it is an important therapeutic target in inflammation. On the other hand, a recent study on the interaction between the orphan nuclear receptor (Nur77) and p38α, leading to increased lipopolysaccharide-induced hyperinflammatory response, suggests this binary complex as a therapeutic target. In this study, we have designed inhibitors that can inhibit both MYD88 and Nur77 at the same time. Since both MYD88 and Nur77 are an integral part of the pathways involving lipopolysaccharide-induced activation of NF-κB-mediated inflammation, we tried to target both proteins with the same library in order to retrieve compounds having dual inhibitory properties. To perform this, we developed a homodimeric model of MYD88 and, along with the crystal structure of Nur77, screened a virtual library of compounds from the traditional Chinese medicine database containing ~61,000 compounds. We analyzed the resulting hits for their efficacy for dual binding and probed them for developing a common pharmacophore model that could be used as a prototype to screen compound libraries as well as to guide combinatorial library design to search for ideal dual-target inhibitors. Thus, our study explores the identification of novel leads having dual inhibiting effects due to binding to both MYD88 and Nur77 targets.
Keywords: drug design, Nur77, MYD88, inflammation
Procedia PDF Downloads 305
3128 Studies on Interaction between Anionic Polymer Sodium Carboxymethylcellulose with Cationic Gemini Surfactants
Authors: M. Kamil, Rahber Husain Khan
Abstract:
In the present study, the interaction of the anionic polymer sodium carboxymethylcellulose (NaCMC) with the cationic gemini surfactants 2,2[(oxybis(ethane-1,2-diyl))bis(oxy)]bis(N-hexadecyl1-N,N-[di(E2)/tri(E3)]methyl1-2-oxoethanaminium)chloride (16-E2-16 and 16-E3-16) and the conventional surfactant (CTAC) in aqueous solutions has been studied by surface tension measurements of binary mixtures (0.0-0.5 wt% NaCMC and 1 mM gemini surfactant/10 mM CTAC solution). Surface tension measurements were used to determine the critical aggregation concentration (CAC) and the critical micelle concentration (CMC). The maximum surface excess concentration (Γmax) at the air-water interface was evaluated by the Gibbs adsorption equation. The minimum area per surfactant molecule was evaluated, which indicates the surfactant-polymer interaction in a mixed system. The effect of changing surfactant chain length on the CAC and CMC values of the mixed polymer-surfactant systems was examined. From the results, it was found that the gemini surfactants interact strongly with NaCMC as compared to their corresponding monomeric counterpart CTAC. In these systems, electrostatic interactions predominate. The lowering of surface tension with an increase in the concentration of surfactant is almost 10-15 times greater in the case of the gemini surfactants. The measurements indicated that the interaction between NaCMC and CTAC resulted in complex formation. The volume of coacervate increases with an increase in CTAC concentration; however, above 0.1 wt% concentration the coacervate vanishes.
Keywords: anionic polymer, gemini surfactants, tensiometer, CMC, interaction
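For reference, the quantities mentioned are commonly obtained from surface tension (γ) versus concentration (C) data via the Gibbs adsorption equation, with the prefactor n depending on the surfactant type and the presence of added electrolyte; this is the standard textbook form, not necessarily the exact variant used by the authors.

\[
\Gamma_{\max} = -\frac{1}{2.303\, n\, R\, T}\left(\frac{\partial \gamma}{\partial \log C}\right)_{T},
\qquad
A_{\min} = \frac{1}{N_{A}\,\Gamma_{\max}}
\]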
Procedia PDF Downloads 89
3127 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process; it can hold information about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions' decision-makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models, which involves understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The results of this paper's experiments show that the models built with either feature performed very well on the first task, but on the second task, models that used the part-of-speech feature underperformed in comparison with models that used unigrams and bigrams.
Keywords: assessment, part of speech, sentiment analysis, student feedback
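A minimal sketch of the PoS-feature idea is shown below: each comment is replaced by its sequence of part-of-speech tags, vectorised, and passed to a classifier. The toy comments, labels, and tagger resources are illustrative only (NLTK resource names can differ between versions), and this is not the authors' pipeline.

```python
# Sketch: classify short feedback texts using part-of-speech (PoS) tag features.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Older NLTK resource names; newer versions may need "punkt_tab" /
# "averaged_perceptron_tagger_eng".
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def to_pos_string(text):
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)   # e.g. "DT NN VBD RB RB JJ ..."

comments = ["The exam was far too long for the time given",
            "Great lab facilities and helpful demonstrators"]
labels = ["assessment", "non-assessment"]

pipeline = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
pipeline.fit([to_pos_string(c) for c in comments], labels)
print(pipeline.predict([to_pos_string("The coursework deadline was unclear")]))
```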
Procedia PDF Downloads 142
3126 Diagnostics and Explanation of the Current Status of the 40-Year Railway Viaduct
Authors: Jakub Zembrzuski, Bartosz Sobczyk, Mikołaj Miśkiewicz
Abstract:
Besides designing new structures, engineers all over the world must face another problem – maintenance, repairs, and assessment of the technical condition of existing bridges. To solve more complex issues, it is necessary to be familiar with the theory of the finite element method and to have access to software that provides tools enabling the creation of sometimes significantly advanced numerical models. The paper includes a brief assessment of the technical condition, a description of the in situ non-destructive testing carried out, and the FEM models created for global and local analysis. In situ testing was performed using strain gauges and displacement sensors. Numerical models were created using various software packages and numerical modeling techniques. Particularly noteworthy is the method of modeling the riveted joints of the crossbeam of the viaduct. It is a simplified method that consists of the use of only basic numerical tools such as beam and shell finite elements, constraints, and simplified boundary conditions (fixed support and symmetry). The results of the numerical analyses are presented and discussed. It is clearly explained why the structure did not fail, despite the fact that the weld of the deck plate completely failed. A further research problem that was solved was to determine the cause of the rapid increase in values on the stress diagram in the cross-section of the transverse section. The problems were solved using only the aforementioned simplified method of modeling riveted joints, which demonstrates that it is possible to solve such problems without access to sophisticated software that enables advanced nonlinear analysis. Moreover, the obtained results are of great importance in the field of assessing the operation of bridge structures with an orthotropic plate.
Keywords: bridge, diagnostics, FEM simulations, failure, NDT, in situ testing
Procedia PDF Downloads 73
3125 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (Quantitative and Qualitative Analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
Procedia PDF Downloads 18
3124 Easy Way of Optimal Process-Storage Network Design
Authors: Gyeongbeom Yi
Abstract:
The purpose of this study is to introduce the analytic solution for determining the optimal capacity (lot size) of a multiproduct, multistage production and inventory system to meet the finished product demand. Reasonable decision-making about the capacity of processes and storage units is an important subject for industry. The usual industrial solution for this subject is to use the classical economic lot sizing method, the EOQ/EPQ (Economic Order Quantity/Economic Production Quantity) model, incorporated with practical experience. However, the unrealistic material flow assumption of the EOQ/EPQ model is not suitable for chemical plant design with highly interlinked processes and storage units. This study overcomes the limitation of the classical lot sizing method, which was developed on the basis of the single-product, single-stage assumption. The superstructure of the plant considered consists of a network of serially and/or parallelly interlinked processes and storage units. The processes involve chemical reactions with multiple feedstock materials and multiple products as well as mixing, splitting, or transportation of materials. The objective function for optimization is minimizing the total cost composed of setup and inventory holding costs as well as the capital costs of constructing processes and storage units. A novel production and inventory analysis method, the PSW (Periodic Square Wave) model, is applied. The advantage of the PSW model comes from the fact that the model provides a set of simple analytic solutions in spite of a realistic description of the material flow between processes and storage units. The resulting simple analytic solution can greatly enhance proper and quick investment decisions for the plant design and operation problems confronted in diverse economic situations.
Keywords: analytic solution, optimal design, process-storage network
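For reference, the classical lot-sizing formulas mentioned above are, in their textbook form (with annual demand D, setup or ordering cost S, holding cost H per unit per year, and demand rate d versus production rate p for the EPQ case; the PSW model's own closed-form results are not stated in the abstract and are not reproduced here):

\[
Q^{*}_{EOQ} = \sqrt{\frac{2DS}{H}},
\qquad
Q^{*}_{EPQ} = \sqrt{\frac{2DS}{H\left(1 - d/p\right)}}
\]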
Procedia PDF Downloads 331
3123 Continuum of Maternal Care in Non Empowered Action Group States of India: Evidence from District Level Household Survey-IV
Authors: Rasikha Ramanand, Priyanka Dixit
Abstract:
Background: The continuum of maternal care, which includes antenatal care, delivery care, and postnatal care, aids in averting maternal deaths. The objective of this paper is to identify the association between previous experience of child death and the Continuum of Care (CoC) for the most recent child. Further, the study aimed at understanding where in the continuum the drop-out rate was high. Methods: The study was based on the nationwide District Level Household and Facility Survey (DLHS-4) conducted during 2012-13, which provides information on antenatal care, delivery care, the percentage of women who received JSY benefits, the percentage of women who had any pregnancy or delivery, the place of delivery, etc. The sample included women selected from the non-EAG states who had delivered at least two children. The data were analyzed using SPSS 20. Binary logistic regression was applied to the data, in which the Continuum of Care (CoC) was the dependent variable while the independent variables were entered as covariates. Results: A major finding of the study was that the drop-out rates were high in the antenatal-to-delivery care period. Also, it was found that a large proportion of women did not receive any of the services along the continuum. Conclusions: This study has clearly established the relationship between a previous history of child loss and the continuum of maternal care.
Keywords: antenatal care, continuum of care, child loss, delivery care, India, maternal health care, postnatal care
Procedia PDF Downloads 403
3122 Locating Potential Site for Biomass Power Plant Development in Central Luzon Philippines Using GIS-Based Suitability Analysis
Authors: Bryan M. Baltazar, Marjorie V. Remolador, Klathea H. Sevilla, Imee Saladaga, Loureal Camille Inocencio, Ma. Rosario Concepcion O. Ang
Abstract:
Biomass energy is a traditional source of sustainable energy that has been widely used in developing countries. The Philippines, specifically Central Luzon, has an abundant source of biomass. Hence, it could supply abundant agricultural residues (rice husks) as feedstock for a biomass power plant. However, locating a potential site for biomass development is a complex process which involves different factors (physical, environmental, socio-economic, and risk-related) that are usually diverse and conflicting. Moreover, biomass distribution is highly dispersed geographically. Thus, this study develops an integrated method combining Geographical Information Systems (GIS) with methods for energy planning, Multi-Criteria Decision Analysis (MCDA) and the Analytic Hierarchy Process (AHP), for locating a suitable site for biomass power plant development in Central Luzon, Philippines, by considering different constraints and factors. Using MCDA, a three-level hierarchy of factors and constraints was produced, with the corresponding weights determined by experts using AHP. Applying the results, a suitability map for biomass power plant development in Central Luzon was generated. It showed that the central part of the region has the highest potential for biomass power plant development. This is because of the characteristics of the area, such as the abundance of rice fields, generally flat land surfaces, accessible roads and grid networks, and low risks of flooding and landslides. This study recommends the use of higher-accuracy resource maps and further analysis in selecting the optimum site for biomass power plant development that would account for the cost and transportation of biomass residues.
Keywords: analytic hierarchy process, biomass energy, GIS, multi-criteria decision analysis, site suitability analysis
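The AHP weighting step mentioned above can be illustrated with a small sketch: expert pairwise comparisons are arranged in a matrix, criterion weights are taken from its principal eigenvector, and a consistency ratio is checked. The 3x3 comparison matrix below is invented, not the experts' judgments from the study.

```python
# Sketch: derive AHP criterion weights from a pairwise comparison matrix
# and compute Saaty's consistency ratio (toy 3x3 example).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # e.g. physical vs environmental vs socio-economic
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()          # normalised criterion weights

n = A.shape[0]
lambda_max = eigenvalues.real[k]
ci = (lambda_max - n) / (n - 1)   # consistency index
cr = ci / 0.58                    # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```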
Procedia PDF Downloads 427
3121 City-Wide Simulation on the Effects of Optimal Appliance Scheduling in a Time-of-Use Residential Environment
Authors: Rudolph Carl Barrientos, Juwaln Diego Descallar, Rainer James Palmiano
Abstract:
Household Appliance Scheduling Systems (HASS) coupled with a Time-of-Use (TOU) pricing scheme, a form of Demand Side Management (DSM), are not widely utilized in the Philippines' residential electricity sector. This paper's goal is to encourage distribution utilities (DUs) to adopt HASS and TOU by analyzing the effect of household schedulers on the electricity price and load profile in a residential environment. To establish this, a city based on a conducted survey is generated using Monte Carlo Analysis (MCA). Then, a Binary Particle Swarm Optimization (BPSO) algorithm-based HASS is developed, considering user satisfaction, electricity budget, appliance prioritization, energy storage systems, solar power, and electric vehicles. The simulations were assessed under varying levels of user compliance. Results showed that the average electricity cost, peak demand, and peak-to-average ratio (PAR) of the city load profile were all reduced. Therefore, the deployment of the HASS and TOU pricing scheme is beneficial for both stakeholders.
Keywords: appliance scheduling, DSM, TOU, BPSO, city-wide simulation, electric vehicle, appliance prioritization, energy storage system, solar power
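A minimal sketch of the BPSO idea in this setting is shown below: each particle is a binary vector indicating in which hours a single appliance runs, and the swarm searches for the schedule with the lowest TOU cost. The toy cost function, prices, and parameters are illustrative and are not the paper's formulation (which also covers satisfaction, budgets, storage, solar power, and electric vehicles).

```python
# Sketch: Binary Particle Swarm Optimization (BPSO) for a one-appliance
# hourly schedule under a time-of-use (TOU) tariff. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
HOURS, APPLIANCE_KW = 24, 1.5
tou_price = np.where(np.arange(HOURS) >= 18, 9.0, 5.0)  # peak price after 18:00

def cost(schedule):
    penalty = 100.0 * max(0, 4 - int(schedule.sum()))   # require >= 4 run-hours
    return float(np.sum(schedule * APPLIANCE_KW * tou_price) + penalty)

n_particles, n_iter, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.integers(0, 2, (n_particles, HOURS))            # binary positions
v = rng.normal(0, 1, (n_particles, HOURS))              # velocities
pbest = x.copy()
pbest_cost = np.array([cost(p) for p in pbest])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1 / (1 + np.exp(-v))).astype(int)  # sigmoid rule
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best cost:", cost(gbest), "schedule:", gbest)
```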
Procedia PDF Downloads 99