Search results for: ideal decision
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4798


3388 Application of VE in Healthcare Services: An Overview of Healthcare Facility

Authors: Safeer Ahmad, Pratheek Sudhakran, M. Arif Kamal, Tarique Anwar

Abstract:

In healthcare facility design, efficient MEP (mechanical, electrical, and plumbing) services are crucial because the built environment affects not only patients and their families but also healthcare staff and their outcomes. This paper covers the basics of value engineering (VE) and the phases that can be applied at the MEP design stage to optimize a healthcare facility; VE can also reduce product cost by eliminating the unnecessary costs associated with healthcare services. The paper explores healthcare facility services and their VE job plan. For successful application of the VE technique, a workshop with end users, the design team, and associated experts is carried out using concepts, tools, methods, and mechanisms developed to select what is genuinely appropriate among the many VE processes and tools that have long proven their ability to enhance value. The approach follows the concept of total quality management while achieving the most efficient resource allocation to satisfy the key functions and requirements of the project without sacrificing the targeted level of service for any design metric. A detailed study and analysis are presented, various tools are used to analyze the product at different phases, and the results obtained after implementing the techniques are discussed.

Keywords: value engineering, healthcare facility, design, services

Procedia PDF Downloads 190
3387 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping

Authors: Andre Slonopas, Zona Kostic, Warren Thompson

Abstract:

Obfuscation is one of the most useful tools for preventing network compromise. Previous research focused on obfuscating the network communications between external-facing edge devices. This work proposes the use of two edge devices, external- and internal-facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. The methodology does not require additional IP addresses or resources to implement. Statistical analyses demonstrate that the hopping surface must contain at least 1e3 IP addresses with a broad standard deviation to minimize the chance that monitored and communication IPs coincide. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces will take years to gather. The probability of dropped packets is controlled via memory buffers and the hop frequency, and can be reduced to levels acceptable for video streaming. This methodology provides a strong layer of security well suited to information systems and supervisory control and data acquisition (SCADA) systems.
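The hopping mechanism described above can be sketched as a shared-secret derivation in which both edge devices independently compute the current private address for each time epoch, so the address never has to be exchanged on the wire. This is an illustrative sketch only, not the authors' implementation; the hash construction, epoch scheme, and 10.0.0.0/8 layout are assumptions:

```python
import hashlib

def hop_address(shared_secret: bytes, epoch: int, surface_size: int = 1000) -> str:
    """Derive the private IPv4 address for a given time epoch.

    Both edge devices hold the same secret and clock, so they agree on
    the current address without ever exchanging it on the wire.
    """
    digest = hashlib.sha256(shared_secret + epoch.to_bytes(8, "big")).digest()
    index = int.from_bytes(digest[:4], "big") % surface_size
    # Map the index into the 10.0.0.0/8 private space (hypothetical layout).
    return f"10.0.{index // 256}.{index % 256}"

# Both sides derive the same address for the same epoch:
assert hop_address(b"secret", 42) == hop_address(b"secret", 42)
```

A larger `surface_size` with well-spread indices corresponds to the paper's requirement of at least 1e3 addresses in the hopping surface.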

Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory

Procedia PDF Downloads 182
3386 Parameters Influencing Human Machine Interaction in Hospitals

Authors: Hind Bouami

Abstract:

Handling the complexity of life-critical systems requires appropriate technology and the right human-agent functions, such as knowledge, experience, and competence in preventing and solving problems. Human agents are involved in managing and controlling the performance of human-machine systems. Documenting human agents' situation awareness is crucial to support the decision-making of human-machine designers. Knowledge about risks and about the critical parameters and factors that can impact or threaten an automation system's performance should be collected using both preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of feedback from automated organizations. Analyzing feedback from automated hospital pharmacies helps to identify and control the critical parameters influencing human-machine interaction in order to enhance system performance and security. Our human-machine system evaluation approach was deployed in the pharmacy of the Macon hospital center, which has been equipped with automated drug dispensing systems since 2015. The automation specifications relate to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance at the Macon hospital center showed that the performance of the automated activity depends both on the performance of the chosen automated solution and on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The chosen automated solution is involved in 28.38% of the automation specifications' performance at the Macon hospital center; the remaining systemic parameters involved in this performance still need to be controlled.

Keywords: life-critical systems, situation awareness, human-machine interaction, decision-making

Procedia PDF Downloads 179
3385 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular means of transportation, and the taxi industry is a major business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. Taxi drivers catch passengers mainly in three ways. The first is "cruising", in which the driver picks up passengers while driving along a road. The second is "waiting" near places with high demand for taxis, such as hospital entrances and train stations. The third is "dispatching", in which the driver is allocated a passenger based on contact from the taxi company. Cruising, above all, requires experience and intuition to find passengers, and deciding the destination for cruising is difficult. A strong recommendation system for cruising taxis would support new drivers in finding passengers and could help address the declining number of drivers in the industry. In this research, we propose a method of recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models, which embed high-dimensional data into a low-dimensional space, are widely used in data analysis to represent semantic relationships between data points clearly. Taxi drivers have favorite courses based on their experience, and these courses differ from driver to driver. We assume that a cruising course carries meaning, such as a course for finding business passengers (circling the business district or heading to main stations) or a course for finding tourist passengers (circling sightseeing spots or large hotels), and we extract the meaning of the destinations. We analyze taxi cruising history data with an embedding model and propose a recommendation system for finding passengers. Finally, we demonstrate destination recommendation for cruising taxi drivers through an analysis of real-world data using the proposed method.
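As a toy illustration of the underlying idea that zones visited on the same cruising course share meaning, the sketch below recommends a destination from simple course co-occurrence counts. This is not the authors' embedding model (which learns low-dimensional vectors); the zone names and course data are invented:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(courses):
    """Count how often two zones appear in the same cruising course."""
    co = Counter()
    for course in courses:
        for a, b in combinations(set(course), 2):
            co[frozenset((a, b))] += 1
    return co

def recommend(co, current_zone, zones):
    """Suggest the zone most often cruised together with the current one."""
    scores = {z: co[frozenset((current_zone, z))] for z in zones if z != current_zone}
    return max(scores, key=scores.get)

# Invented cruising histories: business-area courses vs. tourist courses.
courses = [
    ["station", "office_district", "hotel_row"],
    ["station", "office_district"],
    ["sightseeing", "hotel_row"],
]
zones = {"station", "office_district", "hotel_row", "sightseeing"}
best = recommend(build_cooccurrence(courses), "station", zones)  # "office_district"
```

An embedding model generalizes this by placing zones with similar co-occurrence patterns near each other in a vector space, so recommendations also work for zone pairs never observed together.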

Keywords: taxi industry, decision making, recommendation system, embedding model

Procedia PDF Downloads 137
3384 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods such as XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the universal tool for data classification? Current solutions all consist in repositioning the variables in a 2x2 matrix according to their correlation proximity, yielding an image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces one image per variable, each analyzable by simple CNNs. Material and method: The ROP (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to perform an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a "Random Forest of Perfect Trees" (RFPT), which departs from Breiman's design by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on two super-parameters used in the Neurops. By varying these two super-parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine the 10 NICs with the functions AND, OR, and XOR, giving more than 100,000 combinations in total. For each variable we thus obtain an image of at least 1166x1167 pixels. The intensity of each pixel is proportional to the probability of the associated NIC, and its color depends on that NIC. This image contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image makes it possible to merge image and tabular information in the same format, which opens up great perspectives in the analysis of metadata.
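The core mapping from a probability-valued criterion to a grey-level image can be sketched as follows. This is a toy stand-in: the real NICs, Neurop super-parameters, logical combinations, and image dimensions are specific to DeepNIC, and `toy_nic` is an invented criterion used only to show the mechanics:

```python
def nic_image(prob_fn, grid_size=4):
    """Build a grey-level matrix from a probability-valued criterion
    evaluated over a grid of two hyper-parameters (a stand-in for the
    two Neurop super-parameters described above)."""
    image = []
    for i in range(grid_size):
        row = []
        for j in range(grid_size):
            p = prob_fn(i / (grid_size - 1), j / (grid_size - 1))
            row.append(round(255 * p))  # probability -> grey level 0..255
        image.append(row)
    return image

# Hypothetical criterion: probability is high when the two parameters agree.
def toy_nic(a, b):
    return 1.0 - abs(a - b)

img = nic_image(toy_nic)  # 4x4 matrix of grey levels
```

Stacking many such matrices (one per criterion and logical combination) is what produces a large per-variable image suitable for a CNN.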

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 118
3383 The Effects of Transformational Leadership on Process Innovation through Knowledge Sharing

Authors: Sawsan J. Al-Husseini, Talib A. Dosa

Abstract:

Transformational leadership has been identified as the most important factor affecting innovation and knowledge sharing; it leads to increased goal-directed behavior among followers and thus to enhanced performance and innovation for the organization. However, there is a lack of models linking transformational leadership, knowledge sharing, and process innovation within higher education (HE) institutions in developing countries generally, and in Iraq in particular. This research examines the mediating role of knowledge sharing in the relationship between transformational leadership and process innovation. A quantitative approach was taken, and 254 usable questionnaires were collected from public HE institutions in Iraq. Structural equation modelling with AMOS 22 was used to analyze the causal relationships among the factors. The research found that knowledge sharing plays a pivotal role in the relationship between transformational leadership and process innovation, and that transformational leadership is well suited to an educational context, promoting knowledge-sharing activities and influencing process innovation in public HE in Iraq. The research develops guidelines for researchers as well as leaders and provides evidence supporting the use of transformational leadership to increase process innovation within the HE environment in developing countries, particularly Iraq.

Keywords: transformational leadership, knowledge sharing, process innovation, structural equation modelling, developing countries

Procedia PDF Downloads 331
3382 The Development of an Agent-Based Model to Support a Science-Based Evacuation and Shelter-in-Place Planning Process within the United States

Authors: Kyle Burke Pfeiffer, Carmella Burdi, Karen Marsh

Abstract:

The evacuation and shelter-in-place planning process employed by most jurisdictions within the United States is not informed by a scientifically-derived framework that is inclusive of the behavioral and policy-related indicators of public compliance with evacuation orders. While a significant body of work exists to define these indicators, the research findings have not been well-integrated nor translated into usable planning factors for public safety officials. Additionally, refinement of the planning factors alone is insufficient to support science-based evacuation planning, as the behavioral elements of evacuees—even with consideration of policy-related indicators—must be examined in the context of specific regional transportation and shelter networks. To address this problem, the Federal Emergency Management Agency and Argonne National Laboratory developed an agent-based model to support regional analysis of zone-based evacuation in southeastern Georgia. In particular, this model allows public safety officials to analyze the consequences that a range of hazards may have upon a community, assess evacuation and shelter-in-place decisions in the context of specified evacuation and response plans, and predict outcomes based on community compliance with orders and the capacity of the regional (including extra-jurisdictional) transportation and shelter networks. The intention is to use this model to aid evacuation planning and decision-making. Applications for the model include developing a science-driven risk communication strategy and, in the case of evacuation, achieving the shortest possible travel distances and clearance times for evacuees within the regional boundary conditions.
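The interaction between compliance behavior and network capacity at the heart of such a model can be sketched minimally as follows. The compliance rate, capacity figure, and one-decision-per-agent rule are illustrative assumptions, far simpler than the FEMA/Argonne agent-based model:

```python
import random

def simulate_evacuation(n_agents=1000, compliance=0.8, capacity_per_step=50, seed=7):
    """Toy agent sketch: each agent independently decides whether to comply
    with the evacuation order; the road network then clears a fixed number
    of vehicles per time step.  Returns (agents evacuating, clearance steps).
    """
    rng = random.Random(seed)
    evacuating = sum(1 for _ in range(n_agents) if rng.random() < compliance)
    steps = -(-evacuating // capacity_per_step)  # ceiling division
    return evacuating, steps

evacuated, clearance = simulate_evacuation()
```

Even this toy version shows the planning trade-off the abstract describes: clearance time is driven jointly by behavioral compliance and by transportation capacity, so neither can be analyzed in isolation.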

Keywords: agent-based modeling for evacuation, decision-support for evacuation planning, evacuation planning, human behavior in evacuation

Procedia PDF Downloads 228
3381 Challenges of Implementing Participatory Irrigation Management for Food Security in Semi Arid Areas of Tanzania

Authors: Pilly Joseph Kagosi

Abstract:

The study aims to assess the challenges observed during the implementation of the participatory irrigation management (PIM) approach for food security in semi-arid areas of Tanzania. Data were collected through questionnaires, PRA tools, key informant discussions, focus group discussions (FGDs), participant observation, and literature review. Questionnaire data were analysed using SPSS, PRA data were analysed with the help of local communities during the PRA exercise, and data from the other methods were analysed using content analysis. The study revealed that the PIM approach contributes to improved food security at the household level, because involving communities in water management activities and decision-making enhanced the availability of water for irrigation and increased crop production. However, challenges were observed during implementation, including: minimal participation of beneficiaries in decision-making during the planning and design stages, meaning inadequate devolution of power among scheme owners; inadequate transparency about income and expenditure in Water Utilization Associations (WUAs); water conflicts among WUA members; conflicts between farmers and livestock keepers; conflicts between WUA leaders and village governments regarding training opportunities and status; WUA rules and regulations not being legally recognized by the national courts; and few farmers being involved in planting trees around water sources. It was realized, however, that some of these challenges were rectified by the farmers themselves, facilitated by government officials. The study recommends that the identified challenges be rectified so that farmers can realize the importance of the PIM approach, as has been realized in several Asian countries.

Keywords: challenges, participatory approach, irrigation management, food security, semi arid areas

Procedia PDF Downloads 322
3380 Maori Primary Industries Responses to Climate Change and Freshwater Policy Reforms in Aotearoa New Zealand

Authors: Tanira Kingi, Oscar Montes Oca, Reina Tamepo

Abstract:

The introduction of the Climate Change Response (Zero Carbon) Amendment Act (2019) and the National Policy Statement for Freshwater Management (2020) both contain underpinning statements that refer to the principles of the Treaty of Waitangi and cultural concepts of stewardship and environmental protection. Maori interests in New Zealand’s agricultural, forestry, fishing and horticultural sectors are significant. The organizations that manage these investments do so on behalf of extended family groups that hold inherited interests based on genealogical connections (whakapapa) to particular tribal units (iwi and hapu) and areas of land (whenua) and freshwater bodies (wai). This paper draws on the findings of current research programmes funded by the New Zealand Agricultural Greenhouse Gas Research Centre (NZAGRC) and the Our Land & Water National Science Challenge (OLW NSC) to understand the impact of cultural knowledge and imperatives on agricultural GHG and freshwater mitigation and land-use change decisions. In particular, the research outlines mitigation and land-use change scenario decision support frameworks that model changes in emissions profiles (reductions in biogenic methane, nitrous oxide and nutrient emissions to freshwater) of agricultural and forestry production systems along with impacts on key economic indicators and socio-cultural factors. The paper also assesses the effectiveness of newly introduced partnership arrangements between Maori groups/organizations and key government agencies on policy co-design and implementation, and in particular, decisions to adopt mitigation practices and to diversify land use.

Keywords: co-design and implementation of environmental policy, indigenous environmental knowledge, Māori land tenure and agribusiness, mitigation and land use change decision support frameworks

Procedia PDF Downloads 209
3379 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. The data cleaning and feature engineering methods performed in R and the different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) will be presented, and results and performance metrics discussed.
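The kind of lag-and-weather feature engineering described can be sketched as follows. This is a minimal Python stand-in for the R pipeline; the column names and the single 24-hour lag are assumptions (real pipelines typically add several lags, calendar features, and more weather variables):

```python
def make_features(loads, temps, lag=24):
    """Turn an hourly load series plus a temperature series into supervised
    rows: predict the load now from the load 24 hours earlier and the
    current temperature."""
    rows = []
    for t in range(lag, len(loads)):
        rows.append({"lag_24": loads[t - lag], "temp": temps[t], "y": loads[t]})
    return rows

# Two toy days of hourly data: rows pair each hour with the same hour yesterday.
rows = make_features(list(range(48)), [20] * 48)
```

Rows in this shape can be fed directly to a boosted-tree or quantile-forest regressor, which is where the paper's Azure ML experiments take over.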

Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 290
3378 The Role of Metaphor in Communication

Authors: Fleura Shkëmbi, Valbona Treska

Abstract:

In elementary school, we learn that a metaphor is a decorative linguistic device reserved for poets. We now know it is also a crucial tactic that individuals employ to understand the universe, from fundamental ideas like time and causation to the most pressing societal challenges of today. Metaphor is the use of language to refer to something other than what it was originally intended for, or what it "literally" means, in order to suggest a similarity or establish a connection between the two. According to a study on metaphor and its effect on decision-making, people do not identify metaphors as relevant to their decisions; instead, they cite more "substantive" (typically numerical) facts as the basis for their problem-solving decisions. Metaphors saturate our daily lives via language, cognition, and action. Researchers argue that our concepts shape our views of and interactions with others, and that concepts define our reality. Metaphor is thus a highly helpful tool both for describing our experiences to others and for forming notions for ourselves. In therapeutic contexts, metaphors appear to serve a twofold purpose. The cognitivist approach to metaphor regards it as one of the fundamental foundations of human communication. The benefits and disadvantages of using a metaphor differ depending on the target domain that the metaphor portrays. The challenge of creating messages and environments that shape customers' notions of abstract ideas in a variety of industries, including health, hospitality, romance, and money, has been studied for decades in marketing and consumer psychology. The aim of this study is to examine, through a systematic literature review, the role of metaphor in communication and in advertising. The study offers a selective analysis of this literature, concentrating on research on customer attitudes and product appraisal, and the analysis identifies potential research questions. With theoretical and applied implications for marketing, design, and persuasion, this study sheds light on how, when, and for whom metaphoric communications are powerful.

Keywords: metaphor, communication, advertising, cognition, action

Procedia PDF Downloads 94
3377 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario

Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos

Abstract:

Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market; to do so, they must maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that overall business performance is assessed from different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. Companies must also evaluate and prioritize the strategic projects to be implemented, ensuring that they are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposal incorporates the SAPEVO-M multicriteria method to indicate the degree of relevance between the different perspectives, so that the strategic objectives linked to these perspectives carry greater weight in the classification of structural projects. Additionally, the concept of the Impact & Probability Matrix (I&PM) is applied to ensure that strategic projects are evaluated according to their relevance and impact on the business. Structuring the business's strategic management in this way aligns and prioritizes the projects and actions related to strategic planning, directing resources towards the most relevant and impactful initiatives. The objective of this article is therefore to present a proposal integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and achieve coherence in defining strategic projects aligned with the business vision, providing a robust decision-support process.
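The combination of perspective weights with an impact and probability assessment can be sketched as a simple weighted scoring. This is a simplified stand-in for SAPEVO-M (which derives weights from pairwise ordinal comparisons) combined with the I&PM matrix; the perspectives, weights, and projects below are invented:

```python
def prioritize(projects, weights):
    """Rank projects by perspective-weighted score scaled by impact and
    probability (a toy stand-in for SAPEVO-M weighting plus the I&PM)."""
    def score(p):
        perspective_score = sum(weights[k] * p["scores"][k] for k in weights)
        return perspective_score * p["impact"] * p["probability"]
    return sorted(projects, key=score, reverse=True)

# Invented BSC perspective weights (as SAPEVO-M might produce) and projects.
weights = {"financial": 0.4, "customer": 0.3, "internal": 0.2, "learning": 0.1}
projects = [
    {"name": "ERP upgrade",
     "scores": {"financial": 3, "customer": 2, "internal": 5, "learning": 2},
     "impact": 0.9, "probability": 0.8},
    {"name": "Training program",
     "scores": {"financial": 1, "customer": 2, "internal": 2, "learning": 5},
     "impact": 0.5, "probability": 0.9},
]
ranked = prioritize(projects, weights)  # highest-priority project first
```

The point of the integration is visible even here: a project strong on heavily weighted perspectives and with high impact and probability rises to the top of the portfolio.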

Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method

Procedia PDF Downloads 75
3376 Gaualofa: Tsunami Impact and Samoan Grief Recovery

Authors: Byron Malaela Sotiata Seiuli

Abstract:

When a disaster strikes, the resultant impact and devastation force many people, particularly those directly affected, to re-examine core dimensions of life in a way that other life events do not. How people respond to, and try to give meaning to, their experiences following the ruptures of trauma remains vital to grief recovery. On 29 September 2009, an earthquake of magnitude 8.3 generated a galulolo (tsunami) wave that destroyed parts of American Samoa, Tonga, and Samoa (previously Western Samoa). Aside from the physical and natural devastation, many people lost their lives and their livelihoods. For the health professionals called upon to provide psychosocial support, this calamity provided an ideal setting to examine and explore how those directly impacted recovered. The experiences of a Samoan couple, Fia and Ola, are the key focus of this article, which situates their mourning patterns and recovery journey in the context of Samoan culture. Examining grief from this perspective creates a cultural space to extend indigenous understanding of the complexities of grieving and of the customary responses of Samoan people, like this couple, to disaster recovery.

Keywords: Fa'asamoa, galulolo, tsunami disaster, trauma and grief recovery, pacific psychology

Procedia PDF Downloads 198
3375 Multiple Version of Roman Domination in Graphs

Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera

Abstract:

The concept of Roman domination in graphs was introduced in 2004, initially inspired by and related to the defensive strategy of the Roman Empire. An undefended place is a city with no legions established on it, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, satisfying the condition that any 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is a variant of classic domination. The main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is, with minimum weight (the sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_{v∈V} f(v), and the Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we begin the study of a generalization of RDFs in which any undefended place should be defensible against a sudden attack by at least k legions; these legions can be deployed in the city itself or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) denotes the set of active neighbours (those with a positive label) of u, is called a [k]-multiple Roman dominating function ([k]-MRDF). The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted γ_[kR](G). We first prove that the [k]-multiple Roman domination decision problem is NP-complete, even when restricted to bipartite and chordal graphs, a question that had been resolved for other variants and that we wished to generalize. Given the difficulty of computing the exact value of the [k]-MRD number, even for particular families of graphs, we present several upper and lower bounds that allow it to be estimated with as much precision as possible. Finally, some graphs for which the exact value of this parameter is known are characterized.
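The basic RDF condition is easy to verify programmatically. A minimal checker for the classic (k = 1) case follows, using an invented three-vertex path as the example graph; computing γ_R itself is of course the hard optimization problem:

```python
def is_roman_dominating(adj, f):
    """Classic RDF check: every 0-labelled vertex needs a 2-labelled neighbour."""
    return all(f[u] != 0 or any(f[v] == 2 for v in adj[u]) for u in adj)

def weight(f):
    """Weight of a labelling: the sum of all vertex labels."""
    return sum(f.values())

# Path a-b-c: two legions on the middle vertex defend both endpoints.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
assert is_roman_dominating(adj, {"a": 0, "b": 2, "c": 0})       # weight 2
assert not is_roman_dominating(adj, {"a": 0, "b": 1, "c": 0})   # endpoints undefended
```

For the path, no valid labelling has weight below 2, so γ_R = 2 here; the [k]-MRDF condition generalizes this check by summing labels over the closed neighbourhood instead of requiring a single 2-neighbour.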

Keywords: multiple roman domination function, decision problem np-complete, bounds, exact values

Procedia PDF Downloads 104
3374 Design of Regular Communication Area for Infrared Electronic-Toll-Collection Systems

Authors: Wern-Yarng Shieh, Chao Qian, Bingnan Pei

Abstract:

A design of the communication area for infrared electronic-toll-collection systems is proposed that provides an extended communication interval in the vehicle traveling direction and a regular boundary between contiguous traffic lanes. By utilizing two typical low-cost commercial infrared LEDs with different half-intensity angles, Φ1/2 = 22° and 10°, the radiation pattern of the emitter is designed to properly adjust the spatial distribution of the signal power. The aforementioned purpose can be achieved with an LED array in a three-piece structure with appropriate mounting angles. With this emitter, the influence of the mounting parameters, including the mounting height and mounting angles of the on-board unit and road-side unit, on the system performance in terms of received signal strength and communication area is investigated. The results reveal that, for the emitter proposed in this paper, the ideal "long-and-narrow" characteristic of the communication area is affected very little by these mounting parameters. An optimum mounting configuration is also suggested.
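LED radiation patterns are commonly approximated by a generalized Lambertian model, under which the half-intensity angles quoted above fully determine the beam shape. A sketch under that assumption (this is the standard model, not the paper's exact three-piece design):

```python
import math

def lambertian_order(half_angle_deg):
    """Lambertian order m for an LED whose radiant intensity falls to 50%
    at the given half-intensity angle: I(theta) = cos(theta) ** m."""
    return -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))

def intensity(theta_deg, half_angle_deg):
    """Normalized radiant intensity at off-axis angle theta."""
    return math.cos(math.radians(theta_deg)) ** lambertian_order(half_angle_deg)

# By construction, intensity is exactly 0.5 at the half-intensity angle,
# and the 10-degree LED is far more directional than the 22-degree one.
assert abs(intensity(22, 22) - 0.5) < 1e-9
assert intensity(30, 10) < intensity(30, 22)
```

Summing the patterns of LEDs mounted at different tilt angles, as in the paper's three-piece array, is then a matter of offsetting theta per LED before adding the intensities.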

Keywords: dedicated short-range communication (DSRC), electronic toll collection (ETC), infrared communication, intelligent transportation system (ITS), multilane free flow

Procedia PDF Downloads 332
3373 Jurisprudential Analysis of Torture in Spain and in the European Human Rights System

Authors: María José Benítez Jiménez

Abstract:

Article 3 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (E.C.H.R.) proclaims that no one may be subjected to torture or to inhuman or degrading treatment or punishment. Its legislative correlate in Spain is embodied in Article 15 of the Spanish Constitution, and ideally the two precepts should be given an overlapping interpretation. While it is true that there are not many cases in which the European Court of Human Rights (E.Ct.H.R., the Strasbourg Court) has sanctioned Spain for failure to investigate complaints of torture, it must be emphasized that the tendency to violate Article 3 of the Convention appears to be on the rise, making it necessary to identify the factors that may be driving it. This paper analyses judgments that directly or indirectly reveal a violation of Article 3 of the European Convention. To carry out the analysis, Strasbourg Court judgments from 2012 to 2016 were consulted, with earlier judgments considered where they provided information necessary for the study. The review makes clear that two key groups of applicants ask the Strasbourg Court for a response on the understanding that they have been tortured or degradingly treated: immigrants and terrorists. Both phenomena, immigration and terrorism, follow patterns that have mutated in recent years, and it is important for this study to know whether national regulations are beginning to be dysfunctional.

Keywords: E.C.H.R., E.C.t.H.R. judgments, Spanish Constitution, torture

Procedia PDF Downloads 154
3372 Non-Contact Characterization of Standard Liquids Using Waveguide at 12.4 to 18 GHz Frequency Span

Authors: Kasra Khorsand-Kazemi, Bianca Vizcaino, Mandeep Chhajer Jain, Maryam Moradpour

Abstract:

This work presents an approach to characterize a non-contact microwave sensor using waveguides for different standard liquids such as ethanol, methanol, and 2-propanol (isopropyl alcohol). Wideband waveguides operating between 12.4 GHz and 18 GHz form the core of the sensing structure. Waveguides are sensitive to changes in the conductivity of the sample under test (SUT), making them an ideal tool to characterize different polar liquids. As the conductivity of the sample under test increases, the loss tangent of the material increases, thereby decreasing the S21 (dB) response of the waveguide. Among all the standard liquids measured, methanol exhibits the highest conductivity and 2-propanol the lowest. The cutoff frequencies measured for ethanol, 2-propanol, and methanol are 10.28 GHz, 10.32 GHz, and 10.38 GHz, respectively. The measured results correlate with the loss tangent results for the standard liquids measured using a dielectric probe. This enables us to characterize different liquids using waveguides, expanding potential future applications in domains ranging from water quality management to biomedicine, chemistry, and agriculture.
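For context, the cutoff frequency of the empty sensing structure can be sketched with the standard TE10 formula fc = c/(2a). The WR-62 dimensions below are an assumption (WR-62 is the usual Ku-band guide for 12.4-18 GHz operation), not a detail given in the abstract; the liquid-loaded cutoffs quoted above shift upward from this air-filled value:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff_hz(broad_wall_m):
    """Cutoff frequency of the dominant TE10 mode in an air-filled rectangular waveguide."""
    return C / (2.0 * broad_wall_m)

# Assumed WR-62 (Ku band, 12.4-18 GHz): broad wall a = 15.799 mm
fc = te10_cutoff_hz(0.015799)
print(round(fc / 1e9, 2))  # ~9.49 GHz
```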

Keywords: waveguides, microwave sensors, standard liquids characterization, non-contact sensing

Procedia PDF Downloads 137
3371 Evaluating Contextually Targeted Advertising with Attention Measurement

Authors: John Hawkins, Graham Burton

Abstract:

Contextual targeting is a common advertising strategy that places marketing messages in media locations expected to be aligned with the target audience. Contextual targeting faces several major challenges: the ideal categorisation scheme needs to be known, as well as the most appropriate subsections of that scheme for a given campaign or creative. In addition, campaign reach typically shrinks as targeting becomes narrow, so a balance must be struck between the two requirements. Finally, refinement of the process is limited by evaluation methods that are either rapid but non-specific (click-through rates) or reliable but slow and costly (conversions or brand recall studies). In this study we evaluate attention measurement as a technique for understanding the performance of targeting on specific contextual topics. We perform the analysis using a large-scale dataset of impressions categorised using the IAB V2.0 taxonomy. We evaluate multiple levels of the categorisation hierarchy, using categories at different positions within an initial creative-specific ranking. The results illustrate that attention time is an effective signal of the performance of a specific creative within a specific context, and that performance is sustained across the ranking of categories from one period to another.
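A minimal sketch of the core evaluation step described above: ranking contextual categories by mean measured attention time per impression. The category names and attention values are hypothetical:

```python
from collections import defaultdict

def rank_categories_by_attention(impressions):
    """impressions: iterable of (iab_category, attention_seconds) pairs.
    Returns (category, mean_attention) pairs sorted highest-first."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for category, seconds in impressions:
        totals[category] += seconds
        counts[category] += 1
    means = {c: totals[c] / counts[c] for c in totals}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical impression log: (IAB category, measured attention in seconds)
log = [("Automotive", 2.1), ("Automotive", 2.5), ("Food & Drink", 1.2),
       ("Food & Drink", 1.0), ("Technology", 3.0), ("Technology", 2.8)]
print(rank_categories_by_attention(log))
```

Comparing such rankings between two time periods is one way to check the paper's stability claim.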

Keywords: contextual targeting, digital advertising, attention measurement, marketing performance

Procedia PDF Downloads 103
3370 Urban Growth and Its Impact on Natural Environment: A Geospatial Analysis of North Part of the UAE

Authors: Mohamed Bualhamam

Abstract:

Due to the complex nature of the tourism resources of the northern part of the United Arab Emirates (UAE), the potential of Geographical Information Systems (GIS) and Remote Sensing (RS) in resolving these issues was used. The study was an attempt to use existing GIS data layers to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth, and to give specific recommendations to protect the area. By identifying sensitive natural environment and archaeological heritage resources, public agencies and citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas. The paper concludes that applications of GIS and RS in studying the impact of urban growth on tourism resources are a strong and effective tool that can aid tourism planning and decision-making. The study area is one of the fastest-growing regions in the country. The increase in population across the region, as well as the rapid growth of towns, has increased the threat to natural resources and archaeological sites. Satellite remote sensing data have proven useful in assessing natural resources and in monitoring changes. The study used GIS and RS to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth. The results of the GIS analyses show that the northern part of the UAE has a variety of tourism resources that can be used for future tourism development. Rapid urban development, in the form of small towns and different economic activities, is appearing in different places in the study area. Urban development has extended beyond the old towns and has negatively affected sensitive tourism resources in some areas.
The tourism resources of the northern part of the UAE are highly complex, and thus require tools that aid effective decision-making to come to terms with the competing economic, social, and environmental demands of sustainable development. The UAE government should prepare tourism databases and a GIS system so that planners can access archaeological heritage information as part of development planning processes. Applications of GIS in urban, tourism, and recreation planning illustrate that GIS is a strong and effective tool that can aid tourism planning and decision-making. The power of GIS lies not only in its ability to visualize spatial relationships, but also in going beyond space to a holistic view of the world with its many interconnected components and complex relationships. The worst of the damage could have been avoided by recognizing suitable limits; adhering to some simple environmental guidelines and standards would allow tourism to develop in a sustainable manner. By identifying the sensitive natural environment and archaeological heritage resources of the northern part of the UAE, public agencies and private citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas.
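The sensitivity analysis described above ultimately reduces to an overlay of data layers: cells flagged as sensitive in one layer intersected with cells flagged for growth in another. A minimal sketch, with hypothetical grid cells standing in for real GIS layers:

```python
def growth_conflicts(sensitive_cells, growth_cells):
    """Return grid cells where proposed urban growth overlaps environmentally
    sensitive land -- the overlay at the heart of a GIS sensitivity analysis."""
    return sorted(set(sensitive_cells) & set(growth_cells))

# Hypothetical 1-km grid cells (row, col) flagged from separate data layers.
sensitive = [(0, 1), (1, 1), (2, 3)]              # e.g. archaeological sites, wetlands
proposed_growth = [(1, 1), (2, 2), (2, 3), (4, 4)]  # e.g. classified growth footprint

print(growth_conflicts(sensitive, proposed_growth))  # [(1, 1), (2, 3)]
```

In practice the same intersection is done with raster algebra or vector overlay in a GIS package rather than Python sets.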

Keywords: GIS, natural environment, UAE, urban growth

Procedia PDF Downloads 259
3369 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation

Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick

Abstract:

Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent pasture-based beef cattle finishing systems in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle and 10% of rearers and finishers in NZ. The cattle agent consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truckload) had an average live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand was higher than the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder were sent before their third winter. The model finished a higher number of bulls than heifers and steers, although this was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey-crossbred cattle dominated the dairy-origin beef finishing system. Jersey cattle accounted for less than 5% of total processed beef cattle. Further studies including retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
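The rearer agent's rule described above (grow calves, sell full truckloads once they average 100 kg) can be sketched in a few lines. The birth-weight and daily-gain figures are illustrative assumptions, not the paper's parameter values:

```python
import random

TRUCKLOAD = 60        # calves per sale, as in the model description
SALE_WEIGHT = 100.0   # kg live weight at which reared calves are sold on
BIRTH_WEIGHT = 40.0   # illustrative four-day-old calf weight, kg
DAILY_GAIN = 0.7      # illustrative average daily gain, kg/day

def simulate_rearer(n_calves, days, rng):
    """Minimal rearer agent: calves grow daily; whenever a full truckload has
    reached sale weight, that load is transferred to a finisher."""
    weights = [BIRTH_WEIGHT] * n_calves
    transferred = 0
    for _ in range(days):
        weights = [w + DAILY_GAIN * rng.uniform(0.8, 1.2) for w in weights]
        ready = sum(1 for w in weights if w >= SALE_WEIGHT)
        while ready >= TRUCKLOAD:
            weights.sort(reverse=True)       # sell the heaviest truckload
            weights = weights[TRUCKLOAD:]
            transferred += TRUCKLOAD
            ready -= TRUCKLOAD
    return transferred

rng = random.Random(42)
sold = simulate_rearer(n_calves=150, days=120, rng=rng)
print(sold)  # calves moved to finishing, always in multiples of 60
```

In the full model this rule interacts with finisher demand, breed composition, and slaughter scheduling; here it only illustrates the truckload mechanic.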

Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers

Procedia PDF Downloads 93
3368 From Pink to Ink: Understanding the Decision-Making Process of Post-mastectomy Women Who Have Covered Their Scars with Decorative Tattoos

Authors: Fernanda Rodriguez

Abstract:

Breast cancer is pervasive among women, and an increasing number of women are opting for a mastectomy: a medical operation in which one or both breasts are removed with the intention of treating or averting breast cancer. However, there is an emerging population of cancer survivors in European nations who, rather than attempting to reconstruct their breasts to resemble 'normal' breasts as closely as possible, have turned to dressing their scars with decorative tattoos. At a practical level, this study hopes to improve the support systems of these women by providing professionals in the medical field, tattoo artists, and family members of cancer survivors with a deeper understanding of their motivations and decision-making processes for choosing an alternative restorative route, such as decorative tattoos, after their mastectomy. At an intellectual level, this study aims to narrow a gap in the academic field concerning the relationship between mastectomies and alternative methods of healing, such as decorative tattoos, as well as to broaden the understanding of meaning-making and the 'normal' feminine body. Thus, by means of semi-structured interviews and a phenomenological standpoint, this research set itself the goal of understanding why women who have undergone a mastectomy choose to dress their scars with decorative tattoos instead of attempting to regain 'normalcy' through breast reconstruction or 3D areola tattoos. The interviews with fifteen women showed that disillusionment with one aspect or another of breast reconstruction techniques had led these women to find an alternative form of healing that allows them not only to close a painful chapter of their life but also to regain control over their bodies after a period in which agency was taken away from them. Decorative post-mastectomy tattoos allow these women to endow their bodies with new meanings and produce their own interpretation of their feminine body and identity.

Keywords: alternative femininity, decorative mastectomy tattoos, gender embodiment, social stigmatization

Procedia PDF Downloads 113
3367 Application of the Environmental Justice Concept in Urban Planning: The Peri-Urban Environment of Tehran as a Case Study

Authors: Zahra Khodaee

Abstract:

Environmental Justice (EJ) as a concept comprises multifaceted movements, community struggles, and discourses in contemporary societies that seek to reduce environmental risks, increase environmental protections, and generally reduce the environmental inequalities suffered by minority and poor communities; the term incorporates 'environmental racism' and 'environmental classism,' and captures the idea that different racial and socioeconomic groups experience differential access to environmental quality. This article explores environmental justice as an urban phenomenon in urban planning and applies it to the peri-urban environment of a metropolis. Tehran's peri-urban environments, which result from the meeting of city, village, and nature systems (the 'city-village junction'), have gradually faced effects such as accelerated environmental decline, unplanned land-use change, and severe service deficiencies. These problems are instances of environmental injustice that compel planners to address them with appropriate strategies and policies, drawing on theories, techniques, and methods related to environmental justice. To achieve this goal, the study first defines environmental justice and derives environmental justice indices for analysing environmental injustice in the case study. Criteria are then introduced to select the case study at macro and micro levels. Qiyamdasht, a peri-urban town of the Tehran metropolis, is chosen and examined; questionnaire analysis with SPSS software demonstrates the existence of environmental injustice there. Finally, the AIDA technique is used to design a strategic plan and reduce environmental injustice in the case study by identifying the preferable scenario for policy- and decision-making.
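The AIDA step (analysis of interconnected decision areas) can be sketched as enumerating combinations of one option per decision area that violate no incompatibility ("option bar"). The decision areas and option bars below are hypothetical, not taken from the study:

```python
from itertools import product

# Hypothetical decision areas and options for a peri-urban upgrading plan.
decision_areas = {
    "land_use":  ["densify", "green_belt"],
    "services":  ["extend_network", "local_plants"],
    "transport": ["bus_corridor", "road_widening"],
}

# Pairs of options judged incompatible (the "option bars" of AIDA).
option_bars = {("green_belt", "road_widening"),
               ("densify", "local_plants")}

def feasible_schemes(areas, bars):
    """Enumerate all combinations of one option per decision area
    that violate no option bar -- the core move of AIDA."""
    names = list(areas)
    schemes = []
    for combo in product(*(areas[n] for n in names)):
        pairs = {(a, b) for a in combo for b in combo if a != b}
        if not (pairs & bars):
            schemes.append(dict(zip(names, combo)))
    return schemes

for scheme in feasible_schemes(decision_areas, option_bars):
    print(scheme)
```

The surviving schemes are then compared against the environmental justice indices to pick the preferable scenario.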

Keywords: environmental justice, metropolis of Tehran, Qiyamdasht peri-urban settlement, analysis of interconnected decision areas (AIDA)

Procedia PDF Downloads 483
3366 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge

Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi

Abstract:

Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructure. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, comprising clustering analysis and statistical hypothesis testing. These enable interpretation of the obtained prediction errors, support conclusions about the state of the structure, and thus inform decision-making regarding its maintenance.
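A minimal sketch of the prediction-error idea: fit a model to reference (healthy) measurements and flag signals whose one-step prediction errors are anomalously large. A simple autoregressive predictor stands in for the paper's neural network, and both signals are synthetic:

```python
import numpy as np

def fit_ar(signal, order=4):
    """Least-squares autoregressive model fitted to a healthy reference signal."""
    n = len(signal)
    X = np.column_stack([signal[i:n - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, signal[order:], rcond=None)
    return coef

def prediction_errors(signal, coef):
    """One-step-ahead prediction errors of the AR model on a signal."""
    order = len(coef)
    n = len(signal)
    X = np.column_stack([signal[i:n - order + i] for i in range(order)])
    return signal[order:] - X @ coef

rng = np.random.default_rng(0)
t = np.arange(2000) * 0.01
healthy = np.sin(2 * np.pi * 1.5 * t) + 0.03 * rng.standard_normal(t.size)
# Hypothetical "damaged" response: shifted frequency plus extra broadband noise.
damaged = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)

coef = fit_ar(healthy)
healthy_err = prediction_errors(healthy, coef)
damaged_err = prediction_errors(damaged, coef)
# Flag the signal if its error spread greatly exceeds the reference spread.
alarm = bool(damaged_err.std() > 3 * healthy_err.std())
print(alarm)
```

In the paper the residuals feed clustering and hypothesis tests rather than a fixed threshold; this only illustrates where the "prediction error" quantity comes from.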

Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring

Procedia PDF Downloads 203
3365 Efficacy of TiO₂ in the Removal of an Acid Dye by Photocatalytic Degradation

Authors: Laila Mahtout, Kerami Ahmed, Rabhi Souhila

Abstract:

The objective of this work is to reduce the environmental impact of an acid dye, Eriochrome Black T (NET), using photocatalytic degradation in the presence of a previously characterized semiconductor powder (TiO₂). A series of tests was carried out to demonstrate the influence of certain parameters on the degree of dye degradation by titanium dioxide under UV irradiation, namely contact time, powder mass, and solution pH. X-ray diffraction analysis of the powder showed that the anatase structure is predominant, with the rutile phase represented only by peaks of low intensity. The bands corresponding to the anatase and rutile forms, together with other chemical functions, were detected by Fourier-transform infrared spectroscopy. The photodegradation of NET by TiO₂ gives encouraging results. The study of photodegradation at different dye concentrations showed that lower concentrations give better removal rates. The degree of dye degradation increases with increasing pH, reaching its maximum at pH = 9. The TiO₂ loading giving the highest removal rate is 1.2 g/L. Thermal treatment of TiO₂ with the addition of CuO at contents of 5%, 10%, and 15% further improves the degradation of the NET dye, with the highest elimination percentage observed at a CuO content of 15%.
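The removal rate, and the pseudo-first-order kinetic treatment commonly applied to such photodegradation data, can be computed directly. The concentrations and time below are illustrative, not measured values from the study:

```python
import math

def removal_percent(c0, ct):
    """Dye removal efficiency (%) from initial and final concentrations (mg/L)."""
    return 100.0 * (c0 - ct) / c0

def pseudo_first_order_k(c0, ct, minutes):
    """Apparent rate constant (1/min) assuming pseudo-first-order
    photodegradation kinetics: ln(C0/C) = k * t."""
    return math.log(c0 / ct) / minutes

print(removal_percent(50.0, 10.0))                      # 80.0
print(round(pseudo_first_order_k(50.0, 10.0, 120), 4))  # ~0.0134 per minute
```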

Keywords: acid dye, ultraviolet rays, degradation, photocatalysis

Procedia PDF Downloads 190
3364 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques

Authors: Chandu Rathnayake, Isuri Anuradha

Abstract:

Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, it delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. First, the Decision Tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Second, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well suited to image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model.
By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
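One simple way to combine the symptom-based and image-based model outputs, in the spirit of the system described, is weighted soft voting over per-disease probabilities. The disease names, probabilities, and weights below are hypothetical:

```python
def fuse_predictions(tree_prob, forest_prob, cnn_prob, weights=(0.2, 0.3, 0.5)):
    """Weighted soft voting over the three model outputs. Each argument maps
    disease name -> predicted probability; the weights are illustrative."""
    diseases = tree_prob.keys() | forest_prob.keys() | cnn_prob.keys()
    w_tree, w_forest, w_cnn = weights
    fused = {d: w_tree * tree_prob.get(d, 0.0)
                + w_forest * forest_prob.get(d, 0.0)
                + w_cnn * cnn_prob.get(d, 0.0)
             for d in diseases}
    return max(fused, key=fused.get), fused

# Hypothetical outputs for one patient.
tree = {"pneumonia": 0.6, "tuberculosis": 0.3, "healthy": 0.1}
forest = {"pneumonia": 0.5, "tuberculosis": 0.4, "healthy": 0.1}
cnn = {"pneumonia": 0.7, "tuberculosis": 0.2, "healthy": 0.1}

label, fused = fuse_predictions(tree, forest, cnn)
print(label)  # pneumonia
```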

Keywords: CNN, random forest, decision tree, machine learning, deep learning

Procedia PDF Downloads 72
3363 Confirmatory Factor Analysis of Smartphone Addiction Inventory (SPAI) in the Yemeni Environment

Authors: Mohammed Al-Khadher

Abstract:

Currently, we are witnessing rapid advancements in the field of information and communications technology, forcing us, as psychologists, to address the psychological and social effects of such developments. It also drives us to continually develop and prepare measurement tools compatible with the changes brought about by the digital revolution. In this context, the current study aimed to examine the factor structure of the Smartphone Addiction Inventory (SPAI) in the Republic of Yemen. The sample consisted of 1,920 university students (1,136 males and 784 females) who answered the inventory, and the data were analyzed using the statistical software AMOS V25. The results showed a good fit of the five-factor model to the data, with RMSEA = .052, CFI = .910, GFI = .931, AGFI = .915, TLI = .897, NFI = .895, RFI = .880, and RMR = .032, supporting the factor structure of the scale. The confirmatory factor analysis showed loadings of 4 items on Time Spent, 4 items on Compulsivity, 8 items on Daily Life Interference, 5 items on Craving, and 3 items on Sleep Interference; all standardized factor loadings were statistically significant (p < .001).
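For readers screening such results, reported fit indices can be checked against common rule-of-thumb cutoffs (in the spirit of Hu and Bentler's conventions). The thresholds below are those conventions, not values from the study; indices sitting just below a strict ≥ .90 rule come out marginal rather than failing:

```python
def check_fit(indices, thresholds=None):
    """Compare reported CFA fit indices against conventional cutoffs.
    Returns {index_name: True if the conventional cutoff is met}."""
    if thresholds is None:
        # Common rules of thumb: smaller-is-better for RMSEA/RMR,
        # larger-is-better for the incremental/absolute fit indices.
        thresholds = {"RMSEA": ("<=", 0.08), "RMR": ("<=", 0.08),
                      "CFI": (">=", 0.90), "GFI": (">=", 0.90),
                      "AGFI": (">=", 0.90), "TLI": (">=", 0.90),
                      "NFI": (">=", 0.90), "RFI": (">=", 0.90)}
    out = {}
    for name, value in indices.items():
        op, cut = thresholds[name]
        out[name] = value <= cut if op == "<=" else value >= cut
    return out

reported = {"RMSEA": .052, "CFI": .910, "GFI": .931, "AGFI": .915,
            "TLI": .897, "NFI": .895, "RFI": .880, "RMR": .032}
print(check_fit(reported))
```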

Keywords: smartphone addiction inventory (SPAI), confirmatory factor analysis (CFA), Yemeni students, people at risk of smartphone addiction

Procedia PDF Downloads 87
3362 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process; it can reveal the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision trees, random forests, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other comprising non-assessment-related data; second, to use the same algorithms to build an optimal model, over the whole data set and the new subsets, that automatically detects their sentiment. The significance of this paper lies in comparing the performance of the four algorithms using the part-of-speech feature with their performance using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task. On the second task, however, models that used the part-of-speech feature underperformed compared with models that used unigrams and bigrams.
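A minimal sketch of turning a short feedback comment into a PoS feature vector. The tiny hand-rolled lexicon stands in for a trained tagger (such as nltk.pos_tag) and is purely illustrative:

```python
# Stand-in lexicon mapping words to coarse part-of-speech tags.
LEXICON = {"the": "DET", "a": "DET", "feedback": "NOUN", "exam": "NOUN",
           "was": "VERB", "is": "VERB", "helpful": "ADJ", "hard": "ADJ",
           "very": "ADV", "too": "ADV"}

TAGSET = ["DET", "NOUN", "VERB", "ADJ", "ADV", "OTHER"]

def pos_features(text):
    """Map a short feedback comment to a vector of PoS tag counts,
    which a classifier then consumes instead of word n-grams."""
    tags = [LEXICON.get(w, "OTHER") for w in text.lower().split()]
    return [tags.count(t) for t in TAGSET]

print(pos_features("The exam was too hard"))  # [1, 1, 1, 1, 1, 0]
```

PoS counts deliberately discard the words themselves, which is one plausible reason the paper finds them weaker than unigrams and bigrams for sentiment.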

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 140
3361 AI-Driven Solutions for Optimizing Master Data Management

Authors: Srinivas Vangari

Abstract:

In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. The study combines quantitative and qualitative analysis. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
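Duplicate-record reduction of the kind reported can be sketched with simple string similarity. The customer names and threshold are illustrative, and production MDM matching uses far richer techniques (blocking, phonetic keys, learned matchers):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Character-level similarity ratio between two record names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records, threshold=0.85):
    """Flag pairs of master-data records whose names are near-identical."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                pairs.append((records[i], records[j]))
    return pairs

customers = ["Acme Corporation", "ACME Corp.", "Globex Inc",
             "Acme Corporation Ltd"]
print(find_duplicates(customers))
```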

Keywords: artificial intelligence, master data management, data governance, data quality

Procedia PDF Downloads 9
3360 Easy Way of Optimal Process-Storage Network Design

Authors: Gyeongbeom Yi

Abstract:

The purpose of this study is to introduce an analytic solution for determining the optimal capacity (lot size) of a multiproduct, multistage production and inventory system that meets finished-product demand. Reasonable decision-making about the capacity of processes and storage units is an important subject for industry. The customary industrial approach is the classical economic lot sizing method, the EOQ/EPQ (Economic Order Quantity/Economic Production Quantity) model, combined with practical experience. However, the unrealistic material flow assumption of the EOQ/EPQ model is not suitable for chemical plant design with highly interlinked processes and storage units. This study overcomes the limitation of the classical lot sizing method, which was developed on the basis of the single-product, single-stage assumption. The superstructure of the plant considered consists of a network of serially and/or parallelly interlinked processes and storage units. The processes involve chemical reactions with multiple feedstock materials and multiple products, as well as mixing, splitting, or transportation of materials. The objective function for optimization minimizes the total cost, composed of setup and inventory holding costs as well as the capital costs of constructing processes and storage units. A novel production and inventory analysis method, the PSW (Periodic Square Wave) model, is applied. The advantage of the PSW model comes from the fact that it provides a set of simple analytic solutions despite a realistic description of the material flow between processes and storage units. The resulting simple analytic solutions can greatly enhance proper and quick investment decisions for the plant design and operation problems confronted in diverse economic situations.
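For reference, the classical lot-sizing formulas that the paper takes as its point of departure are EOQ and its finite-production-rate variant EPQ. The demand, cost, and rate figures below are illustrative:

```python
import math

def eoq(demand, setup_cost, holding_cost):
    """Classical Economic Order Quantity: Q* = sqrt(2DS/H)."""
    return math.sqrt(2 * demand * setup_cost / holding_cost)

def epq(demand, setup_cost, holding_cost, production_rate):
    """Economic Production Quantity: a finite production rate P inflates
    the lot size by a factor sqrt(P / (P - D))."""
    return eoq(demand, setup_cost, holding_cost) * math.sqrt(
        production_rate / (production_rate - demand))

# Illustrative figures: D = 1200 units/yr, S = $100/setup, H = $6/unit/yr
print(eoq(1200, 100, 6))                  # 200.0
print(round(epq(1200, 100, 6, 4800), 1))  # ~230.9
```

The PSW model replaces the single-product, single-stage assumption behind these formulas with a networked material-flow description while still yielding closed-form solutions.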

Keywords: analytic solution, optimal design, process-storage network

Procedia PDF Downloads 326
3359 The Influence of Language and Background Culture on Speakers from the Viewpoint of Gender and Identity

Authors: Yuko Tomoto

Abstract:

The purpose of this research is to examine the assumption that female bilingual speakers change the way they talk or think depending on the language they use more often than male bilingual speakers do. The author collected data through questionnaires from 241 bilingual speakers. In addition, in-depth interviews were conducted with 13 Japanese/English bilingual speakers whose native language is Japanese and 16 English/Japanese bilingual speakers whose native language is English. The results indicate that both male and female bilingual speakers are, consciously and unconsciously, influenced by the language they use and by the background cultural values of each language. At the same time, female speakers were found to be much more strongly affected by the language they use, its background culture, and the interlocutors they were talking to, probably owing to the greater cultural expectations placed on women. Through conversation, speakers are not only conveying a message but also attempting to express who they are and what they want to be like. In other words, they are constantly building up and updating their own identities by choosing the most appropriate language and descriptions to express themselves in dialogue. It has been claimed that images of the ideal L2 self can strongly motivate learners. The author hopes to make the best use of the fact that bilingual speakers change how they present themselves depending on the language they use, in order to motivate Japanese learners of English, especially female learners, from the viewpoint of finding their new selves in English.

Keywords: cultural influence, gender expectation, language learning, L2 self

Procedia PDF Downloads 418