Search results for: decision tree algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6270

870 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to achieve high uptime is the goal of predictive maintenance; the major challenge faced by businesses across industries is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses want to predict such problems and proactively prevent them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the Industrial Internet of Things (IIoT), has created the need to monitor system activities and enhance system-to-system or component-to-component interactions, resulting in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to the complexity inherent in such datasets, including imbalanced class distributions, it becomes extremely difficult to build models with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn growing research interest from both academia and industry. The large volumes of data generated by industrial processes come with varying degrees of complexity, which poses a challenge for analytics. In particular, the class imbalance problem is pervasive in industrial datasets and can degrade the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for aircraft component replacement is then developed to predict replacement in advance by exploiting historical aircraft data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; we also investigate its impact on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of performance, using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches, and the results also demonstrate its strength in handling multiclass imbalanced datasets, with good performance compared to baseline classifiers.
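As an illustration of the kind of pipeline the abstract describes, here is a minimal sketch of ensemble classification with minority-class oversampling, assuming scikit-learn; the synthetic data, the resampling step, and the random forest are illustrative stand-ins for the authors' hybrid ensemble, which is not specified here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for aircraft maintenance records: 2% minority (failure) class
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling of the minority class before fitting the ensemble
minority = np.where(y_tr == 1)[0]
boost = np.random.default_rng(0).choice(
    minority, size=len(y_tr) - 2 * len(minority), replace=True)
X_bal = np.vstack([X_tr, X_tr[boost]])
y_bal = np.concatenate([y_tr, y_tr[boost]])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))  # minority recall improves
```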

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 162
869 Intentions and Willingness of Marketing Professionals to Adopt Neuromarketing

Authors: Anka Gorgiev, Chris Martin, Nikolaos Dimitriadis, Dimitrios V. Nikolaidis

Abstract:

This paper is part of a doctoral research study aimed at identifying behavioral indicators for the existence of a new marketing paradigm. Neuromarketing is a growing trend in the marketing industry worldwide and is capturing considerable interest among members of academia and the practitioner community. However, it is still not clear how big an impact neuromarketing might have in the coming years. In an effort to get closer to an answer, this study investigates behavioral intentions and willingness to adopt neuromarketing and its practices among marketing professionals, including academics, practitioners, students, researchers, experts and journal editors. The participants in the study are marketing professionals at different levels of neuromarketing fluency residing in the United States of America and South East Europe. A total of 19 participants took part in the interviews, all of whom belong to more than one group of marketing professionals. The authors use a qualitative research approach and open-ended interview questions specifically developed to assess the ideas, beliefs and opinions that marketing professionals hold towards neuromarketing. In constructing the interview questions, the authors used the theory of planned behavior, the prototype willingness model and the technology acceptance model as a theoretical framework. Previous studies have not explicitly investigated the behavioral intentions of marketing professionals to engage in neuromarketing behavior, described here as a tendency to apply neuromarketing assumptions and tools in usual marketing practices. This study suggests that marketing professionals believe neuromarketing can contribute to business in a positive way, and it outlines the main advantages and disadvantages of adopting neuromarketing as identified by the participants. In addition, the study reveals an emerging image of an exemplar company that is perceived to be using neuromarketing, including its most common characteristics and attributes. These findings are believed to be crucial in facilitating a broader impact for the neuromarketing field than it currently has, by recognizing and understanding the limitations that such exemplars imply and how they affect the decision-making of marketing professionals.

Keywords: behavioral intentions, marketing paradigm, neuromarketing adoption, theory of planned behavior

Procedia PDF Downloads 167
868 Development of Pre-Mitigation Measures and Their Impact on Life-Cycle Cost of Facilities: Indian Scenario

Authors: Mahima Shrivastava, Soumya Kar, B. Swetha Malika, Lalu Saheb, M. Muthu Kumar, P. V. Ponambala Moorthi

Abstract:

Natural hazards and man-made destruction cause both economic and societal losses. Generalized pre-mitigation strategies introduced and adopted for disaster prevention all over the world are capable of augmenting resiliency and optimizing the life-cycle cost of facilities. Countries like India, with their varied topographical features, require location-specific mitigation measures and strategies, pursued through both event-driven and code-driven approaches. The present state of mitigation measures followed and adopted falls short of accomplishing the required development. In addition, serious concern and debate over climate change play a vital role in reinforcing the need for time-bound adaptive mitigation measures. For the development of long-term sustainable policies, the incorporation of future climatic variation is inevitable; this will further assist in assessing the impact of climate change on the life-cycle cost of facilities. This paper develops more definite, region-specific and time-bound pre-mitigation measures by reviewing the present state of mitigation measures in India and all over the world for improving the life-cycle cost of facilities. For the development of region-specific adaptive measures, Indian regions were divided based on multiple-calamity-prone regions, and geo-referencing tools were used to incorporate the effect of climate change on life-cycle cost assessment. This study puts forward a significant effort towards establishing sustainable policies and helps decision makers plan pre-mitigation measures for different regions. It will further contribute towards evaluating the life-cycle cost of facilities under the developed measures.

Keywords: climate change, geo-referencing tools, life-cycle cost, multiple-calamity prone regions, pre-mitigation strategies, sustainable policies

Procedia PDF Downloads 370
867 Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia

Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay

Abstract:

Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option for maximizing crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires repeated trials over many years under different weather conditions and crop management regimes. To avoid such long-term experimentation, crop models such as AquaCrop are useful. The overall objective of this study was therefore to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons, deploying four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model. The AquaCrop model was validated for its ability to simulate the green canopy cover and aboveground biomass of maize across the varying sowing dates, based on the calibrated parameters. The results showed good agreement between measured and simulated values of canopy cover and biomass yield, as indicated by the overall R2, model efficiency (Ef), index of agreement (d), and RMSE statistics. Considering the overall values of these statistical indicators, the model predicted maize growth and biomass yield successfully, making it a valuable decision-support tool. Hence, this calibrated and validated model is recommended for determining the optimum maize sowing date under climate and soil conditions similar to those of the study area, instead of conducting long-term experimentation.
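The agreement statistics named above can be reproduced in a few lines. The following is a minimal sketch, assuming NumPy; the observed and simulated canopy-cover values are invented for illustration and are not the study's data.

```python
import numpy as np

obs = np.array([12.0, 35.0, 60.0, 82.0, 95.0])   # measured canopy cover (%), illustrative
sim = np.array([10.5, 38.0, 57.0, 85.0, 93.0])   # AquaCrop-simulated values, illustrative

rmse = np.sqrt(np.mean((sim - obs) ** 2))
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
ef = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe Ef
d = 1 - np.sum((obs - sim) ** 2) / np.sum(
    (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)      # Willmott's d
print(f"RMSE={rmse:.2f}  R2={r2:.3f}  Ef={ef:.3f}  d={d:.3f}")
```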

Keywords: AquaCrop model, calibration, validation, simulation

Procedia PDF Downloads 53
866 Field Management Solutions Supporting Foreman Executive Tasks

Authors: Maroua Sbiti, Karim Beddiar, Djaoued Beladjine, Romuald Perrault

Abstract:

Productivity in construction is declining compared to the manufacturing industry. The sector appears to suffer from organizational problems and low maturity regarding technological advances. High international competition due to the growing context of globalization, complex projects, and shorter deadlines increases these challenges. Field employees are more exposed to coordination problems than design officers. Collaboration during execution is therefore a major issue that can threaten the cost, time, and quality of a project. This paper first tries to identify field professionals' requirements in order to address weaknesses in the building management process, such as unreliable scheduling, erratic monitoring and inspection processes, inaccurate project indicators, inconsistent building documents, and haphazard logistics management. Subsequently, we focus our attention on providing solutions to improve scheduling, inspection, and hours-tracking processes using emerging lean tools and field mobility applications that bring new perspectives in terms of cooperation. They have shown a great ability to connect various field teams and make information visual and accessible, enabling accurate planning and eliminating potential defects at the source. In addition to the use of software as a service, adopting the human resource module of an Enterprise Resource Planning system can allow meticulous time accounting and thus faster decision-making. The next step is to integrate external data sources received from, or destined for, design engineers, logisticians, and suppliers in a holistic system. Creating a monolithic system that consolidates planning, quality, procurement, and resource management modules should be the ultimate target in building the construction industry supply chain.

Keywords: lean, last planner system, field mobility applications, construction productivity

Procedia PDF Downloads 106
865 Acculturation and Urban Related Identity of Turk and Kurd Internal Migrants

Authors: Melek Göregenli, Pelin Karakuş

Abstract:

The present study explored the acculturation strategies and urban-related identity of Turk and Kurd internal migrants from different regions of Turkey who resettled in three big cities in the west. In addition, we aimed at a comparative analysis of the acculturation strategies and urban-related identity of voluntary and internally displaced Kurd migrants. In particular, we explored the roles of migration type, satisfaction with the migration decision, urban-related identity and several socio-demographic variables as predictors of Kurds' preference for the integration strategy. The sample consisted of 412 adult participants from Izmir (64 females, 86 males), Ankara (76 females, 75 males), and Istanbul (43 females, 64 males, and four unreported). In terms of acculturation strategies, assimilation was found to be the most preferred acculturation attitude among Turks, whereas separation was the most endorsed acculturation attitude among Kurds. The migrants in Izmir were found to prefer assimilation, whereas the migrants in Ankara preferred separation. Concerning mean urban-related identity scores, Turks reported higher urban-related identity than Kurds. Furthermore, the internal migrants in Izmir scored higher in urban-related identity than the migrants living in Istanbul and Ankara. The results of the regression analysis revealed that gender, length of residence and migration type were significant predictors of Kurds' integration preference: whereas gender and migration type had significant negative associations, length of residence had a significant positive association with the integration preference. Compared to female Kurds, male Kurds were found to be more integrated. Furthermore, voluntary Kurd migrants were more in favour of integration than internally displaced Kurds. The findings supported the significant associations between acculturation strategies and urban-related identity in both groups.

Keywords: acculturation, forced migration, internal displacement, internal migration, Turkey, urban-related identity

Procedia PDF Downloads 360
864 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing

Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl

Abstract:

Field data collection is considered one of the most difficult tasks due to the difficulty of accessing large zones such as large lakes, and it is well known that obtaining field data is very expensive. Remote monitoring of lake water quality (WQ) provides an economically feasible alternative to field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images for WQ detection provides a realistic technique for measuring quality parameters across huge areas. Landsat (LS) data provides free access to frequently acquired and repeated satellite images, which enables researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), Egypt's primary and strategic reservoir of freshwater. The study focuses on using Landsat 8 (L-8) surface reflectance (SR) observations to predict three water quality parameters: turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical WQ indicators observed in April and atmospherically corrected L-8 SR values of various bands, band ratios, and combinations. Field measurements taken in May were used to validate the WQPs obtained from the SR data of the L-8 Operational Land Imager (OLI) satellite. The findings demonstrate a strong correlation between the WQ indicators and L-8 SR. For TUR, the best validation correlation was obtained with the OLI blue, green, and red SR bands, with a coefficient of determination (R2) of 0.96 and a root mean square error (RMSE) of 3.1 NTU. For TSS, two equations based on band ratios and combinations were strongly correlated and verified; the logarithm of the ratio of blue to green SR was the best performing model, with R2 and RMSE equal to 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were presented for estimating its value within the study area; a combination of blue, red, shortwave infrared 1 (SWIR1) and panchromatic SR yielded the best validation results, with R2 and RMSE equal to 0.98 and 1.4 mg/l, respectively.
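The TSS model described above is an ordinary band-ratio regression. Below is a minimal sketch, assuming NumPy and scikit-learn; the reflectance samples and field TSS values are illustrative, not the calibrated Lake Nasser dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative L-8 surface reflectance samples for bands 2 (blue) and 3 (green)
blue = np.array([0.08, 0.10, 0.12, 0.09, 0.11])
green = np.array([0.06, 0.09, 0.08, 0.07, 0.10])
tss_obs = np.array([14.0, 9.5, 21.3, 12.1, 8.8])  # field TSS (mg/l), illustrative

# Predictor: logarithm of the blue/green reflectance ratio, as in the abstract
X = np.log(blue / green).reshape(-1, 1)
model = LinearRegression().fit(X, tss_obs)
print("R2 =", model.score(X, tss_obs))
print("predicted TSS:", model.predict(X))
```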

Keywords: remote sensing, landsat 8, nasser lake, water quality

Procedia PDF Downloads 86
863 Barriers to Access among Indigenous Women Seeking Prenatal Care: A Literature Review

Authors: Zarish Jawad, Nikita Chugh, Karina Dadar

Abstract:

Introduction: This paper aims to identify barriers Indigenous women face in accessing prenatal care in Canada. It explores the differences in prenatal care received between Indigenous and non-Indigenous women. The objective is to look at changes or programs in Canada's healthcare system that could reduce barriers to accessing safe prenatal care for Indigenous women. Methods: A literature search of 12 papers was conducted using the following databases: PubMed, Medline, OVID, Google Scholar, and ScienceDirect. The studies included were written in English only and involved Indigenous females between the ages of 19 and 35; review articles were excluded. Participants in the studies examined did not have any severe underlying medical conditions for the duration of the study, and the study designs included in the review are prospective cohort, cross-sectional, case report, and case-control studies. Results: Among all the barriers Indigenous women face in accessing prenatal care, the most significant are a lack of culturally safe prenatal care, a lack of services within Indigenous communities, and the distance of prenatal facilities from Indigenous communities together with the costs of transportation. Discussion: The study found three significant barriers Indigenous women face in accessing prenatal care in Canada: the geographical distribution of healthcare facilities, distrust between patients and healthcare professionals, and a lack of cultural sensitivity. Suggested solutions include building more birthing and prenatal care facilities in rural areas for Indigenous women, educating healthcare professionals on culturally sensitive healthcare, and involving Indigenous people in the decision-making process to reduce distrust and power imbalances. Conclusion: The involvement of Indigenous women and community leaders is important in making decisions regarding the implementation of effective healthcare and prenatal programs for Indigenous women. However, further research is required to understand the effectiveness of the solutions and the barriers that make prenatal care less accessible for Indigenous women in Canada.

Keywords: indigenous, maternal health, prenatal care, barriers

Procedia PDF Downloads 137
862 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from high variability, and since it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Moreover, because the empirical estimate is jagged while the true ROC curve is smooth, it tends to underestimate the true curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored. These include using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimator based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results drawn from different distributions and sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods across scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
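The kernel approach can be sketched compactly: estimate a smoothed distribution function for each group and read off sensitivity and 1-specificity across thresholds. This is a minimal sketch with a plain Gaussian kernel and a Silverman-type bandwidth, assuming NumPy/SciPy; it does not implement the paper's boundary correction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 60)    # test scores, non-diseased (illustrative)
diseased = rng.normal(1.2, 1.0, 40)   # test scores, diseased (illustrative)

def kernel_survival(c, sample, h):
    # P(X > c) under a Gaussian-kernel smoothed distribution function
    return 1.0 - norm.cdf((c - sample[:, None]) / h).mean(axis=0)

h0 = 1.06 * healthy.std() * len(healthy) ** -0.2   # Silverman-type bandwidths
h1 = 1.06 * diseased.std() * len(diseased) ** -0.2
thresholds = np.linspace(-4, 5, 200)
fpr = kernel_survival(thresholds, healthy, h0)     # 1 - specificity
tpr = kernel_survival(thresholds, diseased, h1)    # sensitivity
print("smoothed AUC ~", np.trapz(tpr[::-1], fpr[::-1]))
```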

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 145
861 The Use of Coronary Calcium Scanning for Cholesterol Assessment and Management

Authors: Eva Kirzner

Abstract:

Based on outcome studies published over the past two decades, in 2018 the ACC/AHA published new guidelines for the management of hypercholesterolemia that incorporate the use of coronary artery calcium (CAC) scanning as a decision tool for ascertaining which patients may benefit from statin therapy. This use is based on the recognition that the absence of calcium on CAC scanning (i.e., a CAC score of zero) usually signifies the absence of significant atherosclerotic deposits in the coronary arteries. Specifically, in patients at high risk for atherosclerotic cardiovascular disease (ASCVD), initiation of statin therapy is generally recommended to decrease ASCVD risk, whereas among patients at intermediate ASCVD risk, the need for statin therapy is less certain. There is, however, a need for new outcome studies providing evidence that management of hypercholesterolemia based on these new ACC/AHA recommendations is safe for patients. Based on a PubMed and Google Scholar literature search, four relevant population-based or patient-based cohort studies published between 2017 and 2021 that examined the relationship between CAC scanning, risk assessment or mortality, and statin therapy were identified (see references). In each of these studies, patients were assessed for their baseline ASCVD risk using the Pooled Cohorts Equation (PCE), an ACC/AHA calculator that determines patient risk from age, gender, ethnicity, and coronary artery disease risk factors. The combined findings of these four studies provide concordant evidence that a zero CAC score defines patients who remain at low clinical risk despite the non-use of statin therapy. Thus, these new studies confirm the use of CAC scanning as a safe tool for reducing the potential overuse of statin therapy among patients with zero CAC scores. Incorporating these new data suggests the following best practice: (1) ascertain ASCVD risk according to the PCE in all patients; (2) following an initial attempt to lower ASCVD risk with an optimal diet in patients with elevated risk, initiate statin therapy for patients with a high ASCVD risk score; (3) if the ASCVD score is intermediate, refer patients for CAC scanning; and (4) if the CAC score is zero among intermediate-risk ASCVD patients, statin therapy can be safely withheld despite the presence of an elevated serum cholesterol level.
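The four-step best practice can be expressed as a small decision function. The sketch below assumes the conventional 7.5% and 20% ten-year ASCVD risk cut-points for the intermediate and high categories; the function and its inputs are illustrative, not a clinical tool.

```python
def statin_decision(ascvd_risk, cac_score=None):
    """ascvd_risk: 10-year ASCVD risk from the Pooled Cohorts Equation (0-1).
    cac_score: Agatston score from CAC scanning, or None if no scan yet."""
    if ascvd_risk >= 0.20:                   # high risk
        return "initiate statin therapy"
    if ascvd_risk < 0.075:                   # low risk
        return "statin not indicated; optimize diet and lifestyle"
    # intermediate risk: let the CAC scan arbitrate
    if cac_score is None:
        return "refer for coronary artery calcium (CAC) scanning"
    if cac_score == 0:
        return "statin can be safely withheld; reassess periodically"
    return "CAC > 0: initiate statin therapy"

print(statin_decision(0.12))      # intermediate risk, no scan yet
print(statin_decision(0.12, 0))   # intermediate risk, zero CAC score
```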

Keywords: cholesterol, cardiovascular disease, statin therapy, coronary calcium

Procedia PDF Downloads 102
859 Risk Assessment on Construction Management with "Fuzzy Logic"

Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi

Abstract:

Construction projects are initiated in complicated, dynamic environments and, due to the close relationships between project parameters and the unknown outer environment, they face several uncertainties and risks. Success in time, cost and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, long durations, great capital requirements and poor definition of the extent and scope of the project. Projects facing such environments and uncertainties can be well managed through the application of risk management across the project's life cycle. Although the concept of risk depends on the opinions and views of management, it also captures the risk of not achieving the project objectives. Furthermore, project risk analysis addresses the risk of developing inappropriate responses. Since the evaluation and prioritization of construction projects is a difficult task, the network structure is considered an appropriate approach to analyze complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, we face inadequate data in deterministic circumstances, and experts' opinions are usually mathematically vague and introduced in the form of linguistic variables rather than numerical expressions. Since fuzzy logic is used for expressing vagueness and uncertainty, formulating expert opinion in the form of fuzzy numbers is an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics. In this study, we evaluate and prioritize risk parameters and factors in construction management with a fuzzy logic approach that combines three methods: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
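As a flavor of the final ranking stage, here is a minimal sketch of the TOPSIS step on a crisp (defuzzified) decision matrix, assuming NumPy; the projects, criteria scores, and weights are invented, and the DEMATEL/ANP weighting and fuzzy arithmetic stages are omitted.

```python
import numpy as np

# rows: projects, columns: risk criteria (higher score = riskier, i.e. cost-type)
X = np.array([[7.0, 4.0, 6.0],
              [5.0, 8.0, 3.0],
              [6.0, 5.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])               # criteria weights (e.g. from ANP)

V = w * X / np.linalg.norm(X, axis=0)       # weighted normalized matrix
ideal, anti = V.min(axis=0), V.max(axis=0)  # ideal = lowest risk on cost criteria
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # higher = closer to the low-risk ideal
print("ranking (least risky first):", np.argsort(-closeness))
```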

Keywords: fuzzy logic, risk, prioritization, assessment

Procedia PDF Downloads 583
859 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today's businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to Zero Trust, in which essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols and security protocols are the cause of major attacks. With the explosion of cyber security threats such as viruses, worms, rootkits, malware and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next-generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols and security protocols. It thereby forms the basis for detecting attack classes, applying signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from the collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
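The rule-generation step can be illustrated with a toy support/confidence computation over audit-trail events. This is a minimal sketch in plain Python; the event sets and thresholds are illustrative, not the framework's actual rule base.

```python
from itertools import combinations

# Each audit trail is the set of event types observed in one session (illustrative)
audit_trails = [
    {"syn_flood", "port_scan", "root_login"},
    {"syn_flood", "port_scan"},
    {"port_scan", "root_login"},
    {"syn_flood", "port_scan", "root_login"},
]

def support(itemset):
    # fraction of trails containing every event in the itemset
    return sum(itemset <= t for t in audit_trails) / len(audit_trails)

items = set().union(*audit_trails)
for a, b in combinations(sorted(items), 2):
    s = support({a, b})
    conf = s / support({a})               # confidence of the rule a => b
    if s >= 0.5 and conf >= 0.7:          # illustrative mining thresholds
        print(f"{a} => {b}  (support={s:.2f}, confidence={conf:.2f})")
```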

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 220
858 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process

Authors: Chenhao Zhu

Abstract:

Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and the formation of the plan, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the gains in scientific rigor and accuracy that quantitative mapping could bring. This paper therefore proposes a methodology for building a parametric link between mapping and planning formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build the road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of mapping. The research then transforms the radiant mapping from a polygon into a data point matrix, because polygons are difficult to engage directly in design formation. Next, another script selects the main green spaces from the road network based on the criterion of radiant intensity and connects the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green spaces and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis. The parametric link between mapping and planning will bring about more accurate, objective, and scientific planning.
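The selection step, picking candidate green-space cells from the data point matrix with a radiant-intensity control parameter, can be sketched as follows, assuming NumPy; the grid values, threshold, and the direction of the criterion are illustrative choices, and the actual workflow runs in Grasshopper/Ladybug rather than plain Python.

```python
import numpy as np

radiant = np.array([[5.1, 2.3, 1.8],     # radiant intensity sampled on the block
                    [4.7, 2.0, 1.2],     # grid (kWh/m2), illustrative values
                    [3.9, 3.5, 0.9]])
threshold = 2.5                          # the control parameter of the plan

# Here cells below the threshold are taken as green-space candidates;
# the direction of the criterion is a design choice, not fixed by the paper.
rows, cols = np.where(radiant < threshold)
green_points = list(zip(rows.tolist(), cols.tolist()))
print("green-space cells:", green_points)
print("corridor anchor (centroid):", rows.mean(), cols.mean())
```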

Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper

Procedia PDF Downloads 133
857 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai being traditionally the epicenter of India's trade and commerce, the existing major ports, such as Mumbai Port and Jawaharlal Nehru (JN) Port situated in the Thane estuary, are also developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancing shoreline, while the jetty near Ulwe faces ship-scheduling problems due to shallow depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration, normally obtained by field measurements. However, field measurement is a tedious and costly affair; artificial intelligence was therefore applied to predict water levels by training a network on the tide data measured over one lunar tidal cycle. A two-layered feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over a lunar tidal cycle (2013) were used to train, validate and test the neural networks. The trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give reasonably accurate estimates of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data reveals that the tide at Pir-Pau, Vashi and Ulwe is amplified by about 10-20 cm with a phase lag of 10-20 minutes relative to the tide at Apollo Bunder (Mumbai); that the LM training algorithm is faster than GD, and network performance increases with the number of neurons in the hidden layer; and that the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water for planning pumping operations at Pir-Pau and improving ship scheduling at Ulwe.
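A minimal sketch of such a feed-forward tide predictor is shown below, assuming scikit-learn; the synthetic M2-like series stands in for the Apollo Bunder and Ulwe gauge records, and L-BFGS is used because scikit-learn provides no Levenberg-Marquardt solver.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(0, 24 * 30, 0.5)                        # half-hourly, 30 days (h)
apollo = 2.0 * np.sin(2 * np.pi * t / 12.42)          # M2-like tide at Apollo (m)
ulwe = 1.15 * 2.0 * np.sin(2 * np.pi * (t - 0.3) / 12.42)  # amplified, lagged

# Use the previous and current Apollo levels so the lagged mapping is well defined
X = np.column_stack([apollo[:-1], apollo[1:]])
y = ulwe[1:]
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, y)
print("R =", np.corrcoef(y, net.predict(X))[0, 1])    # correlation coefficient
```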

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 470
856 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
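A minimal sketch of the beta-binomial posterior predictive computation is shown below, assuming SciPy; the prior, observed counts, and batch size are illustrative, and a simple highest-mass rule stands in for the paper's HPD interval construction.

```python
import numpy as np
from scipy.stats import betabinom

a, b = 1.0, 1.0                 # Beta prior on the underlying CI rate (illustrative)
s, m = 12, 400                  # observed events in m past cases (illustrative)
n = 100                         # size of the next monitoring batch
post = betabinom(n, a + s, b + m - s)   # posterior predictive for event counts

pmf = post.pmf(np.arange(n + 1))
order = np.argsort(pmf)[::-1]           # greedy highest-posterior-density set
hpd = np.sort(order[np.cumsum(pmf[order]) <= 0.95])
print("95% HPD control region:", hpd.min(), "to", hpd.max())
# A batch whose event count falls outside this region would signal on the chart.
```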

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 197
855 Determining Optimum Locations for Runoff Water Harvesting in W. Watir, South Sinai, Using RS, GIS, and WMS Techniques

Authors: H. H. Elewa, E. M. Ramadan, A. M. Nosair

Abstract:

Rainwater harvesting is considered an important tool for overcoming water scarcity in arid and semi-arid regions. Wadi Watir, in the southeastern part of the Sinai Peninsula, is one of the main active basins in the Gulf of Aqaba drainage system. It is characterized by steep hills consisting mainly of impermeable rocks, whereas the streambeds are covered by a highly permeable mixture of gravel and sand. A comprehensive approach involving the integration of geographic information systems, remote sensing and watershed modeling was followed to identify the runoff water harvesting (RWH) capability in this area. Eight thematic layers, viz. volume of annual flood, overland flow distance, maximum flow distance, rock or soil infiltration, drainage frequency density, basin area, basin slope and basin length, were used as a multi-parametric decision support system for conducting weighted spatial probability models (WSPMs) to determine potential areas for RWH. The WSPM maps classify the area into five RWH potentiality classes, ranging from very low to very high. The three WSPM scenarios performed for W. Watir produced identical results for the high and very high RWH potentiality classes, which are the most suitable ones for applying surface water harvesting techniques. The three scenarios also match reasonably well for the moderate, low and very low potentiality classes. The WSPM results show that the high and very high classes, the most suitable for RWH, represent approximately 40.23% of the total basin area. Accordingly, several locations were identified for the establishment of water harvesting dams and cisterns to improve the water conditions and living environment in the study area.
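A WSPM overlay is, at heart, a weighted sum of normalized thematic layers followed by classification. The following is a minimal sketch, assuming NumPy; the two tiny rasters, weights, and class breaks are illustrative stand-ins for the eight layers used in the study.

```python
import numpy as np

# Normalized thematic layers on a common grid (0 = poor, 1 = good for RWH)
slope = np.array([[0.2, 0.8], [0.6, 0.9]])
runoff = np.array([[0.4, 0.7], [0.5, 1.0]])
weights = {"slope": 0.4, "runoff": 0.6}   # weights from the multi-parametric DSS

score = weights["slope"] * slope + weights["runoff"] * runoff
# Five potentiality classes: 0 = very low ... 4 = very high
classes = np.digitize(score, [0.2, 0.4, 0.6, 0.8])
print(classes)   # RWH potentiality class per cell
```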

Keywords: Sinai, Wadi Watir, remote sensing, geographic information systems, watershed modeling, runoff water harvesting

Procedia PDF Downloads 351
854 The Functional Roles of Right Dorsolateral Prefrontal Cortex and Ventromedial Prefrontal Cortex in Risk-Taking Behavior

Authors: Aline M. Dantas, Alexander T. Sack, Elisabeth Bruggen, Peiran Jiao, Teresa Schuhmann

Abstract:

Risk-taking behavior has been associated with the activity of specific prefrontal regions of the brain, namely the right dorsolateral prefrontal cortex (rDLPFC) and the ventromedial prefrontal cortex (VMPFC). While deactivation of the rDLPFC has been shown to lead to increased risk-taking behavior, the functional relationship between VMPFC activity and risk-taking behavior has yet to be clarified. Correlational evidence suggests that the VMPFC is involved in valuation processes underlying risky choices, but evidence on the functional relationship is lacking. Therefore, this study uses brain stimulation to investigate the role of the VMPFC during risk-taking behavior and to replicate current findings regarding the role of the rDLPFC in the same phenomenon. We used continuous theta-burst stimulation (cTBS) to inhibit either the VMPFC or the rDLPFC during execution of the computerized Maastricht Gambling Task (MGT) in a within-subject design with 30 participants. We analyzed the effects of stimulation on risk-taking behavior, on participants' choices of probabilities and average values, and on response time. We hypothesized that, compared to sham stimulation, VMPFC inhibition leads to a reduction in risk-taking behavior by reducing the appeal of higher-value options and, consequently, the attractiveness of riskier options, whereas rDLPFC inhibition should lead to an increase in risk-taking due to a reduction in cognitive control, confirming existing findings. Stimulation of both the rDLPFC and the VMPFC led to an increase in risk-taking behavior and an increase in the average value chosen, compared to sham. No significant effect on chosen probabilities was found. A significant increase in response time was observed exclusively after rDLPFC stimulation. Our results indicate that inhibiting the rDLPFC and the VMPFC separately leads to similar effects, increasing both risk-taking behavior and average value choices, which is likely due to the strong anatomical and functional interconnection of the VMPFC and rDLPFC.

Keywords: decision-making, risk-taking behavior, brain stimulation, TMS

Procedia PDF Downloads 101
853 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality, thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) and dense autoencoders, as well as Generative Adversarial Network (GAN) models, are used to detect both point and collective anomalies. To preserve the privacy of industrial information, the smart contracts employ techniques ensuring that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of data storage through strong cryptography, and the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records through the entire production chain, taking advantage of the multitude of monitoring records in their databases.
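The reconstruction-error idea behind the autoencoder detectors can be sketched compactly. The following assumes scikit-learn and uses a dense autoencoder; the sensor stream and threshold rule are illustrative, and the LSTM and GAN variants are not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(50.0, 2.0, size=(500, 8))          # 8 sensor channels, illustrative
mu, sigma = normal.mean(axis=0), normal.std(axis=0)    # standardize the stream
ae = MLPRegressor(hidden_layer_sizes=(3,), solver="lbfgs", max_iter=3000,
                  random_state=0).fit((normal - mu) / sigma, (normal - mu) / sigma)

def anomaly_score(x):
    z = (x - mu) / sigma
    return np.mean((ae.predict(z) - z) ** 2, axis=1)   # reconstruction error

threshold = np.percentile(anomaly_score(normal), 99)   # illustrative cutoff
batch = np.vstack([rng.normal(50, 2, (3, 8)), [[80.0] * 8]])  # last row is an outlier
print(anomaly_score(batch) > threshold)  # flag rows before writing pointers on-chain
```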

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 175
852 The Visualization of Hydrological and Hydraulic Models Based on the Platform of Autodesk Civil 3D

Authors: Xiyue Wang, Shaoning Yan

Abstract:

Cities in China today are faced with an increasingly serious river ecological crisis accompanying urbanization: waterlogging on account of the fragmented urban natural hydrological system, and limited ecological function of the hydrological system caused by the destruction of the water system and the waterfront ecological environment. Additionally, the eco-hydrological processes of rivers are affected by various environmental factors, which are more complex in the urban context. Therefore, efficient hydrological monitoring and analysis tools, and accurate, visual hydrological and hydraulic models, are becoming an increasingly important basis for decision-makers and an important way for landscape architects to solve urban hydrological problems and formulate sustainable, forward-looking schemes. This study introduces a river and flood analysis model based on the Autodesk Civil 3D platform. Taking the Luanhe River in Qian'an City, Hebei Province, as an example, 3D models of the landform, river, embankment, shoal, pond, underground stream and other land features were first built, with which water transfer simulation analysis, river floodplain analysis, and river ecology analysis were carried out; ultimately, real-time visualized simulation and analysis of the river under various hypothetical scenarios were realized. Through the establishment of a digital hydrological and hydraulic model, hydraulic data can be simulated accurately and intuitively, providing a basis for the design of a rational water system and a benign urban ecological system. The hydrological and hydraulic model based on Autodesk Civil 3D nevertheless has its limitations: interaction between the model and other data and software is unfavorable, and the huge volume of 3D data together with the lack of basic data restricts the model's accuracy and range of application. Even so, the model provides a convenient and intelligent tool for urban planning and monitoring, and a solid basis for further urban research and design.

Keywords: visualization, hydrological and hydraulic model, Autodesk Civil 3D, urban river

Procedia PDF Downloads 285
851 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci, one locus per site, each consisting of the points that are closer to the given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a resource for reducing computations. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution follows the reduction scheme: preprocessing constructs the set of sites from the vertices and edges of the polygons, with each site oriented such that the interior of the polygon lies to its left; the proposed algorithm then constructs the VD for the set of oriented sites with the sweepline paradigm; postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through two fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons; the concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The event list in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant rather than logarithmic time, unlike the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, no full-scale implementation of that algorithm for an arbitrary set of segment sites has been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
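The preprocessing step described above, turning each polygon into oriented sites whose left side faces the interior, is straightforward to sketch. The following assumes vertices are given in counter-clockwise order; the sweepline core itself is not reproduced here.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def oriented_sites(polygon: List[Point]) -> list:
    """Vertices in CCW order => the interior lies to the left of each edge."""
    sites = []
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        sites.append(("segment", a, b))   # oriented edge a -> b
        sites.append(("vertex", a))       # shared endpoint of adjacent edges
    return sites

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # CCW square
for s in oriented_sites(square)[:4]:
    print(s)
```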

Keywords: voronoi diagram, sweepline, polygon sites, fortune's algorithm, segment sites

Procedia PDF Downloads 170
850 New Wine in an Old Bottle? Zhong-Yong Thinking and Creativity

Authors: Li-Fang Chou, Chun-Jung Tseng, Sung-Chun Tsai

Abstract:

Zhong-Yong represents unique values and cognitive beliefs of Chinese culture. Zhong-Yong thinking emphasizes (a) holistic thinking and perspective taking, (b) tolerance of contradictions, and (c) the pursuit of interpersonal and inner harmony. Reflecting a unique form of naïve dialectical thinking rooted in Chinese culture, previous studies have found that people higher in Zhong-Yong thinking have more cognitive resources and resilience for making decisions in dilemmas and coping with stress. Creativity is defined as behavior that creates novel and valuable products and is viewed as vital capital for individuals and enterprises. However, the relationship between Zhong-Yong thinking and creativity remains unexplored. Three studies were conducted to explore the effects of Zhong-Yong thinking on creativity. In Study 1, with 87 undergraduate students from a university in southern Taiwan as participants, we used a questionnaire to measure Zhong-Yong thinking and administered a creative task (the unusual uses task) to obtain indicators of fluency and flexibility. After controlling for background variables and Big Five openness to experience, the results showed that Zhong-Yong thinking had significant positive effects on fluency and flexibility. In Study 2, 97 undergraduate students were recruited to complete a Zhong-Yong thinking task and a creative task. The results showed that, compared with the control group, participants primed with Zhong-Yong thinking had higher creative performance. In Study 3, we adopted a questionnaire survey with a sample of 397 employees from private enterprises in Taiwan. Besides the main effects of Zhong-Yong thinking, its moderating effects on the relationship between leadership behavior and employees' creative performance were also investigated. We found that (a) Zhong-Yong thinking was positively associated with creative performance, and (b) Zhong-Yong thinking strengthened the positive effects of transformational and authoritative leadership on creative performance. Finally, theoretical and practical implications, limitations, and future directions are discussed.

Keywords: Zhong-Yong thinking, creativity and creative performance, unusual uses task, transformational leadership, authoritative leadership

Procedia PDF Downloads 567
849 Cost-Effectiveness Analysis of the Use of COBLATION™ Knee Chondroplasty versus Mechanical Debridement in German Patients

Authors: Ayoade Adeyemi, Leo Nherera, Paul Trueman, Antje Emmermann

Abstract:

Background and objectives: Radiofrequency (RF) generated plasma chondroplasty is considered a promising treatment alternative to mechanical debridement (MD) with a shaver. The aim of the study was to perform a cost-effectiveness analysis, from a payer perspective, comparing costs and outcomes following COBLATION chondroplasty versus mechanical debridement in patients with knee pain associated with a medial meniscus tear and an idiopathic ICRS grade III focal lesion of the medial femoral condyle. Methods: A decision-analytic model was developed comparing economic and clinical outcomes between the two treatment options in German patients following knee chondroplasty. Revision rates based on the frequency of repeat arthroscopy, osteotomy and conversion to total knee replacement, together with reimbursement costs and outcomes data over a 4-year time horizon, were extracted from the published literature. One-way sensitivity analyses were conducted to assess uncertainty around model parameters. Threshold analysis determined the revision rate at which the model results change. All costs were reported in 2016 euros; future costs were discounted at a 3% annual rate. Results: Over the 4-year period, COBLATION chondroplasty resulted in an overall net cost saving of €461, owing to a lower revision rate of 14% compared with 48% for MD. Threshold analysis showed that the two options were associated with comparable costs if the COBLATION revision rate was assumed to increase up to 23%. The initial procedure costs for COBLATION were higher than for MD, and outcome scores were significantly improved at 1 and 4 years post-operation versus MD. Conclusion: The analysis shows that COBLATION chondroplasty is a cost-effective option compared with mechanical debridement in the treatment of patients with a medial meniscus tear and an idiopathic ICRS grade III defect of the medial femoral condyle.
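The threshold analysis amounts to simple expected-cost arithmetic. The sketch below uses hypothetical procedure and revision costs chosen so that the qualitative picture matches the abstract (savings at a 14% revision rate, parity near 23%); the actual German reimbursement tariffs are not reproduced here.

```python
def expected_cost(procedure_cost, revision_rate, revision_cost):
    # expected per-patient cost = upfront procedure + chance of one revision
    return procedure_cost + revision_rate * revision_cost

md = expected_cost(2000.0, 0.48, 4000.0)        # hypothetical MD costs (EUR)
for rate in (0.14, 0.23, 0.30):                 # vary the COBLATION revision rate
    cob = expected_cost(3000.0, rate, 4000.0)   # hypothetical COBLATION costs
    print(f"revision rate {rate:.0%}: saving vs MD = {md - cob:+.0f} EUR")
```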

Keywords: COBLATION, cost-effectiveness, knee chondroplasty, mechanical debridement

Procedia PDF Downloads 381
848 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network

Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour

Abstract:

In recent years, software-defined networking has come into the sight of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes engage together within a single device in the network infrastructure such as switches and routers, the two planes are kept separate in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-plane devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and a lack of network service for legitimate users. Thus, protecting this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day, so it is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden placed on the SDN network by numerous DDoS attacks. The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.

Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network

Procedia PDF Downloads 162
847 Use of Transportation Networks to Optimize the Profit Dynamics of Product Distribution

Authors: S. Jayasinghe, R. B. N. Dissanayake

Abstract:

Optimization modelling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision making in real-world applications. This study developed a mathematical model to optimize net profit by minimizing transportation cost. The model covers transportation from decentralized production plants to a centralized distribution centre and onward distribution to island-wide agencies, treating customer satisfaction as a requirement. The company produces 9 types of food items in 82 varieties and 4 types of non-food items in 34 varieties. Of its 6 production plants, 4 are located near the city of Mawanella and the other 2 in Galewala and Anuradhapura, which are 80 km and 150 km from Mawanella respectively. The warehouse located in Mawanella was the main production plant and also the only distribution plant; it distributes manufactured products to 39 agencies island-wide. Average values and quantities of goods for 6 consecutive months, from May 2013 to October 2013, were collected and average demand values calculated. The model imposes the following constraints: there is one source and 39 destinations, and total supply equals total demand across all agencies. Using the transport cost per kilometre, the total transport cost was calculated, and the model was formulated in terms of distances and distribution flows. Network optimization and linear programming techniques were used to build the model, and Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil its customers' requirements for a month, which is far less than the cost incurred without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers, and that customer satisfaction, currently around 85%, can be increased to 97%. This model can therefore be used by other similar companies to reduce their transportation costs.
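
A minimal sketch of the underlying transportation linear program is shown below, using SciPy in place of Excel Solver and a toy instance with 3 plants and 4 agencies (the study used 39 agencies). The distances, unit cost, supplies, and demands are illustrative placeholders, not the study's data.

```python
# Toy transportation LP: x[i][j] = units shipped from plant i to agency j,
# objective = cost_per_km * distance * flow, with supply = demand in total.
import numpy as np
from scipy.optimize import linprog

cost_per_km = 0.5                      # hypothetical Rs per unit-km
dist = np.array([[12, 40, 80, 150],    # plant 0 (Mawanella area)
                 [90, 60, 20, 130],    # plant 1 (Galewala)
                 [160, 120, 70, 30]])  # plant 2 (Anuradhapura)
supply = np.array([500, 300, 200])
demand = np.array([250, 350, 250, 150])  # sums to the total supply

m, n = dist.shape
c = (cost_per_km * dist).flatten()     # objective coefficients

# Equality constraints: each plant ships its supply, each agency gets its demand.
A_eq, b_eq = [], []
for i in range(m):                     # supply rows
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                     # demand columns
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("Minimum transport cost:", res.fun)
print("Shipment plan:\n", res.x.reshape(m, n))
```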

Keywords: mathematical model, network optimization, linear programming

Procedia PDF Downloads 337
846 Effective Coaching for Teachers of English Language Learners: A Gap Analysis Framework

Authors: Armando T. Zúñiga

Abstract:

As the number of English Language Learners (ELLs) in public schools continues to grow, so does the achievement gap between ELLs and other student populations. In an effort to support classroom teachers with effective instructional strategies for this student population, many districts have created instructional coaching positions specifically to support classroom teachers of ELLs, known as ELL Teachers on Special Assignment (ELL TOSAs). This study applied a gap analysis framework to the ELL TOSA professional support program in one California school district to examine knowledge, motivation, and organizational (KMO) influences on the ELL TOSAs' goal of supporting classroom teachers of ELLs. Three themes emerged from the data analysis. First, there was evidence of an interaction between knowledge and the organization. Data from the ELL TOSAs indicated an understanding of the role collaboration plays in coaching and of how to operationalize it in their support of teachers. All of the ELL TOSAs reported receiving professional development on effective strategies for instructional coaching, a large percentage reported knowledge of modeling as an effective coaching practice, and all reported knowledge of feedback as an effective coaching strategy; however, there was not sufficient evidence that the latter two strategies were learned through professional development. Second, there was evidence of an interaction between motivation and the organization: some ELL TOSAs indicated that their sense of self-efficacy was affected by conflicting roles and expectations for the job, and most indicated that it was affected by an increased workload brought about by fiscal decision making. Finally, there was evidence of an interaction between the organization and motivation: the majority of ELL TOSAs indicated that confusion about how their roles are perceived left them feeling that their actions did not contribute to instructional change. In conclusion, five research-based recommendations to support ELL TOSA goal attainment and considerations for future research on instructional coaches for classroom teachers of ELLs are provided.

Keywords: English language development, English language acquisition, language and leadership, language coaching, English language learners, second language acquisition

Procedia PDF Downloads 93
845 Assessing Livelihood Vulnerability to Climate Change and Adaptation Strategies in Rajanpur District, Pakistan

Authors: Muhammad Afzal, Shahbaz Mushtaq, Duc-Anh-An-Vo, Kathryn Reardon Smith, Thanh Ma

Abstract:

Climate change has become one of the most challenging environmental issues of the 21st century. Climate change-induced natural disasters, especially floods, are major drivers of livelihood vulnerability, impacting millions of individuals worldwide. Evaluating and mitigating the effects of floods requires an in-depth understanding of the relationship between vulnerability and livelihood capital assets. Using an integrated approach combining the sustainable livelihood framework with systems thinking, the study developed a conceptual model of a generalized livelihood system in District Rajanpur, Pakistan. The model visualizes the livelihood system as a whole and identifies the key feedback loops likely to influence livelihood vulnerability. The study suggests that such conceptual models provide effective communication and understanding tools for stakeholders and decision-makers to anticipate problems and design appropriate policies; they can also serve as an evaluation technique for rural livelihood policy and identify key systemic interventions. The key finding of the study is that household income, health, and education are the major factors behind the livelihood vulnerability of the rural poor of District Rajanpur. The Pakistani government has tried to reduce the livelihood vulnerability of the region through various income, health, and education programs, but many changes are still required to make these programs effective, especially during floods. The government has provided only cash to vulnerable and marginalized families through income support programs; this study suggests that, along with cash, the government should provide seed storage facilities and access to crop insurance for farmers. Similarly, the government should establish basic health units in villages and arrange frequent visits by mobile medical vans with advanced medical laboratory facilities during and after floods.
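
As a loose illustration of the kind of feedback loop such a conceptual model captures (income funds health and education, which rebuild earning capacity, while floods deplete income), the sketch below simulates one such loop over time. Every parameter is hypothetical and purely illustrative, not drawn from the study.

```python
# Minimal system-dynamics-style sketch of one reinforcing loop the model
# implies: income -> health/education spending -> earning capacity -> income,
# with flood shocks depleting household income. All numbers are hypothetical.

def simulate(years=10, flood_years=(3, 7)):
    income, capacity = 100.0, 1.0        # income stock and earning capacity
    for year in range(years):
        if year in flood_years:
            income *= 0.5                # flood wipes out half of assets
        spending = 0.2 * income          # health/education investment
        capacity += 0.01 * spending      # capability builds slowly
        capacity *= 0.98                 # depreciation of capability
        income = income * 0.9 + 20 * capacity  # earnings track capacity
        print(f"year {year}: income={income:6.1f} capacity={capacity:.2f}")

simulate()
```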

Keywords: livelihood vulnerability, rural communities, flood, sustainable livelihood framework, system dynamics, Pakistan

Procedia PDF Downloads 42
844 Absolute Lymphocyte Count as Predictor of Pneumocystis Pneumonia in Patients With Unknown HIV Status at a Private Tertiary Hospital

Authors: Marja A. Bernardo, Coreena A. Bueser, Cybele Lara R. Abad, Raul V. Destura

Abstract:

Pneumocystis jirovecii pneumonia (PCP) is the most common opportunistic infection among people with HIV. Early consideration of PCP is warranted even in patients whose HIV status is unknown, as delay in treatment may be fatal. The absolute lymphocyte count (ALC) has been suggested as an alternative predictor of PCP, especially in resource-limited settings where PCR testing is costly or delayed. Objective: To determine whether the ALC can be used as a screening tool to predict Pneumocystis pneumonia in patients with unknown HIV status admitted to a private tertiary hospital. Methods: A retrospective cross-sectional study was conducted at a private tertiary medical center. Inpatient medical records of patients aged 18 years and above from January 2012 to May 2014, in whom a clinical diagnosis of Pneumocystis jirovecii pneumonia was made, were reviewed for inclusion. Demographic data, clinical features, hospital course, PCP PCR and HIV results were recorded. Independent t-tests and chi-square analysis were used to test for statistical differences between the PCP-positive and PCP-negative groups, and the Mann-Whitney U-test was used to compare length of hospital stay. Results: There were no statistically significant differences in baseline characteristics between the PCP-positive and PCP-negative groups. While both the percent lymphocyte count (0.14 ± 0.13 vs 0.21 ± 0.16) and the ALC (1160 ± 528.67 vs 1493.70 ± 988.61) were lower in the PCP-positive group, only the percent lymphocyte count reached statistical significance (p = 0.042, versus p = 0.067 for the ALC). Conclusion: A quick determination of the ALC may be useful as an additional parameter to help screen for and diagnose Pneumocystis pneumonia. In our study, the ALC of patients with PCP appeared to be lower than in patients without PCP, and a low ALC (e.g., below 1200) may inform the decision regarding empiric treatment. However, it should be used in conjunction with the patient's clinical presentation and other diagnostic tests. Larger, prospective studies incorporating the ALC with other clinical predictors are needed to optimally identify those who would benefit from empiric or expedited management for potential PCP.
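
As a small illustration of the group comparisons described, the sketch below reruns the independent t-tests directly from the reported summary statistics using SciPy. The group sizes are hypothetical, since the abstract does not report them, so the computed p-values are illustrative only.

```python
# Rerun the two-group comparisons from the reported means/SDs; group sizes
# below are hypothetical placeholders, so the p-values are illustrative.
from scipy import stats

n_pcp, n_no_pcp = 20, 40  # hypothetical group sizes (not in the abstract)

# Independent t-test from summary statistics: ALC (cells/uL).
t, p_alc = stats.ttest_ind_from_stats(1160, 528.67, n_pcp,
                                      1493.70, 988.61, n_no_pcp)
print(f"ALC: t = {t:.2f}, p = {p_alc:.3f}")

# Percent lymphocyte count, expressed as a fraction.
t, p_pct = stats.ttest_ind_from_stats(0.14, 0.13, n_pcp,
                                      0.21, 0.16, n_no_pcp)
print(f"%lymphocytes: t = {t:.2f}, p = {p_pct:.3f}")

# The length-of-stay comparison would need raw per-patient values, e.g.:
# stats.mannwhitneyu(stay_pcp, stay_no_pcp)
```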

Keywords: Pneumocystis carinii pneumonia, absolute lymphocyte count, infection, PCP

Procedia PDF Downloads 346
843 The Liberal Tension of the Adversarial Criminal Procedure

Authors: Benjamin Newman

Abstract:

The picture of an adverse contest between two parties has often been used as an archetypal description of the Anglo-American adversarial criminal trial. In actuality, however, guilty pleas and plea bargains have dominated the procedure for over half a century. Characterised by two adverse parties, the court adjudicative systems of the Anglo-American world adhere to the adversarial procedure, and while further features have been attributed to it and the values embedded within it vary, it remains a system for which we have no adequate theory. Damaska argued that the adversarial, conflict-resolution mode of administering justice stems from a laissez-faire conception of a value-neutral liberal state. The court's neutrality, however, has additionally been rationalised in light of its liberal end as a safeguard against the state's coercive force. These two conceptions of the court's neutrality conflict in cases where the court's bystanding role disposes of its liberal duty to safeguard the individual. This is noticeable in plea bargains, where the defendant has the liberty to plead guilty despite concerns over wrongful conviction and deprivation of liberty. It is an inner liberal tension within the notion of criminal adversarialism, between the laissez-faire mode, which grants autonomy to the parties, and the safeguarding liberal end of the trial. Langbein asserted that the adversarial system is a criminal procedure for which we have no adequate theory, and it is by reference to political and moral theories that this research aims to articulate a normative account. The paper reflects on the above liberal tension and, by reference to Duff's 'calling-to-account' theory, argues that autonomy is of inherent value to the criminal process, being a constitutive element of the process of being called to account. While the aspiration is that the defendant's guilty plea should be genuine, the guilty-plea decision must be voluntary if it is to be considered a performative act of accountability. Thus, by valuing procedural autonomy as a necessary element of the criminal adjudicative process, the account assimilates a liberal procedure while maintaining the liberal end by holding the defendant to account.

Keywords: liberal theory, adversarial criminal procedure, criminal law theory, liberal perfectionism, political liberalism

Procedia PDF Downloads 83
842 Fostering Positive Mindset: Grounded Theory Study of Self-Awareness in Emerging Adults

Authors: Maha Ben Salem

Abstract:

The transformative nature of emerging adulthood brings about a development of self-processes, including changes in self-esteem and personal goals. Success in this life stage entails the emotional growth necessary to navigate the demands and challenges of college life. Understanding self-awareness within this age group sheds light on emerging adults' internal world and the transformative aspect of their emotional growth. Uncovering the thought processes that foster or hinder self-awareness is important for understanding how emerging adults learn to make their mindsets positive or negative. However, existing research on self-awareness has explored this phenomenon mostly through quantitative methodology or by tying an individual's level of self-awareness to specific actions or outcomes. Little is known about the process by which emerging adult college students notice and monitor their inner thoughts and emotions. Methodology and theoretical orientation: A grounded theory study using in-depth semi-structured interviews was conducted; nine interviews were completed. A constructionist framework was employed to generate a theory of how self-awareness facilitates specific patterns of thinking in emerging adults. The choice of grounded theory stems from the lack of knowledge regarding the underlying thinking procedures and internal states that emerging adult college students navigate in an attempt to make meaning of a new academic experience and life stage. Findings: Initial data analysis generated the following categories of the theory: (a) a non-judgmental perception of negative thinking and negative emotions allows for a better understanding of the self; (b) a negative state of mind is easier to overcome when it is accepted and acknowledged; (c) knowledge of the actual and desired self generates an intentional decision to shift to a positive mindset. Preliminary findings indicate that the college academic and social environment fosters a new understanding of the self that yields a change in mindset and in self-knowledge.

Keywords: college environment, emergent adults, grounded theory, positive mindset, self-awareness

Procedia PDF Downloads 122
841 Local and Global Sustainability: The Case Study of Beja Municipality's Local Agenda 21 Operationalization Challenges

Authors: Maria Inês Faria, João Miguel Simão

Abstract:

The Sustainable Development paradigm is frequently considered a flagship of contemporary societies and has been assuming different nuances in local and global dialogues. This reveals the ambivalent character associated with its implementation, due in particular to the kinds of synergies that political institutions, social organizations and citizens can actually create. The Sustainable Development concept needs further discussion if it is to be useful in decision-making processes; indeed, the polysemic nature of the concept has consistently undermined its credibility, leading, among other factors, to the gap between talk and action as well as to misappropriations of the notion. The present study focuses on the importance of questioning the operationalization of sustainable development ('walking the talk') and intends, in a broad sense, to identify the prospects and elements of sustainability included in strategic plans (global, national and local) and, in a strict sense, to confront discourse with practice in the context of local public policies for sustainable development, in particular the implementation of Local Agenda 21 in the municipality of Beja (Portugal), in order to analyze to what extent the strategies adopted and implemented are aligned with the paradigm of sustainable development. The method is based on critical analysis of the literature and official documentation, using three complementary approaches: a) an exploratory review of the literature to identify publications on sustainability and sustainable development; b) a complementary review of the official documentation on the adoption and implementation of sustainable development produced at the global, regional, national and local levels; c) an analysis of the official documentation expressing the policy options, strategic lines and actions for implementing sustainable development in the Municipality of Beja. The main results highlight how the Municipality of Beja's sustainability policies align with what is officially stipulated for the promotion of sustainable development on the international agenda, stressing the potential, constraints and challenges of Local Agenda 21 implementation.

Keywords: sustainable development, Local Agenda 21, sustainable local public policies, Beja

Procedia PDF Downloads 269