Search results for: LCA uncertainty
406 Post Covid-19 Landscape of Global Pharmaceutical Industry
Authors: Abu Zafor Sadek
Abstract:
Pharmaceuticals were among the least impacted business sectors during the corona pandemic, as they were at the center of the Covid-19 fight. Emergency use authorization, unproven indications of some commonly used drugs, self-medication, individual countries' research and production capacity, the vaccine production capacity of many countries, uncertainty related to Active Pharmaceutical Ingredients (APIs), information gaps among manufacturers, practitioners and users, export restrictions, the duration of lockdowns, lack of harmony in transportation, disruption of the regulatory approval process, suddenly increased demand for hospital items and protective equipment, panic buying, difficulties with in-person product promotion, e-prescription, geopolitics and associated issues added a new dimension to this industry. Although the industry maintained reasonable growth throughout the Covid-19 period, it has been characterized by both long- and short-term effects. Short-term effects are already visible in many countries, especially those that are import-dependent and have limited research capacity, while it will take some more time for the long-term effects to emerge. Supply chain disruption, changes in strategic planning, new communication models, shrinking job opportunities and rapid digitalization are the major short-term effects, whereas the long-term effects include a shift towards self-sufficiency, changing growth patterns of certain products, special attention to clinical studies, automation of operations, and a widening arena of ethical issues. This qualitative and exploratory study therefore identifies the post-Covid-19 landscape of the global pharmaceutical industry.
Keywords: covid-19, pharmaceutical, business, landscape
Procedia PDF Downloads 92
405 An Application of Bidirectional Option Contract to Coordinate a Dyadic Fashion Apparel Supply Chain
Authors: Arnab Adhikari, Arnab Bisi
Abstract:
Since its inception, the fashion apparel supply chain has faced the problem of high demand uncertainty. Demand volatility often compels a supply chain member to incur substantial holding cost in the case of overproduction and substantial opportunity cost in the case of underproduction, leading to an uncoordinated fashion apparel supply chain. Several scholarly works achieve coordination in the fashion apparel supply chain by employing different contracts such as the buyback contract, the revenue sharing contract, and the option contract. In particular, the application of the option contract in the apparel industry has become prevalent with the changing global scenario. Exploration of the existing literature on the option contract reveals that most research works concentrate on one-directional demand adjustment, i.e., matching demand either upwards or downwards. Here, we present a holistic approach to coordinating a dyadic fashion apparel supply chain comprising one manufacturer and one retailer with the help of a bidirectional option contract. We show that a combination of a wholesale price contract and a bidirectional option contract can coordinate such a supply chain. We also propose a framework that captures the variation of the apparel retailer's order quantity and the apparel manufacturer's production quantity with a changing exercise price for different ranges of the option price. We analytically show that the cost parameters of the supply chain members, along with the nature of the demand distribution, play an instrumental role in coordination as well as in the retailer's ordering decision.
Keywords: fashion apparel supply chain, supply chain coordination, wholesale price contract, bidirectional option contract
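The intuition behind a bidirectional option can be illustrated with a toy newsvendor-style calculation. All prices, the refund rule, and the uniform demand below are invented simplifications for illustration, not the paper's analytical model: the retailer commits a firm order and buys options that, once demand is observed, can each be exercised either upwards (buy an extra unit) or downwards (cancel a committed unit).

```python
# Hypothetical prices (not from the paper): wholesale w, retail p,
# option premium c_o, upward exercise price c_e, cancellation refund r,
# salvage value s. Demand is uniform on {0, ..., 200}.
w, p, c_o, c_e, r, s = 6.0, 10.0, 0.5, 7.0, 4.0, 2.0

def expected_profit(firm_q, options):
    """Retailer's expected profit with a firm order plus bidirectional
    options, each exercisable to add one unit (paying c_e) or cancel one
    committed unit (refunded at r) after demand is observed."""
    total = 0.0
    for d in range(201):
        up = min(options, max(d - firm_q, 0))      # adjust upwards
        down = min(options, max(firm_q - d, 0))    # adjust downwards
        sold = min(d, firm_q) + up
        leftover = max(firm_q - d - down, 0)       # salvaged units
        total += (p * sold + r * down + s * leftover
                  - w * firm_q - c_o * options - c_e * up)
    return total / 201
```

With these numbers the flexibility premium pays for itself: holding options raises expected profit relative to the plain wholesale order, which is the coordination benefit the abstract describes.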
Procedia PDF Downloads 441
404 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement
Authors: Chao Xu
Abstract:
Selecting ground motion intensity measures reasonably is one of the most important issues affecting the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize ground motion damage potential. The inelastic spectral displacement is calculated based on modal pushover analysis, and incremental dynamic analysis based on inelastic spectral displacement is developed. Probabilistic seismic demand analyses of a six-story and an eleven-story RC frame are carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis and compared with those of elastic spectral displacement. Vulnerability curves are developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact of frequency components with periods larger than the fundamental period on inelastic structural response. The damage potential of ground motion on structures whose fundamental period elongates due to structural softening can be captured by inelastic spectral displacement. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, which reduces the uncertainty of vulnerability analysis and the impact of input ground motion selection on the vulnerability analysis result.
Keywords: vulnerability, probabilistic seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis
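The difference between elastic and inelastic displacement demand can be shown in miniature by time-stepping an elastic-perfectly-plastic SDOF oscillator; the synthetic sine-pulse record, damping ratio, and yield force below are arbitrary assumptions (the paper obtains its inelastic properties from modal pushover analysis of the actual frames):

```python
import math

def sdof_peak_disp(T=1.0, zeta=0.05, fy=1.0, dt=1e-3, dur=10.0):
    """Peak displacement of a unit-mass elastic-perfectly-plastic SDOF
    oscillator under a 2 s, 1 Hz sine ground-acceleration pulse,
    integrated with a semi-implicit Euler scheme."""
    m = 1.0
    wn = 2.0 * math.pi / T
    k = m * wn * wn
    c = 2.0 * zeta * m * wn
    u = v = fs = 0.0
    peak = 0.0
    for i in range(int(dur / dt)):
        t = i * dt
        ag = math.sin(2.0 * math.pi * t) if t < 2.0 else 0.0
        a = (-m * ag - c * v - fs) / m
        v += a * dt
        du = v * dt
        u += du
        # restoring force clipped at the yield level (EPP hysteresis)
        fs = max(-fy, min(fy, fs + k * du))
        peak = max(peak, abs(u))
    return peak
```

Setting `fy` very large recovers the elastic spectral displacement for this record, so the same routine exposes the period-elongation effect the abstract refers to.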
Procedia PDF Downloads 354
403 The Effects of Time and Cyclic Loading to the Axial Capacity for Offshore Pile in Shallow Gas
Authors: Christian H. Girsang, M. Razi B. Mansoor, Noorizal N. Huang
Abstract:
An offshore platform was installed in 1977 about 260 km offshore West Malaysia at a water depth of 73.6 m. Twelve (12) piles were installed, of which four (4) are skirt piles. The piles have a 1.219 m outside diameter and a wall thickness of 31 mm and were driven to 109 m below the seabed. Deterministic analyses of the pile capacity under axial loading were conducted using the current API (American Petroleum Institute) method and four (4) CPT-based methods: the ICP (Imperial College Pile) method, the NGI (Norwegian Geotechnical Institute) method, the UWA (University of Western Australia) method and the Fugro method. A statistical analysis of the model uncertainty associated with each pile capacity method was performed. Two (2) cases were analysed: Pile 1 and the piles other than Pile 1, where Pile 1 is the pile most affected by shallow gas problems. Using the mean estimate of soil properties, the five (5) methods used for deterministic estimation of axial pile capacity in compression predict an axial capacity of 28 to 42 MN for Pile 1 and 32 to 49 MN for the other piles. These values refer to the static capacity shortly after pile installation; they do not include the effects of cyclic loading during the design storm or of time after installation on the axial pile capacity. On average, the axial pile capacity is expected to have increased by about 40% because of ageing since the installation of the platform in 1977. On the other hand, cyclic loading effects during the design storm may reduce the axial capacity of the piles by around 25%. The study concluded that all piles have a sufficient safety factor when pile ageing and cyclic loading effects are considered, as all safety factors are above 2.0 for maximum operating and storm loads.
Keywords: axial capacity, cyclic loading, pile ageing, shallow gas
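The two adjustments quoted in the abstract compose multiplicatively on the as-installed static capacity. A small sketch (the 14 MN storm load is a made-up figure for illustration; the study's actual design loads are not given in the abstract):

```python
def adjusted_capacity(static_mn, ageing_gain=0.40, cyclic_loss=0.25):
    """Apply the ~40% ageing increase and ~25% cyclic degradation
    to a freshly installed static axial capacity (in MN)."""
    return static_mn * (1.0 + ageing_gain) * (1.0 - cyclic_loss)

storm_load = 14.0  # MN, hypothetical design storm load
for q in (28.0, 42.0):  # deterministic prediction range for Pile 1
    cap = adjusted_capacity(q)
    print(f"static {q:.0f} MN -> adjusted {cap:.1f} MN, "
          f"SF = {cap / storm_load:.2f}")
```

Note the net effect of the two factors is a modest increase (1.40 × 0.75 = 1.05), which is why the ageing gain roughly offsets the cyclic loss in the safety-factor check.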
Procedia PDF Downloads 345
402 Energy Enterprise Information System for Strategic Decision-Making
Authors: Woosik Jang, Seung H. Han, Seung Won Baek, Chan Young Park
Abstract:
Natural gas (NG) is a local energy resource that exists only in certain countries, and most NG producers operate under unstable governments. Moreover, about 90% of the liquefied natural gas (LNG) market is governed by a small number of international oil companies (IOCs) and national oil companies (NOCs), so market entry for second movers is extremely limited. To overcome these barriers, project viability should be assessed on the basis of limited information from a project screening perspective. However, there have been difficulties at the early stages of projects, namely: (1) What factors should be considered? (2) How many experts are needed to make a decision? and (3) How can an optimal decision be made with limited information? To answer these questions, this research suggests an LNG project viability assessment model based on the Dempster-Shafer theory (DST). A total of 11 indices for the gas field analysis and 23 indices for the market environment analysis are identified, reflecting the unique characteristics of the LNG industry. Moreover, the proposed model evaluates LNG projects based on a questionnaire survey, and it provides not only quantified results but also the uncertainty level of those results based on DST. Consequently, the proposed model, as a systematic framework, can support the decision-making process for gas field projects using quantitative results, and it has been developed into a stand-alone system to enhance its practical usability. It is expected to improve the quality of, and opportunities for, informed decision-making in LNG projects for the enterprise.
Keywords: project viability, LNG project, enterprise information system, Dempster-Shafer theory, strategic decision-making
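The quantified-result-plus-uncertainty output rests on Dempster's rule of combination. A minimal sketch (the two expert mass assignments over a {viable, not viable} frame are invented for illustration; the paper aggregates survey responses over its 34 indices):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments whose focal elements are frozensets."""
    fused, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb  # mass on contradictory evidence
    # normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in fused.items()}, conflict

V = frozenset({"viable"})
N = frozenset({"not_viable"})
T = V | N  # total ignorance (the whole frame)

# Two hypothetical expert assessments of a single index:
m1 = {V: 0.6, T: 0.4}
m2 = {V: 0.5, N: 0.2, T: 0.3}
fused, conflict = combine(m1, m2)
```

The mass left on the whole frame `T` after combination is one natural readout of the "uncertainty level" the abstract mentions.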
Procedia PDF Downloads 258
401 Cognitive Model of Analogy Based on Operation of the Brain Cells: Glial, Axons and Neurons
Authors: Ozgu Hafizoglu
Abstract:
Analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with the attributional, deep structural, and causal relations that are essential to learning, to innovation in artificial worlds, and to discovery in science. The Cognitive Model of Analogy (CMA) leads and creates information-pattern transfer within and between domains and disciplines in science. This paper demonstrates the CMA as an evolutionary approach to scientific research. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions. In this paper, the model of analogical reasoning is created based on brain cells and their fractal and operational forms within the system itself. Visualization techniques are used to show correspondences. Distinct phases of the problem-solving process are divided as follows: encoding, mapping, inference, and response. The system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain cells (glial cells, axons, axon terminals, and neurons) relative to the matching conditions of analogical reasoning and relational information. It is found that the encoding, mapping, inference, and response processes in four-term analogical reasoning correspond to the fractal and operational forms of brain cells: glia, axons, and neurons.
Keywords: analogy, analogical reasoning, cognitive model, brain and glia
Procedia PDF Downloads 185
400 Modelling Conceptual Quantities Using Support Vector Machines
Authors: Ka C. Lam, Oluwafunmibi S. Idowu
Abstract:
Uncertainty in cost is a major factor affecting the performance of construction projects. Several conceptual cost models have been developed with varying degrees of accuracy, and incorporating conceptual quantities into conceptual cost models could improve the accuracy of early pre-design cost estimates. Hence, the aim of the current research is the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used to construct conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. The R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were found to have a major influence on the quantities of concrete and reinforcement used for foundations, while building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression
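The bootstrap resampling step can be sketched as follows, here wrapped around ordinary least squares on invented quantity data rather than the study's support vector regression (which needs a dedicated library); the resample-refit-collect-percentiles pattern is the same either way:

```python
import random

random.seed(1)

# Invented (gross floor loading, measured concrete quantity) pairs;
# purely illustrative, not data from the study.
x = [4, 5, 6, 7, 8, 9, 10, 11]
y = [52, 60, 71, 78, 90, 96, 108, 115]

def ols_slope(xs, ys):
    """Least-squares slope; returns None for a degenerate resample."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((xi - mx) ** 2 for xi in xs)
    if sxx == 0:
        return None
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
    return sxy / sxx

def bootstrap_slope_ci(n_boot=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the fitted slope:
    resample rows with replacement, refit, take empirical quantiles."""
    idx = list(range(len(x)))
    slopes = []
    while len(slopes) < n_boot:
        sample = [random.choice(idx) for _ in idx]
        b = ols_slope([x[i] for i in sample], [y[i] for i in sample])
        if b is not None:
            slopes.append(b)
    slopes.sort()
    lo = slopes[int(alpha / 2 * n_boot)]
    hi = slopes[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The interval width gives the planner a direct read on how much the quantity-per-loading relationship could move under resampling, which is the point of pairing the regressions with bootstrapping.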
Procedia PDF Downloads 206
399 Lane-Change Path Planning of Autonomous Driving Using Model-Based Optimization, Deep Reinforcement Learning and 5G Vehicle-to-Vehicle Communications
Authors: William Li
Abstract:
Lane-change path planning is a crucial yet complex task in autonomous driving. The traditional path planning approach, based on a system of carefully crafted rules to cover various driving scenarios, becomes unwieldy as more and more rules are added to deal with exceptions and corner cases. This paper proposes to divide the entire path planning into two stages. In the first stage, the ego vehicle travels longitudinally in the source lane to reach a safe state. In the second stage, the ego vehicle makes a lateral lane-change maneuver into the target lane. The paper derives the safe-state conditions based on a lateral lane-change maneuver calculation to ensure a collision-free second stage. To determine the acceleration sequence that minimizes the time to reach a safe state in the first stage, the paper proposes three schemes, namely kinetic-model-based optimization, deep reinforcement learning, and 5G vehicle-to-vehicle (V2V) communications, and investigates them via simulation. The model-based optimization is sensitive to the model assumptions. Deep reinforcement learning is more flexible in handling scenarios beyond the assumptions of the optimization model. 5G V2V eliminates uncertainty in predicting the future behavior of surrounding vehicles by sharing driving intents and enabling cooperative driving.
Keywords: lane change, path planning, autonomous driving, deep reinforcement learning, 5G, V2V communications, connected vehicles
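A safe-state check of the kind used in the first stage can be sketched with worst-case longitudinal kinematics. This is an RSS-style formulation with hypothetical reaction time and braking rates, not the paper's own derivation (which is based on the lateral maneuver calculation):

```python
def min_safe_gap(v_ego, v_lead, rho=0.5, a_ego_brake=4.0, a_lead_brake=6.0):
    """Worst case: the lead vehicle brakes hard to a stop while the ego
    reacts after rho seconds and then brakes at its own (weaker) rate.
    All quantities in SI units; the parameters are illustrative."""
    d_ego = v_ego * rho + v_ego ** 2 / (2.0 * a_ego_brake)
    d_lead = v_lead ** 2 / (2.0 * a_lead_brake)
    return max(d_ego - d_lead, 0.0)

def in_safe_state(gap, v_ego, v_lead, margin=2.0):
    """True when the current gap covers the worst case plus a margin."""
    return gap >= min_safe_gap(v_ego, v_lead) + margin
```

In the two-stage scheme, the first-stage acceleration sequence would be chosen to make `in_safe_state` true as quickly as possible before the lateral maneuver begins.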
Procedia PDF Downloads 252
398 There's No End in Sight: An Interpretative Phenomenological Analysis of Quality of Life in Burning Mouth Syndrome Sufferers
Authors: R. McGrath, A. Trace, S. Curtin, C. McCreary
Abstract:
Introduction: Although much energy has been expended on the definition and etiology of Burning Mouth Syndrome (BMS), it remains a contentious issue. There is agreement on the symptoms but on little else, and approaches to treatment vary widely. However, it has been established that the condition has a detrimental effect on sufferers' quality of life. Much research focus has been placed on the physical impact of the syndrome; recently, some literature has turned to social, functional, and psychological factors. However, there is very little qualitative research on how Burning Mouth Syndrome affects the lives of sufferers, and the present study seeks to remedy this. Method: The study recruited five male participants who took part in semi-structured interviews lasting between 30 and 50 minutes. Data were analysed using Interpretative Phenomenological Analysis. Results: The study identified four super-ordinate themes: Lack of Control due to Uncertainty about the Condition; Disruption to Internal Sense of Self; Negative Future Expectation due to Chronic Symptoms; and Sense of BMS as an Intrusive Force. Aspects of these themes reflect areas of reduced quality of life. Conclusion: BMS damages an individual's quality of life in ways that have not been reflected in self-report surveys of health-related quality of life. The condition has serious implications for the individual's sense of self, identity, and future. The study recommends that further qualitative research be carried out in this area, along with the use of therapeutic interventions with BMS sufferers, which would help not only sufferers but also best practice in relation to their treatment.
Keywords: burning mouth syndrome, interpretative phenomenological analysis, qualitative research, quality of life
Procedia PDF Downloads 441
397 A TgCNN-Based Surrogate Model for Subsurface Oil-Water Phase Flow under Multi-Well Conditions
Authors: Jian Li
Abstract:
The uncertainty quantification and inversion problems of subsurface oil-water phase flow usually require extensive repeated forward calculations for new runs with changed conditions. To reduce the computational time, various forms of surrogate models have been built. Related research shows that deep learning has emerged as an effective surrogate-modelling approach, yet most deep learning surrogate models are purely data-driven, which often leads to poor robustness and abnormal results. To make the model more consistent with physical laws, a coupled theory-guided convolutional neural network (TgCNN) based surrogate model is built to improve computational efficiency while maintaining satisfactory accuracy. The model is a convolutional neural network based on multi-well reservoir simulation. The core notion of the proposed method is to bridge two separate blocks on top of an overall network; they underlie the TgCNN model in a coupled form, which reflects the coupled nature of pressure and water saturation in the two-phase flow equations. The model is driven not only by labeled data but also by scientific theory, including the governing equations, stochastic parameterization, boundary and initial conditions, well conditions, and expert knowledge. The results show that the TgCNN-based surrogate model exhibits satisfactory accuracy and efficiency for subsurface oil-water phase flow under multi-well conditions.
Keywords: coupled theory-guided convolutional neural network, multi-well conditions, surrogate model, subsurface oil-water phase flow
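The theory-guided idea of combining a data misfit with a governing-equation residual can be shown in miniature. Here a 1D steady single-phase pressure equation (d²p/dx² = 0) stands in for the paper's coupled two-phase system, and the weighting λ is an arbitrary assumption:

```python
def pde_residual(p, dx):
    """Interior finite-difference residual of the toy steady pressure
    equation d2p/dx2 = 0 (a single-phase stand-in for the coupled
    two-phase governing equations)."""
    return [(p[i - 1] - 2.0 * p[i] + p[i + 1]) / dx ** 2
            for i in range(1, len(p) - 1)]

def theory_guided_loss(pred, data, dx, lam=0.1):
    """Labeled-data misfit plus lam-weighted physics residual: the kind
    of composite objective a theory-guided network is trained on."""
    data_term = sum((a - b) ** 2 for a, b in zip(pred, data)) / len(pred)
    phys_term = sum(r * r for r in pde_residual(pred, dx)) / (len(pred) - 2)
    return data_term + lam * phys_term
```

A prediction that matches the data but violates the discrete physics is penalized through the second term, which is what steers a theory-guided surrogate away from the "abnormal results" of purely data-driven training.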
Procedia PDF Downloads 86
396 Establishment of a Novel Approach for Chemical Oxygen Demand Concentration Measurement Based on a Mach-Zehnder Interferometer Sensor
Authors: Su Sin Chong, Abdul Aziz Abdul Raman, Sulaiman Wadi Harun, Hamzah Arof
Abstract:
Chemical Oxygen Demand (COD) plays a vital role in determining an appropriate strategy for wastewater treatment, including control of effluent quality. In this study, a new sensing method was introduced for the first time and developed to measure chemical oxygen demand (COD) using a Mach-Zehnder Interferometer (MZI)-based dye sensor. The sensor is constructed by bridging two single-mode fibres (SMF1 and SMF2) with a short section (~20 mm) of multimode fibre (MMF), formed by tapering the MMF to generate an evanescent field that is sensitive to perturbation of the sensing medium. An increase in COD concentration induces changes in the output intensity and in the effective refractive index between the microfiber and the sensing medium. Since the adequacy of decisions based on COD values relies on the quality of the measurements, this dual output response can be applied in the analytical procedure to enhance measurement quality. This work presents a detailed assessment of the determination of COD values in synthetic wastewaters. Detailed models of measurement performance, including sensitivity, reversibility, stability, and uncertainty, were successfully validated by proficiency tests supported by sound and objective criteria. A comparison of the standard method with the newly proposed method was also conducted. The proposed sensor is compact, reliable, and feasible for investigating COD values.
Keywords: chemical oxygen demand, environmental sensing, Mach-Zehnder interferometer sensor, online monitoring
Procedia PDF Downloads 494
395 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for numerical simulation. Polynomial and neural-network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without uncertainty in instruments and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. The Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared for the minimization problem. Although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis
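A particle swarm search of the kind that performed best in the study can be sketched in a few lines; here it minimizes a toy least-squares misfit with known true parameters standing in for the Plaxis-based displacement objective (all swarm settings are common textbook defaults, not the study's):

```python
import random

random.seed(0)

def misfit(params):
    """Toy least-squares objective: 'measured' displacements are
    generated with true parameters (2.0, -1.0) by construction."""
    a, b = params
    pts = (0.5, 1.0, 1.5)
    measured = [2.0 * t - 1.0 * t ** 2 for t in pts]
    modelled = [a * t + b * t ** 2 for t in pts]
    return sum((m - s) ** 2 for m, s in zip(measured, modelled))

def pso(obj, bounds, n=20, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimization."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the actual workflow the objective evaluation would be a response-surface (or Plaxis) prediction of displacements compared against the Hydro-Québec measurements; the swarm logic is unchanged.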
Procedia PDF Downloads 146
394 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration-curve response functions of various measurement methods, using a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve is generated for each method by performing accuracy, precision, and sensitivity analyses. netOD is obtained by measuring the change in the optical density (OD) of the film between before and after irradiation: when using a film scanner, ImageJ is used to extract the pixel value of the film on the red channel of the three (RGB) channels; when using a point densitometer, the change in OD before and after irradiation is calculated; and when using a spectrophotometer, the change in absorbance before and after irradiation is calculated. The results showed that the three calibration methods gave readings with a netOD precision below 3% for doses at an uncertainty level of 1σ (one sigma). While the sensitivities of all three methods follow the same trend in responding to film readings against radiation, they differ in magnitude. The accuracy of the three methods is below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy it was found to be above 3% when using the point densitometer and the spectrophotometer. When all three methods are used for clinical implementation, the results of the study show accuracy and precision below 2% for the scanner and the spectrophotometer and above 3% for the point densitometer.
Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer
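The netOD quantity shared by all three readout methods reduces to a base-10 log ratio of the before/after readings (pixel value, transmitted intensity, or transmittance). A minimal sketch with made-up scanner pixel values:

```python
import math

def net_od(reading_before, reading_after, background=0.0):
    """netOD = log10(I_before / I_after), with an optional background
    (zero-light) reading subtracted from both measurements."""
    return math.log10((reading_before - background)
                      / (reading_after - background))

# Hypothetical red-channel pixel values from a 16-bit flatbed scan:
print(net_od(40000, 20000))  # irradiated film at half transmission
```

For a spectrophotometer reporting absorbance directly, the equivalent computation is simply the absorbance difference, since absorbance is already a log10 quantity.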
Procedia PDF Downloads 135
393 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System
Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky
Abstract:
Over-pumping of groundwater resources is a serious problem worldwide. In addition to depleting this valuable resource, hydraulically connected and ecologically sensitive resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater overdraft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis of the optimal solutions was performed on selected response coefficients to identify differences between wet and dry periods. Stochastic optimization was also performed, in which the uncertainty associated with changing irrigation demand due to changing weather conditions is accounted for. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and the adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.
Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion
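Response coefficients make the management model linear: the predicted impact at each protected location is a dot product of per-well coefficients with the pumping vector. A toy sketch (all coefficients and limits are invented) that scales a uniform per-well rate up to the binding constraint instead of solving the full linear program:

```python
# Hypothetical linear response coefficients: impact at each control
# location per unit pumping rate at each of three wells.
R = [
    [0.020, 0.010, 0.005],  # wetland drawdown per unit pumping
    [0.010, 0.030, 0.010],  # streamflow depletion per unit pumping
]
limits = [1.5, 2.0]  # protection limits at the two control locations

def impacts(q):
    """Predicted impact at each control location for pumping vector q."""
    return [sum(r * qi for r, qi in zip(row, q)) for row in R]

def max_uniform_rate(R, limits):
    """Largest uniform per-well rate that violates no constraint: a toy
    stand-in for the LP solved by the formal management model."""
    return min(lim / sum(row) for row, lim in zip(R, limits))
```

In the full formulation each well gets its own decision variable and the objective maximizes total pumping, but the superposition assumption behind the response coefficients is exactly what this sketch exercises.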
Procedia PDF Downloads 233
392 An Interactive Voice Response Storytelling Model for Learning Entrepreneurial Mindsets in Media Dark Zones
Authors: Vineesh Amin, Ananya Agrawal
Abstract:
In a prolonged period of uncertainty and disruption of the previously normal order, non-cognitive skills, especially entrepreneurial mindsets, have become a pillar that can reform educational models to inform the economy. Dreamverse Learning Lab's IVR-based storytelling program, Call-a-Kahaani, is an evolving experiment with the aim of kindling entrepreneurial mindsets in the remotest locations of India in an accessible and engaging manner. At the heart of this experiment is the belief that at every phase in our life's story, we have a choice that brings us closer to achieving our true potential. This interactive program is designed using real-time storytelling principles to empower learners aged 24 and below to make choices and take decisions as they become more self-aware, practice grit, and try new things through stories, guided activities, and interactions, simply over a phone call. This research paper highlights the framework behind an ongoing scalable, data-oriented, low-tech program to kindle entrepreneurial mindsets in media dark zones, supported by iterative design and prototyping. Within one and a half years of its inception, the program has reached 13,700+ unique learners who made 59,000+ calls totalling 183,900+ minutes of listening to content pieces of around 3 to 4 minutes, with the last monitored (March 2022) record of 34% serious listenership. The paper provides an in-depth account of the technical development, content creation, learning, and assessment frameworks, as well as the mobilization models that have been leveraged to build this end-to-end system.
Keywords: non-cognitive skills, entrepreneurial mindsets, speech interface, remote learning, storytelling
Procedia PDF Downloads 210
391 The Role of Strategic Alliances, Innovation Capability, Cost Reduction in Enhancing Customer Loyalty and Firm's Competitive Advantage
Authors: Soebowo Musa
Abstract:
Mining industries are known to be very volatile due to their sensitivity to changes in the environment, particularly coal mining. Heavy equipment distributors and coal mining contractors are among those most heavily affected by such volatility and face increasing uncertainty about the sustainability of the coal mining industry. Strategic alliances and organizational capabilities such as innovation capability have long been seen as ways to stay competitive, with the focus mostly on partner-to-partner strategic alliances in serving customers. Given today's rapidly changing environment, shifting consumer behaviors, and the human-centric business approach, this study examines the partner-to-customer strategic alliance relationship from the perspectives of both industrial organization theory and the resource-based view. The study was conducted with 250 respondents from partner-to-customer strategic alliances between heavy equipment distributors and coal mining contractors in Indonesia. It finds that strategic alliances have the strongest association with cost reduction, a proxy for operational efficiency, followed by their association with innovation capability. Further, strategic alliances and innovation capability have a positive relationship with customer loyalty, while innovation capability and customer loyalty have no significant relationship with the firm's competitive advantage. The study also indicates that cost reduction is not a condition for developing customer loyalty in the partner-to-customer strategic alliance relationship. It confirms that strategic alliances are a strategy that creates a firm's operational efficiency and an innovation capability that develops customer loyalty and competitive advantage.
Keywords: strategic alliance, innovation capability, cost reduction, customer loyalty, competitive advantage
Procedia PDF Downloads 119
390 The Effect of Socio-Affective Variables in the Relationship between Organizational Trust and Employee Turnover Intention
Authors: Paula A. Cruise, Carvell McLeary
Abstract:
Employee turnover leads to lowered productivity, decreased morale and work quality, and the psychological effects associated with employee separation and replacement. Yet it remains unknown why talented employees willingly withdraw from organizations. This uncertainty is worsened as studies: a) prioritize organizational over individual predictors, resulting in restriction of range in turnover measurement; b) focus on actual rather than intended turnover, thereby limiting conceptual understanding of the turnover construct and its relationship with other variables; and c) produce inconsistent findings across cultures, contexts and industries despite a clear need for a unified perspective. The current study addressed these gaps by adopting the theory of planned behavior (TPB) framework to examine socio-cognitive factors in organizational trust and individual turnover intentions among banking and energy employees in Jamaica. In a comparative study of n = 369 [n(bank) = 264, male = 57 (22.73%); n(energy) = 105, male = 45 (42.86%)], it was hypothesized that organizational trust was a predictor of employee turnover intention and that the effect of individual, group, cognitive and socio-affective variables varied across industries. Findings from structural equation modelling confirmed the hypothesis, with a model including both cognitive and socio-affective variables being a better fit [CMIN (χ2) = 800.067, df = 364, p ≤ .000; CFI = 0.950; RMSEA = 0.057 with 90% C.I. (0.052 - 0.062); PCLOSE = 0.016; PNFI = 0.818] in predicting turnover intention. The findings are discussed in relation to the socio-cognitive components of trust models and the prediction of negative employee behaviors across cultures and industries.
Keywords: context-specific organizational trust, cross-cultural psychology, theory of planned behavior, employee turnover intention
Procedia PDF Downloads 248389 Supply Chain Optimisation through Geographical Network Modeling
Authors: Cyrillus Prabandana
Abstract:
Supply chain optimisation requires multiple factors as considerations or constraints. These factors include, but are not limited to, demand forecasting, raw material fulfilment, production capacity, inventory level, facilities locations, transportation means, and manpower availability. By knowing all manageable factors involved and assuming the uncertainty with pre-defined percentage factors, an integrated supply chain model can be developed to manage various business scenarios. This paper analyses the utilisation of a geographical point of view to develop an integrated supply chain network model to optimise the distribution of finished product appropriately according to forecasted demand and available supply. The supply chain optimisation model shows that a small change in one supply chain constraint can have a large impact on other constraints, and the new information from the model should be able to support the decision making process. The model was focused on three areas, i.e. raw material fulfilment, production capacity and finished products transportation. To validate the model's suitability, it was implemented in a project aimed at optimising the concrete supply chain in a mining location. The high level of operations complexity and the involvement of multiple stakeholders in the concrete supply chain are believed to be sufficient to give an illustration of the larger scope. The implementation of this geographical supply chain network modeling resulted in an optimised concrete supply chain from raw material fulfilment to finished products distribution to each customer, as indicated by a lower percentage of missed concrete order fulfilment to customers.Keywords: decision making, geographical supply chain modeling, supply chain optimisation, supply chain
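As a toy illustration of the geographical point of view described above, and not the paper's actual optimisation model, a greedy nearest-plant allocation assigns each demand point to the closest facility with remaining capacity. All names, coordinates, and volumes below are hypothetical:

```python
import math

def allocate(demands, plants):
    """Greedy nearest-plant allocation: each demand point is served by the
    closest plant that still has enough capacity (a simple geographical
    heuristic, not a cost-optimal solution)."""
    remaining = {name: cap for name, (_, _, cap) in plants.items()}
    plan = {}
    for d_name, (dx, dy, qty) in demands.items():
        feasible = [(math.hypot(dx - px, dy - py), p)
                    for p, (px, py, _) in plants.items() if remaining[p] >= qty]
        dist, best = min(feasible)  # nearest feasible plant
        remaining[best] -= qty
        plan[d_name] = best
    return plan

# Hypothetical coordinates (km) and concrete volumes (m^3):
plants = {"batch_plant_A": (0.0, 0.0, 120.0), "batch_plant_B": (40.0, 10.0, 80.0)}
demands = {"pit_1": (5.0, 2.0, 60.0), "pit_2": (38.0, 8.0, 70.0)}
print(allocate(demands, plants))  # -> {'pit_1': 'batch_plant_A', 'pit_2': 'batch_plant_B'}
```

A full model would replace this heuristic with a cost-minimising formulation (e.g. a transportation linear program) that also covers the other constraints the abstract lists, such as inventory levels and manpower.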
Procedia PDF Downloads 346388 A Bayesian Parameter Identification Method for Thermorheological Complex Materials
Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider
Abstract:
Polymers have increasingly gained interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a single TTSP; thus a methodology for identifying the thermorheologically complex constitutive behavior is needed. 'Dynamical-Mechanical-Thermal-Analysis' (DMTA) tests in tensile and shear mode as well as 'Differential Scanning Calorimetry' (DSC) tests are carried out on the interlayer material 'Ethylene-vinyl acetate' (EVA). A novel Bayesian framework for the master curve construction process as well as the detection and parameter identification of the TTSPs along with their associated Prony series is derived and applied to the EVA material data. To the best of our knowledge, this is the first time that an uncertainty quantification of the Prony series in a Bayesian context is shown. Within this paper, we could successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs. Uncertainties occurring in this process can be well quantified. We found that EVA needs two TTSPs with two associated generalized Maxwell models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.Keywords: bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex
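The Prony series of a generalized Maxwell model, the target of the parameter identification above, expresses the relaxation modulus as E(t) = E∞ + Σᵢ Eᵢ·exp(−t/τᵢ). A minimal sketch with hypothetical coefficients (not the fitted EVA values, which the abstract does not report):

```python
import math

def relaxation_modulus(t, e_inf, terms):
    """Prony-series relaxation modulus of a generalized Maxwell model:
    E(t) = e_inf + sum_i E_i * exp(-t / tau_i)."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in terms)

# Hypothetical illustrative Prony pairs (E_i in MPa, tau_i in s):
prony = [(2.0, 0.1), (1.0, 10.0)]
e_inf = 0.5  # long-term (equilibrium) modulus, MPa

print(relaxation_modulus(0.0, e_inf, prony))  # instantaneous modulus -> 3.5
print(relaxation_modulus(1e6, e_inf, prony))  # long-time limit -> 0.5
```

In the Bayesian setting described above, the pairs (Eᵢ, τᵢ) would carry posterior distributions rather than point values, which is what allows the uncertainty of the master curve to be quantified.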
Procedia PDF Downloads 263387 Taleb's Complexity Theory Concept of 'Antifragility' Has a Significant Contribution to Make to Positive Psychology as Applied to Wellbeing
Authors: Claudius Peter Van Wyk
Abstract:
Given the increasingly manifest phenomena, as described in complexity theory, of volatility, uncertainty, complexity and ambiguity (VUCA), Taleb's notion of 'antifragility' has a significant contribution to make to positive psychology as applied to wellbeing. Antifragility is argued to be fundamentally different from the concepts of resilience, the ability to recover from failure, and robustness, the ability to resist failure. Rather, it describes the capacity to reorganise in the face of stress in such a way as to cope more effectively with systemic challenges. The concept, which has been applied in disciplines ranging from physics, molecular biology, planning, engineering, and computer science, can now be considered for its application to individual human and social wellbeing. There are strong correlations to Antonovsky's model of 'salutogenesis', in which an attitude and competencies are developed for transforming burdening factors into greater resourcefulness. We demonstrate, from the perspective of neuroscience, how technology measuring nervous system coherence can be coupled with acquired psychodynamic approaches to not only identify contextual stressors and utilise biofeedback instruments for facilitating greater coherence, but also apply these insights to specific life stressors that compromise wellbeing. Employing an ongoing case study with BMW South Africa, the neurological mapping is demonstrated together with 'reframing' and emotional anchoring techniques from neurolinguistic programming. The argument is contextualised in the discipline of psychoneuroimmunology, which describes the stress pathways from the CNS and endocrine systems and their impact on immune function and the capacity to restore homeostasis.Keywords: antifragility, complexity, neuroscience, psychoneuroimmunology, salutogenesis, volatility
Procedia PDF Downloads 376386 Carbon Dioxide (CO₂) and Methane (CH₄) Fluxes from Irrigated Wheat in a Subtropical Floodplain Soil Increased by Reduced Tillage, Residue Retention, and Nitrogen Application Rate
Authors: R. Begum, M. M. R. Jahangir, M. Jahiruddin, M. R. Islam, M. M. Rahman, M. B. Hossain, P. Hossain
Abstract:
Quantifying carbon (C) sequestration in soils is necessary to help better understand the effect of agricultural practices on the C cycle. The estimated contribution of agricultural carbon dioxide (CO₂) and methane (CH₄) to global warming potential (GWP) has a wide range. The underlying cause of this huge uncertainty is the difficulty of predicting regional CO₂ and CH₄ losses, due to the lack of experimental evidence on CO₂ and CH₄ emissions and their associated drivers. The CH₄ and CO₂ emissions were measured in irrigated wheat in subtropical floodplain soils which had been under two soil disturbance levels (strip vs. conventional tillage; ST vs. CT, both with 30% residue retention) and three N fertilizer rates (60, 100, and 140% of the recommended N fertilizer dose, RD) in an annual wheat (Triticum aestivum)-mungbean (Vigna radiata)-rice (Oryza sativa L.) rotation for seven consecutive years. The highest CH₄ and CO₂ emission peaks were observed on day 3 after urea application under both tillage systems, except for the CO₂ flux under CT. Nitrogen fertilizer application rate significantly influenced mean and cumulative CH₄ and CO₂ fluxes. The CH₄ and CO₂ fluxes decreased at the optimum dose of N fertilizer, except for CH₄ under ST. CO₂ emissions were significantly higher at the minimum (60% of RD) fertilizer rate under both tillage systems. Soil microbial biomass carbon (MBC), organic carbon (SOC), particulate organic carbon (POC), permanganate-oxidisable carbon (POXC), and basal respiration (BR) were significantly higher under ST and were negatively and significantly correlated with CO₂. However, POC and POXC were positively and significantly correlated with CH₄ emission.Keywords: carbon dioxide emissions, methane emission, nitrogen rate, tillage
Procedia PDF Downloads 116385 'Antibody Exception' under Dispute and Waning Usage: Potential Influence on Patenting Antibodies
Authors: Xiangjun Kong, Dongning Yao, Yuanjia Hu
Abstract:
Therapeutic antibodies have become the most valuable and successful class of biopharmaceutical drugs, with a huge market potential and therapeutic advantages. Antibody patents are, accordingly, extremely important. Owing to the technological limitations of this field's early stage, the U.S. Patent and Trademark Office (USPTO) issued guidelines that suggest an exception for patents claiming a genus of antibodies that bind to a novel antigen, even in the absence of any experimental antibody production. This 'antibody exception' allowed for a broad scope on antibody claims, and led to a global trend of patenting antibodies without antibodies. Disputes around the pertinent patentability and written description issues remain particularly intense. Yet the validity of such patents had not been overtly challenged until Centocor v. Abbott, which restricted the broad scope of antibody patents and hit the brakes on the 'antibody exception'. The courts tend to uphold the requirement for adequate description of antibodies in patent specifications, to avoid overreaching antibody claims. Patents following the 'antibody exception' are at risk of being found invalid for inadequately describing what they have claimed. However, the relation between the courts and the USPTO guidelines remains obscure, and the waning of the 'antibody exception' has led to further disputes around antibody patents. This uncertainty clearly affects patent applications, antibody innovations, and even relevant business performance. This study will give an overview of the emergence, debate, and waning usage of the 'antibody exception' in a number of enlightening cases, attempting to understand the specific concerns and the potential influence on antibody patents. We will then provide some possible strategies for antibody patenting under the current considerations on the 'antibody exception'.Keywords: antibody exception, antibody patent, USPTO (U.S. Patent and Trademark Office) guidelines, written description requirement
Procedia PDF Downloads 159384 The application of Gel Dosimeters and Comparison with other Dosimeters in Radiotherapy: A Literature Review
Authors: Sujan Mahamud
Abstract:
Purpose: A major challenge in radiotherapy treatment is to deliver a precise dose of radiation to the tumor with minimum dose to the healthy normal tissues. Recently, gel dosimetry has emerged as a powerful tool to measure three-dimensional (3D) dose distributions for complex delivery verification and quality assurance. These dosimeters act both as a phantom and a detector, thus confirming the versatility of the dosimetry technique. The aim of the study is to review the application of gel dosimeters in radiotherapy and to compare them with one- and two-dimensional dosimeters. Methods and Materials: The study is based on the gel dosimetry literature. Secondary data and images have been collected from different sources such as guidelines, books, and the internet. Result: Analysis, verification, and comparison of data from the treatment planning system (TPS) show that the gel dosimeter is an excellent and powerful tool for measuring three-dimensional (3D) dose distributions. The TPS-calculated data were in very good agreement with the dose distribution measured by the ferrous gel. The overall uncertainty in the ferrous-gel dose determination was considerably reduced using an optimized MRI acquisition protocol and a new MRI scanner. The method developed for comparing measured gel data with calculated treatment plans, the gel dosimetry method, proved to be useful for radiation treatment planning verification. For 1D and 2D film, the RMSD values for the depth dose and lateral profiles are 1.8% and 2%, and the max(Di-Dj) values are 2.5% and 8%. For the 2D+ (3D) film gel and plan gel, RMSDstruct and RMSDstoch are 2.3% & 3.6% and 1% & 1%, and the systematic deviations are -0.6% and 2.5%. The study found that the 2D+ (3D) film dosimeter performs better than the 1D and 2D dosimeters. Discussion: Gel dosimeters are a quality control and quality assurance tool which will find future clinical application.Keywords: gel dosimeters, phantom, rmsd, QC, detector
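The RMSD figures quoted above compare a measured dose profile against the TPS calculation. A minimal sketch of one common convention, RMSD normalised to the maximum calculated dose, which is an assumption since the reviewed papers may normalise differently, with hypothetical dose profiles:

```python
def rmsd_percent(measured, calculated):
    """Root-mean-square deviation between two dose profiles,
    expressed as a percentage of the maximum calculated dose."""
    n = len(measured)
    mean_sq = sum((m - c) ** 2 for m, c in zip(measured, calculated)) / n
    return 100.0 * mean_sq ** 0.5 / max(calculated)

# Hypothetical depth-dose samples in Gy (not data from the reviewed studies):
gel = [1.98, 1.95, 1.60, 1.02, 0.51]
tps = [2.00, 1.94, 1.62, 1.00, 0.50]
print(round(rmsd_percent(gel, tps), 2))  # -> 0.84
```

A sub-percent RMSD of this kind would indicate the close gel-to-TPS agreement the review describes.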
Procedia PDF Downloads 152383 Modeling of Sediment Yield and Streamflow of Watershed Basin in the Philippines Using the Soil Water Assessment Tool Model for Watershed Sustainability
Authors: Warda L. Panondi, Norihiro Izumi
Abstract:
Sedimentation is a significant threat to the sustainability of reservoirs and their watersheds. In the Philippines, the Pulangi watershed has experienced high sediment loss, mainly due to land conversions and plantations, with critical erosion rates beyond the tolerable limit of 10 ton/ha/yr in all of its sub-basins. Given this situation, the prediction of runoff volume and sediment yield is essential for realistically evaluating the country's soil conservation techniques. In this research, the Pulangi watershed was modeled using the soil water assessment tool (SWAT) to predict the watershed basin's annual runoff and sediment yield. For the calibration and validation of the model, SWAT-CUP was utilized. The model was calibrated with monthly discharge data for 1990-1993 and validated for 1994-1997. The sediment yield was calibrated in 2014 and validated in 2015 because of the limited observed datasets. Uncertainty analysis and calculation of efficiency indexes were accomplished through the SUFI-2 algorithm. According to the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE), and PBIAS, the streamflow simulation indicates a good performance for both calibration and validation periods, while the sediment yield shows a satisfactory performance for both calibration and validation. This study was thereby able to identify the most critical sub-basin and its severe need for soil conservation. Furthermore, this study will provide baseline information to prevent floods and landslides and serve as a useful reference for land-use policies and watershed management and sustainability in the Pulangi watershed.Keywords: Pulangi watershed, sediment yield, streamflow, SWAT model
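Of the efficiency indexes listed, the Nash-Sutcliffe efficiency is the most widely reported for SWAT streamflow calibration. A minimal sketch with hypothetical monthly discharges (not the Pulangi records):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; values above ~0.5 are commonly rated satisfactory."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Hypothetical monthly discharges in m^3/s (illustrative only):
obs = [12.0, 30.0, 55.0, 41.0, 18.0]
sim = [14.0, 28.0, 50.0, 43.0, 20.0]
print(round(nse(obs, sim), 3))  # -> 0.966
```

R2, KGE, and PBIAS are computed analogously from the same observed/simulated pairs; SUFI-2 then wraps such metrics in an uncertainty band rather than a single point comparison.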
Procedia PDF Downloads 210382 The Domino Principle of Dobbs v Jackson Women’s Health Organization: The Gays Are Next!
Authors: Alan Berman, Mark Brady
Abstract:
The phenomenon of homophobia and transphobia in the United States detrimentally impacts the health, wellbeing, and dignity of school students who identify with the LGBTQ+ community. These negative impacts also compromise the participation of LGBTQ+ individuals in the wider life of educational domains and endanger the potential economic, social and cultural contribution this community can make to American society. The recent 6:3 majority decision of the US Supreme Court in Dobbs v Jackson Women's Health Organization expressly overruled the 1973 decision in Roe v Wade and the 1992 Planned Parenthood v Casey decision. This study will canvass the bases upon which the court in Dobbs overruled the longstanding precedent established in Roe and Casey. It will examine the potential implications of the result in Dobbs for the LGBTQ+ community. The potential far-reaching consequences of this case are foreshadowed in a concurring opinion by Justice Clarence Thomas, suggesting the Court should revisit all substantive due process cases, notably including the Lawrence v Texas case (invalidating sodomy laws criminalizing same-sex relations) and the Obergefell case (upholding same-sex marriage). Finally, the study will examine the likely impact of the uncertainty brought about by the decision in Dobbs for LGBTQ+ students in US educational institutions. The actions of several states post-Dobbs reflect and exacerbate the problems facing LGBTQ+ students and uncover and highlight societal homophobia and transphobia.Keywords: human rights, LGBT rights, right to personal dignity and autonomy, substantive due process rights
Procedia PDF Downloads 104381 Comprehensive Validation of High-Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) for Quantitative Assessment of Caffeic Acid in Phenolic Extracts from Olive Mill Wastewater
Authors: Layla El Gaini, Majdouline Belaqziz, Meriem Outaki, Mariam Minhaj
Abstract:
In this study, we introduce and validate a high-performance liquid chromatography method with diode-array detection (HPLC-DAD) specifically designed for the accurate quantification of caffeic acid in phenolic extracts obtained from olive mill wastewater. The separation of caffeic acid was effectively achieved using an Acclaim Polar Advantage column (5 µm, 250 x 4.6 mm). A meticulous multi-step gradient mobile phase was employed, comprising water acidified with phosphoric acid (pH 2.3) and acetonitrile, to ensure optimal separation. The diode-array detection was conducted within the UV–VIS spectrum, spanning a range of 200–800 nm, which facilitated precise analytical results. The method underwent comprehensive validation, addressing several essential analytical parameters, including specificity, repeatability, and linearity, as well as the limits of detection and quantification, alongside measurement uncertainty. The generated linear standard curves displayed high correlation coefficients, underscoring the method's efficacy and consistency. This validated approach is not only robust but also demonstrates exceptional reliability for the focused analysis of caffeic acid within the intricate matrices of wastewater, thus offering significant potential for applications in environmental and analytical chemistry.Keywords: high-performance liquid chromatography (HPLC-DAD), caffeic acid analysis, olive mill wastewater phenolics, analytical method validation
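Linearity of the standard curves mentioned above is typically assessed with an ordinary least-squares fit of peak area against concentration and its correlation coefficient. A minimal sketch with hypothetical caffeic acid standards (the study's actual calibration levels and responses are not given in the abstract):

```python
def linearity(conc, area):
    """Ordinary least-squares calibration line (peak area vs. concentration)
    and Pearson correlation coefficient r."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(area) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in area)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical standards: concentration in mg/L, peak area in mAU*s
conc = [5.0, 10.0, 20.0, 40.0, 80.0]
area = [51.0, 99.0, 202.0, 405.0, 798.0]
slope, intercept, r = linearity(conc, area)
print(f"slope={slope:.3f}, r={r:.5f}")
```

A correlation coefficient very close to 1 over the working range, together with acceptable residuals, is what supports the "high correlation coefficients" claim in a validation report.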
Procedia PDF Downloads 70380 Agrarian Distress and out Migration of Youths: Study of a Wet Land Village in Hirakud Command Area, Odisha
Authors: Kishor K. Podh
Abstract:
Agriculture in India is treated as the backbone of its economy, providing a livelihood base, directly or indirectly, to more than 60 percent of the population. Despite this significant role, sharp declines in public investment and development in agriculture have been witnessed. After independence, the Hirakud Command Area (HCA) became popularly known as the Rice Bowl of the State, due to its fabulous production, providing food to a large part of the state. After the Green Revolution and then liberalization, agrarian families became overburdened with loans. They started working as wage labourers in others' fields and in non-farm sectors to escape this uninvited indebtedness. Although production has increased, at present the youths of this area still migrate to Tamil Nadu, Andhra Pradesh, Maharashtra, Gujarat, etc. for jobs, because agriculture no longer remains a profitable occupation; increasing input costs, the uncertainty of crops, improper pricing, poor marketing, etc. compel the youths to choose alternative occupations. They work in industries (under contractors), as construction workers and in other menial jobs due to lack of skills and degrees. Kharmunda, a village within the HCA, was selected as per convenience, and 100 purposively selected youth migrants who were present during data collection were interviewed. The study analyses the types of migration, their similarities/differences, and their determining factors in two geographical areas of Western Odisha, i.e., single-crop and double-crop, in relation to agricultural situations.Keywords: agrarian distress, double crops, Hirakud Command Area, indebtedness, out migration, Western Odisha
Procedia PDF Downloads 335379 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might happen under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance in its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trends functions, bi-parameters weibull density function
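The two-parameter Weibull density underlying the model's trend function is f(x) = (k/λ)(x/λ)^(k−1)·exp(−(x/λ)^k) for x ≥ 0, with shape k and scale λ. A minimal sketch (the parameter values below are illustrative only, not fitted values from the study):

```python
import math

def weibull_pdf(x, k, lam):
    """Two-parameter Weibull probability density:
    f(x) = (k/lam) * (x/lam)**(k-1) * exp(-(x/lam)**k) for x >= 0."""
    if x < 0:
        return 0.0
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

# With shape k = 1 the density reduces to the exponential with rate 1/lam:
print(weibull_pdf(0.0, 1.0, 2.0))  # -> 0.5, i.e. (1/2) * exp(0)
```

Varying k is what lets the Weibull mimic other distributions, exponential at k = 1 and approximately normal near k ≈ 3.6, which is the flexibility the abstract credits it with.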
Procedia PDF Downloads 309378 An Investigation of the Relationship between Organizational Culture and Innovation Type: A Mixed Method Study Using the OCAI in a Telecommunication Company in Saudi Arabia
Authors: A. Almubrad, R. Clouse, A. Aljlaoud
Abstract:
Organizational culture (OC) is recognized to have an influence on the propensity of organizations to innovate. It is also presumed that it may impede the innovation process from thriving within the organization. Investigating the role organizational culture plays in enabling or inhibiting innovation merits exploration, in order to identify the organizational cultural attributes necessary to reach innovation goals. This study aims to investigate a preliminary heuristic matching OC attributes to the type of innovation that has the potential to thrive within those attributes. A mixed methods research approach was adopted to achieve the research aims. Accordingly, participants from a national telecom company in Saudi Arabia took the Organizational Culture Assessment Instrument (OCAI). A further sample of managing directors selected from the respondents' pool was interviewed in the qualitative phase. Our study findings reveal that the market culture type has a tendency to adopt radical innovations to disrupt the market and to preserve its market position. In contrast, we find that the adhocracy culture type tends to adopt the incremental innovation type, and this tends to be more convenient for employees due to its low levels of uncertainty. Our results are an encouraging indication that matching organizational culture attributes to the type of innovation aids in innovation management. This study carries limitations, as it draws its findings from a limited sample of OC attributes that identify with the adhocracy and market culture types. An extended investigation is merited to explore other types of organizational cultures and their optimal innovation types.Keywords: incremental innovation, radical innovation, organization culture, market culture, adhocracy culture, OCAI
Procedia PDF Downloads 105377 Seismic Loss Assessment for Peruvian University Buildings with Simulated Fragility Functions
Authors: Jose Ruiz, Jose Velasquez, Holger Lovon
Abstract:
Peruvian university buildings are critical structures for which very little research about their seismic vulnerability is available. This paper develops a probabilistic methodology that predicts seismic loss for university buildings with simulated fragility functions. Two university buildings located in the city of Cusco were analyzed. Fragility functions were developed considering the uncertainty of seismic and structural parameters. The fragility functions were generated with the Latin Hypercube technique, an improved Monte Carlo-based method, which optimizes the sampling of structural parameters and provides at least 100 reliable samples for every level of seismic demand. Concrete compressive strength, maximum concrete strain and yield stress of the reinforcing steel were considered as the key structural parameters. The seismic demand is defined by synthetic records which are compatible with the elastic Peruvian design spectrum. Acceleration records are scaled based on the peak ground acceleration on rigid soil (PGA), which goes from 0.05g to 1.00g. A total of 2000 structural models were considered to account for both structural and seismic variability. These functions represent the overall building behavior because they give rational information regarding damage ratios for defined levels of seismic demand. The university buildings show an expected Mean Damage Factor of 8.80% and 19.05%, respectively, for the 0.22g-PGA scenario, which was amplified by the soil type coefficient and resulted in 0.26g-PGA. These ratios were computed considering a seismic demand related to a 10% probability of exceedance in 50 years, which is a requirement of the Peruvian seismic code. These results show an acceptable seismic performance for both buildings.Keywords: fragility functions, university buildings, loss assessment, Monte Carlo simulation, latin hypercube
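Latin Hypercube sampling, as used above, stratifies each parameter's range into N equal-probability intervals and draws exactly one sample per interval, shuffling the strata independently per dimension so the samples pair up randomly. A minimal sketch over uniform bounds; the parameter ranges below are hypothetical, and the study would more likely sample from fitted probability distributions via their inverse CDFs:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube sample over uniform ranges: each parameter range is
    split into n_samples equal strata, one draw per stratum, shuffled per
    dimension. Returns n_samples tuples, one per model realization."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one uniform draw inside each stratum [i/n, (i+1)/n)
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)  # decorrelate pairings across dimensions
        columns.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*columns))

# Hypothetical ranges: concrete f'c (MPa), max concrete strain, steel fy (MPa)
samples = latin_hypercube(100, [(17.0, 25.0), (0.003, 0.005), (380.0, 460.0)])
print(len(samples))  # -> 100 structural-model realizations
```

Because every stratum is hit exactly once, 100 samples cover each parameter's range far more evenly than 100 plain Monte Carlo draws, which is why the abstract can call the 100 samples per demand level "reliable".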
Procedia PDF Downloads 144