Search results for: aleatory uncertainty
199 Evidence of a Negativity Bias in the Keywords of Scientific Papers
Authors: Kseniia Zviagintseva, Brett Buttliere
Abstract:
Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things, which cause them dissonance and a negative affective state of uncertainty or contradiction. While this is agreed upon by philosophers of science, there are few empirical demonstrations. Here we examine the keywords of papers published by PLOS in 2014 and show with several sentiment analyzers that negative keywords are studied more than positive keywords. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). Counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. To find the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was utilized 19.56 times, with half of the keywords being utilized only once and the maximum number of uses being 18,589. The keywords identified as negative were utilized 37.39 times on average, the positive keywords 14.72 times, and the neutral keywords 19.29 times. This difference is only marginally significant (F = 2.82, p = .05), but one must keep in mind that more than half of the keywords are utilized only once, artificially increasing the variance and driving the effect size down. To examine the effect more closely, we looked at the 25 most utilized keywords that carry a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, in positions 5 and 13 respectively, with all the rest identified as negative. ‘Diseases’ is the most studied keyword with 8,790 uses, with ‘cancer’ and ‘infectious’ being the second and fourth most utilized sentiment-laden keywords. The sentiment analysis is not perfect, though: the words ‘diseases’ and ‘disease’ are counted separately, taking the 1st and 3rd positions. Combined, they remain the most common sentiment-laden keyword, utilized 13,236 times. Beyond splitting words, the sentiment analyzer logs ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word is most commonly utilized as part of ‘health care’, ‘critical care’ or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics
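As a rough illustration of the pipeline described in this abstract (not the authors' code; the keyword table and polarity lexicon below are hypothetical stand-ins for the PLOS ONE data and the Hu and Liu lexicon), the sentiment-versus-usage comparison can be sketched as a one-way ANOVA:

```python
# Toy reproduction: tag each unique keyword with a lexicon polarity, then
# compare mean usage counts across negative/neutral/positive classes with a
# one-way ANOVA (F-test), as reported in the abstract.
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical stand-ins for the 47,415 unique keywords and their use counts.
keywords = pd.DataFrame({
    "keyword": ["diseases", "cancer", "care", "dynamics", "regression", "model"],
    "uses":    [8790, 5120, 3010, 1400, 980, 2200],
})
lexicon = {"diseases": -1, "cancer": -1, "regression": -1,
           "care": 1, "dynamics": 1}    # Hu & Liu-style polarity; "model" is neutral
keywords["sentiment"] = keywords["keyword"].map(lexicon).fillna(0)

groups = [g["uses"].values for _, g in keywords.groupby("sentiment")]
F, p = f_oneway(*groups)
print(keywords.groupby("sentiment")["uses"].mean())
print(f"F = {F:.2f}, p = {p:.3f}")
```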
Procedia PDF Downloads 186

198 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow
Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi
Abstract:
Sand production can lead to deposition of particles or erosion. Low production rates resulting in deposition can partially clog systems and cause under-deposit corrosion. Commercially available nonintrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know the detection limits to appropriately plan for cleaning of separation equipment or to assess the risk of erosion. These monitors are based on detecting the acoustic signature of sand as the particles impact the pipe walls. The objective of this work is to determine threshold detection limits for acoustic sand monitors that are commercially available. The minimum threshold sand concentration that can be detected in a pipe is determined as a function of flowing gas and liquid velocities. A large-scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes and pipe orientations (vertical and horizontal). The manufacturers recommend that the monitors be placed on a bend to maximize the number of particle impacts, so results are shown for monitors placed at 45 and 90 degree positions in a bend. Acoustic sand monitors that clamp to the outside of the pipe are passive and listen for solid particle impact noise. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for the various flow regimes generated in the horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows, the threshold sand rates are much higher than for the other flow regimes investigated, such as slug and annular flow. However, the background noise generated by the slug flow regime is very high and causes high uncertainty in the detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of the high gas velocities. The effects of monitor placement around elbows in vertical and horizontal pipes are also examined for the 150-micron sand. The results show that the threshold sand rates detected in the vertical orientation are generally lower for all flow regimes investigated.
Keywords: acoustic monitor, sand, multiphase flow, threshold
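The threshold logic itself is simple to sketch. The snippet below is a generic illustration, not the ClampOn or Roxar firmware: the background acoustic level for a given flow regime is characterized first, and the threshold sand rate is the smallest tested rate whose signal clears that level. The calibration function is hypothetical.

```python
# Minimal sketch of threshold detection: flow-only background noise is
# characterized, and a sand rate is detectable once the monitor signal
# exceeds background mean + 3 sigma for that flow regime.
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(50.0, 2.0, 10_000)   # flow noise only (arbitrary units)
detect_level = background.mean() + 3 * background.std()

def detectable(sand_rates, response):
    """Return the smallest tested sand rate whose signal clears detect_level.

    `response` maps a sand rate (g/s) to the mean acoustic signal; in practice
    it is measured in the flow loop for each flow regime and sand size.
    """
    for rate in sorted(sand_rates):
        if response(rate) > detect_level:
            return rate
    return None

# Hypothetical calibration for one flow regime: signal grows with sand rate.
response = lambda rate: 50.0 + 40.0 * rate
print("threshold sand rate:", detectable([0.05, 0.1, 0.2, 0.5, 1.0], response), "g/s")
```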
Procedia PDF Downloads 407

197 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of depositional origin, while secondary porosity arises from chemical reactions during diagenesis. Understanding the carbonate pore system is an integral part of hydrocarbon exploration. However, current porosity classification schemes are too limited to adequately predict the petrophysical properties of reservoirs of various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity accounts for 40 to 50% of the total porosity. These Miocene carbonates were shown to contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons; neglecting its impact increases the uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, classification can be improved by including pore types and pore structures, divided into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir
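The macro/micro split described above reduces to simple arithmetic; the numbers below are illustrative, not the paper's data:

```python
# Worked example of the point-counting workflow: thin-section point counts
# give macroporosity, which is subtracted from total porosity (e.g., from
# core analysis) to derive microporosity.
points_total = 300          # points counted on one thin-section image
points_in_macropores = 33   # points landing in visible (e.g., mouldic) pores

macroporosity = points_in_macropores / points_total        # ~11%
total_porosity = 0.20                                      # from core analysis
microporosity = total_porosity - macroporosity             # ~9%

print(f"macro = {macroporosity:.1%}, micro = {microporosity:.1%}, "
      f"micro share of total = {microporosity / total_porosity:.0%}")
```

With these illustrative numbers the micro share of total porosity comes out at 45%, consistent with the 40 to 50% range reported above.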
Procedia PDF Downloads 154

196 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator
Authors: Yi-Chun Kuo, Jo-Chieh Lin
Abstract:
The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing an ever-changing market environment, in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers, and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified, and supply chain vulnerability has surged, resulting in adverse effects on supply chain performance. Thus, this study uses supply chain vulnerability as a moderating variable and applies structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. Data were collected via questionnaires from the management level of enterprises in Taiwan and China; 149 questionnaires were received. The result of confirmatory factor analysis shows that the path coefficients of supply chain strategy on supply chain integration and supply chain performance are positive (0.497, t = 4.914; 0.748, t = 5.919), indicating a significantly positive effect. Supply chain integration is also significantly positively correlated with supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on the relationships of supply chain strategy and supply chain integration with supply chain performance are significant (7.407; 4.687). In Taiwan, 97.73% of enterprises are small- and medium-sized enterprises (SMEs) focusing on receiving original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and respond to market changes, these enterprises especially focus on supply chain flexibility and on integration with upstream and downstream enterprises. According to the observations of this research, the effect of supply chain vulnerability on supply chain performance is significant, so enterprises need to attach great importance to the management of supply chain risk and conduct risk analysis on their suppliers in order to formulate response strategies for emergency situations. At the same time, risk management should be incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on overall supply chain performance.
Keywords: supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling
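A simplified stand-in for the SEM moderation test (an OLS model with interaction terms rather than a full latent-variable SEM; the data are synthetic) illustrates how a significant interaction coefficient signals moderation by vulnerability:

```python
# Regress performance on strategy, integration, vulnerability, and the
# interaction terms; significant interaction coefficients indicate that
# vulnerability moderates the strategy/integration -> performance links.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 149                                   # matches the study's sample size
strategy = rng.normal(size=n)
integration = 0.5 * strategy + rng.normal(scale=0.8, size=n)
vulnerability = rng.normal(size=n)
performance = (0.7 * strategy + 0.2 * integration
               - 0.3 * strategy * vulnerability
               - 0.2 * integration * vulnerability
               + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([
    strategy, integration, vulnerability,
    strategy * vulnerability, integration * vulnerability,
]))
fit = sm.OLS(performance, X).fit()
print(fit.summary(xname=["const", "strategy", "integration", "vulnerability",
                         "strategy_x_vuln", "integration_x_vuln"]))
```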
Procedia PDF Downloads 316

195 Energy Storage Modelling for Power System Reliability and Environmental Compliance
Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari
Abstract:
Reliable and economic operation of power systems is becoming extremely challenging with large-scale integration of renewable energy sources, due to the intermittency and uncertainty associated with renewable power generation. It is, therefore, important to make a quantitative risk assessment and explore the potential resources to mitigate such risks. Probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and compressed air energy storage (CAES), incorporating specific charge/discharge performance and failure characteristics suitable for probabilistic risk assessment in power system operation and planning, are presented in this paper. The proposed methodology used in FESS modelling offers flexibility to accommodate different configurations of plant topology. CAES is perceived to have high potential for grid-scale application, and a hybrid approach is proposed which embeds a Monte Carlo simulation (MCS) method in an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, as well as to assess their contributions in facilitating wind integration during different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. The impacts of failure rates of the critical components of ESS on the expected state of charge (SOC) and on the performance of the different types of ESS during operation are illustrated with selected studies on the test system. The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators in arriving at effective and efficient policies, investment decisions, and operating strategies for planning and operation of power systems with large penetrations of renewable energy sources.
Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation
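The hybrid idea of embedding Monte Carlo simulation in the reliability assessment can be sketched as follows; this is a toy sequential simulation with hypothetical parameters, not the paper's CAES model:

```python
# Sequential Monte Carlo sketch: a storage unit's state of charge (SOC) is
# simulated hour by hour under random critical-component failures, yielding
# an expected-SOC profile of the kind discussed in the abstract.
import numpy as np

rng = np.random.default_rng(3)
hours, trials = 168, 5000            # one week, 5000 MC samples
fail_rate = 0.001                    # per-hour failure prob. of a critical part
repair_time = 24                     # hours to repair

soc = np.empty((trials, hours))
for t in range(trials):
    s, down = 0.5, 0                 # start half-charged, not failed
    for h in range(hours):
        if down:
            down -= 1                # unit unavailable while under repair
        elif rng.random() < fail_rate:
            down = repair_time
        else:
            # charge off-peak, discharge on-peak (toy dispatch rule)
            s = min(1.0, s + 0.05) if h % 24 < 8 else max(0.0, s - 0.03)
        soc[t, h] = s
print("expected SOC at end of week:", soc[:, -1].mean())
```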
Procedia PDF Downloads 131

194 Intellectual Property Rights Reforms and the Quality of Exported Goods
Authors: Gideon Ndubuisi
Abstract:
It is widely acknowledged that the quality of a country’s exports matters more decisively than the quantity it exports. Hence, understanding the drivers of exported goods’ quality is a relevant policy question. Among other things, product quality upgrading is a venture with considerable cost uncertainty for the entrepreneur who undertakes it. Once a product is successfully upgraded, however, others can imitate it, and hence the returns to the pioneer entrepreneur are socialized. Along this line, a government policy such as intellectual property rights (IPRs) protection, which lessens the non-appropriability problem and incentivizes cost-discovery investments, becomes both a panacea in addressing the market failure and a sine qua non for an entrepreneur to engage in product quality upgrading. In addition, product quality upgrading involves complex tasks which often require a great deal of knowledge and technology sharing beyond the bounds of the firm, thereby creating room for knowledge spillovers and imitation. Without an institution that protects upstream suppliers of knowledge and technology, technology masking occurs, which bids up marginal production costs and causes product quality to fall. Despite these clear associations between IPRs and product quality upgrading, the surging literature on the drivers of the quality of exported goods has proceeded almost in isolation from IPRs protection as a determinant. Consequently, the current study uses a difference-in-difference method to evaluate the effects of IPRs reforms on the quality of exported goods in 16 developing countries over the sample period 1984-2000. The study finds weak evidence that IPRs reforms increase the quality of all exported goods. When the industries are sorted into high- and low-patent-sensitive industries, however, we find strong indicative evidence that IPRs reform increases the quality of exported goods in high-patent-sensitive sectors, both in absolute terms and relative to the low-patent-sensitive sectors, in the post-reform period. We also obtain strong indicative evidence that it brought the quality of exported goods in the high-patent-sensitive sectors closer to the quality frontier. Accounting for time-duration effects, these observed effects grow over time. The results are also largely consistent when we consider the sophistication and complexity of exported goods rather than just quality upgrades.
Keywords: exports, export quality, export sophistication, intellectual property rights
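The difference-in-difference design can be sketched compactly; the snippet below uses synthetic data rather than the study's trade data, with the interaction coefficient playing the role of the estimated reform effect:

```python
# DiD sketch: export quality is regressed on a post-reform dummy, a
# high-patent-sensitivity dummy, and their interaction; the interaction
# coefficient is the difference-in-difference estimate of the reform effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 4000
df = pd.DataFrame({
    "post_reform": rng.binomial(1, 0.5, n),
    "high_patent": rng.binomial(1, 0.5, n),
})
df["quality"] = (0.2 * df.post_reform + 0.1 * df.high_patent
                 + 0.3 * df.post_reform * df.high_patent   # true DiD effect
                 + rng.normal(scale=1.0, size=n))

fit = smf.ols("quality ~ post_reform * high_patent", data=df).fit()
print(fit.params["post_reform:high_patent"])   # estimated reform effect
```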
Procedia PDF Downloads 125

193 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore chemical dermal exposure assessment models developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK – Control of Substances Hazardous to Health (COSHH), Europe – Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands – Dose-Related Effect Assessment Model (DREAM), Netherlands – Stoffenmanager (STOFFEN), Nicaragua – Dermal Exposure Ranking Method (DERM), and USA/Canada – Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of dermal exposure assessment models. To assess the effectiveness of the semi-quantitative models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to discriminate among its decision factors because the results for all evaluated industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive, operation, near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative work time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation. We also found high correlation for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p < 0.05), respectively. The STOFFEN and DREAM models show poor correlation, with coefficients of 0.24 and 0.29 (p > 0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
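The final validation step, correlating each semi-quantitative model's scores with the quantitative estimates, is a plain Pearson test; the scores below are illustrative, not the study's data:

```python
# Pearson correlation between one semi-quantitative model's scores and the
# quantitative prediction-model estimates, as used to rank the six tools.
from scipy.stats import pearsonr

derm_scores = [12, 18, 25, 31, 40]           # semi-quantitative model output
quant_estimates = [0.8, 1.1, 1.9, 2.4, 3.0]  # mg/day from prediction model

r, p = pearsonr(derm_scores, quant_estimates)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```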
Procedia PDF Downloads 169

192 The Role of Dialogue in Shared Leadership and Team Innovative Behavior Relationship
Authors: Ander Pomposo
Abstract:
Purpose: The aim of this study was to investigate the impact that dialogue has on the relationship between shared leadership and innovative behavior, and the importance of dialogue in innovation. This study aims to contribute to the literature by providing theorists and researchers a better understanding of how to move forward in the study of moderator variables in the relationship between shared leadership and team outcomes such as innovation. Methodology: A systematic review of the literature, a method originally adopted from the medical sciences but also used in management and leadership studies, was conducted to synthesize research in a systematic, transparent and reproducible manner. A final sample of 48 empirical studies was scientifically synthesized. Findings: Shared leadership offers a better solution to team management challenges and goes beyond the classical, hierarchical, or vertical leadership models based on the individual-leader approach. One of the outcomes that emerges from shared leadership is team innovative behavior. To intensify the relationship between shared leadership and team innovative behavior, and to understand when it is most effective, the moderating effects of other variables in this relationship should be examined. This synthesis of the empirical studies revealed that dialogue is a moderator variable that has an impact on the relationship between shared leadership and team innovative behavior when leadership is understood as a relational process. Dialogue is an activity between at least two speech partners trying to fulfill a collective goal, and is a way of living open to people and ideas through interaction. Dialogue is productive when team members engage relationally with one another. When this happens, participants are more likely to take responsibility for the tasks they are involved in and for the relationships they have with others. In this relational engagement, participants are likely to establish high-quality connections with a high degree of generativity. This study suggests that organizations should facilitate dialogue among team members under shared leadership, which has a positive impact on innovation and offers a more adaptive framework for the leadership needed in teams working on complex tasks. These results uncover the need for more research on the role that dialogue plays in contributing to important organizational outcomes such as innovation. Case studies describing both best practices and obstacles of dialogue in team innovative behavior are necessary to gain more detailed insight into the field. It will be interesting to see how all these fields of research evolve and are implemented in dialogue practices in organizations that use team-based structures to deal with uncertainty, fast-changing environments, globalization and increasingly complex work.
Keywords: dialogue, innovation, leadership, shared leadership, team innovative behavior
Procedia PDF Downloads 182

191 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity
Authors: Denise Bianco
Abstract:
Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts, working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation, and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions on workplace creativity and the related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to the Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity and unconventional agile methods such as Art Thinking, together with direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations, to propose a redefinition of the Artist-in-Residence and its potential impact on organisational creativity. The result is a redefinition of the embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First, the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences. Second, the definition of the embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgement that embedded residencies are, in essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.
Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity
Procedia PDF Downloads 97

190 The Effect of Innovation Capability and Activity, and Wider Sector Condition on the Performance of Malaysian Public Sector Innovation Policy
Authors: Razul Ikmal Ramli
Abstract:
Successful implementation of innovation is a key success formula for a great organization. Innovation ensures competitive advantage as well as the sustainability of an organization in the long run. In the public sector context, the role of innovation is crucial for resolving the dynamic challenges of public services, such as operating under economic uncertainty with limited resources, increasing operating expenditure, and growing expectations among citizens for high-quality, swift and reliable public services. Acknowledging the prospect of innovation as a tool for achieving a high-performance public sector, the Malaysian New Economic Model, launched in 2011, intensified government commitment to foster innovation in the public sector. Since 2011, various initiatives have been implemented; however, little is known about the performance of public sector innovation in Malaysia. Hence, applying national innovation system theory as a pillar, the research objectives focused on measuring the levels of innovation capability, wider public sector condition for innovation, innovation activity, and innovation performance, as well as on examining the relationships among the four constructs, with innovation performance as the dependent variable. For that purpose, 1,000 sets of self-administered survey questionnaires were distributed to heads of units and divisions of 22 federal ministries and central agencies in the administrative, security, social and economic sectors. Based on 456 returned questionnaires, the descriptive analysis found that innovation capability, wider sector condition, innovation activity and innovation performance were rated by respondents at a moderately high level. Based on structural equation modelling, innovation performance was found to be influenced by innovation capability, wider sector condition for innovation, and innovation activity. In addition, the analysis found innovation activity to be the most important construct influencing innovation performance. The study concluded that the innovation policy implemented in the public sector of Malaysia sparked motivation to innovate and resulted in various forms of innovation; however, the overall achievements were not as high as expected. Thus, the study suggested formulating a dedicated policy to strengthen the innovation capability, wider public sector condition for innovation, and innovation activity of the Malaysian public sector. Furthermore, strategic intervention needs to focus on innovation activity, as this construct plays an important role in determining innovation performance. The success of public sector innovation implementation will not only benefit citizens but will also spearhead the competitiveness and sustainability of the country.
Keywords: public sector, innovation, performance, innovation policy
Procedia PDF Downloads 280

189 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (e.g., bacteria or yeasts for antibiotics production). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in this application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g., makespan, lateness) should change little, if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints. In particular, time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e., the completion times of a culture tasks) lose their values (i.e., the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e., the completion times of backup culture tasks) and to at most b other variables (i.e., delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries
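The flavor of the approach can be sketched with an off-the-shelf constraint solver. The snippet below is a minimal stand-in, not the authors' COP model: it uses Google OR-Tools CP-SAT and reserves a backup interval for each culture task, a much-simplified version of the (a, b) super-solution idea, with hypothetical durations:

```python
# Discrete-time batch schedule where each culture task is paired with a
# reserved backup interval on the same resource, so a contaminated culture
# can be "repaired" by its backup without restructuring the whole schedule.
from ortools.sat.python import cp_model

durations = [4, 3, 5]          # hypothetical task durations (time slots)
horizon = 30

model = cp_model.CpModel()
intervals, backups, ends = [], [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    e = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(s, d, e, f"task_{i}"))
    # Backup culture task reserved after the primary one finishes.
    bs = model.NewIntVar(0, horizon, f"bstart_{i}")
    be = model.NewIntVar(0, horizon, f"bend_{i}")
    model.Add(bs >= e)
    backups.append(model.NewIntervalVar(bs, d, be, f"backup_{i}"))
    ends.append(be)

# One culture vessel: primary and backup slots may not overlap.
model.AddNoOverlap(intervals + backups)

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("robust makespan:", solver.Value(makespan))
```

Reserving every backup slot is deliberately over-conservative compared to the paper's repairable (a, b) super-solutions, but it conveys the trade-off: robustness is bought with schedule slack.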
Procedia PDF Downloads 618

188 A Case Study of the Saudi Arabian Investment Regime
Authors: Atif Alenezi
Abstract:
The low global oil price poses economic challenges for Saudi Arabia, as oil revenues still make up a large percentage of its Gross Domestic Product (GDP). At the end of 2014, the Consultative Assembly considered a report from the Committee on Economic Affairs and Energy which highlighted that the economy had not been successfully diversified. There thus exist ample reasons for modernising the Foreign Direct Investment (FDI) regime, primarily to achieve and maintain prosperity and facilitate peace in the region. Therefore, this paper aims at identifying specific problems with the existing FDI regime in Saudi Arabia and, subsequently, some solutions to those problems. Saudi Arabia adopted its first specific legislation in 1956, which imposed significant restrictions on foreign ownership. Since then, Saudi Arabia has modernised its FDI framework with the passing of the Foreign Capital Investment Act 1979, the Foreign Investment Law 2000 and the accompanying Executive Rules 2000, and the recently adopted Implementing Regulations 2014. Nonetheless, the legislative provisions contain various gaps, and the failure to address these gaps creates risks and uncertainty for investors. For instance, the important topic of mergers and acquisitions has not been addressed in the Foreign Investment Law 2000. The circumstances in which expropriation can be considered to be in the public interest have not been defined. Moreover, Saudi Arabia has not entered into many bilateral investment treaties (BITs). This affects the investment climate, as foreign investors are not afforded typical rights. An analysis of the BITs which have been entered into reveals that the national treatment standard and stabilisation, umbrella or renegotiation provisions have not been included. This is problematic since the 2000 Act does not spell out the applicable standard in accordance with which foreign investors should be treated. Moreover, the most-favoured-nation (MFN) and fair and equitable treatment (FET) standards have not been put on a statutory footing. Whilst the Arbitration Act 2012 permits investment disputes to be internationalised, restrictions have been retained. The effectiveness of international arbitration is further undermined because Saudi Arabia does not enforce non-domestic arbitral awards which contravene public policy. Furthermore, the reservation to the Convention on the Settlement of Investment Disputes allows Saudi Arabia to exclude petroleum and sovereign disputes. Interviews with foreign investors who operate in Saudi Arabia highlight additional issues. Saudi Arabia ought not to delay far-reaching structural reforms.
Keywords: FDI, Saudi, BITs, law
Procedia PDF Downloads 409

187 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes
Authors: Alan Luo, Hunter N. B. Moseley
Abstract:
Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, whether derived from x-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for x-ray crystallography, or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality were detected across x-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher’s exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain cutoff. The results revealed a consistent general effect from region-specific normalized b-factors but no specific effect from metal ion contact distances, and only a very weak effect from crystal contact distance as compared to the b-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should include screening the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.
Keywords: biomacromolecular structure, coenzyme, electron density discrepancy analysis, x-ray crystallography
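Each factor evaluation is a 2x2 Fisher's exact test; a minimal sketch with hypothetical counts:

```python
# Testing whether regions with a high normalized b-factor are enriched among
# the top-5% regional-quality outliers, via Fisher's exact test.
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = outlier / non-outlier (top 5% criterion),
# columns = b-factor above / below the chosen cutoff. Counts are hypothetical.
table = [[42, 18],     # outlier regions
         [310, 630]]   # non-outlier regions
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```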
Procedia PDF Downloads 131

186 Is the Addition of Computed Tomography with Angiography Superior to a Non-Contrast Neuroimaging Only Strategy for Patients with Suspected Stroke or Transient Ischemic Attack Presenting to the Emergency Department?
Authors: Alisha M. Ebrahim, Bijoy K. Menon, Eddy Lang, Shelagh B. Coutts, Katie Lin
Abstract:
Introduction: Frontline emergency physicians require clear and evidence-based approaches to guide neuroimaging investigations for patients presenting with suspected acute stroke or transient ischemic attack (TIA). Various forms of computed tomography (CT) are currently available for initial investigation, including non-contrast CT (NCCT), CT angiography of the head and neck (CTA), and CT perfusion (CTP). However, there is uncertainty around the optimal imaging choice for cost-effectiveness, particularly for minor or resolved neurological symptoms. In addition to the cost of CTA and CTP testing, there is also concern about increased incidental findings, which may contribute to the burden of overdiagnosis. Methods: In this cross-sectional observational study, analysis was conducted on 586 anonymized triage and diagnostic imaging (DI) reports for neuroimaging orders completed on patients presenting to adult emergency departments (EDs) with a suspected stroke or TIA from January to December 2019. The primary outcome of interest is the diagnostic yield of NCCT+CTA compared to NCCT alone for patients presenting to urban academic EDs with Canadian Emergency Department Information System (CEDIS) complaints of “symptoms of stroke” (specifically acute stroke and TIA indications). DI reports were coded into 4 pre-specified categories (endorsed by a panel of stroke experts): no abnormalities, clinically significant findings (requiring immediate or follow-up clinical action), incidental findings (not meeting prespecified criteria for clinical significance), and both significant and incidental findings. Standard descriptive statistics were performed. A two-sided p-value < 0.05 was considered significant. Results: 75% of patients received NCCT+CTA imaging, 21% received NCCT alone, and 4% received NCCT+CTA+CTP. The diagnostic yield of NCCT+CTA imaging for prespecified clinically significant findings was 24%, compared to only 9% for NCCT alone. The proportion of incidental findings was 30% in the NCCT-only group and 32% in the NCCT+CTA group. CTP did not significantly increase the yield of significant or incidental findings. Conclusion: In this cohort of patients presenting with suspected stroke or TIA, an NCCT+CTA neuroimaging strategy had a higher diagnostic yield for clinically significant findings than NCCT alone, without significantly increasing the number of incidental findings identified.
Keywords: stroke, diagnostic yield, neuroimaging, emergency department, CT
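The yield comparison can be checked with a two-proportion test; the counts below are approximated from the reported percentages rather than taken from the study's raw data:

```python
# Two-proportion z-test of clinically significant findings: NCCT+CTA vs NCCT.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_ncct_cta, n_ncct = 440, 123     # ~75% and ~21% of the 586 patients
hits = np.array([int(round(0.24 * n_ncct_cta)),   # 24% yield with NCCT+CTA
                 int(round(0.09 * n_ncct))])      # 9% yield with NCCT alone
obs = np.array([n_ncct_cta, n_ncct])

stat, p = proportions_ztest(hits, obs)
print(f"z = {stat:.2f}, p = {p:.4f}")
```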
Procedia PDF Downloads 101

185 Digital Structural Monitoring Tools @ADaPT for Cracks Initiation and Growth due to Mechanical Damage Mechanism
Authors: Faizul Azly Abd Dzubir, Muhammad F. Othman
Abstract:
The conventional structural health monitoring approach for mechanical equipment uses inspection data from Non-Destructive Testing (NDT) during plant shutdown windows, together with fitness-for-service evaluation, to estimate the integrity of equipment that is prone to crack damage. Yet this forecast is fraught with uncertainty because it is often based on assumptions about future operational parameters, and the prediction is not continuous or online. Advanced Diagnostic and Prognostic Technology (ADaPT) uses Acoustic Emission (AE) technology and a stochastic prognostic model to provide real-time monitoring and prediction of mechanical defects or cracks. The forecast can help the plant authority handle cracked equipment before it ruptures and causes an unscheduled shutdown of the facility. ADaPT employs process historical data trending, finite element analysis, fitness-for-service assessment, and probabilistic statistical analysis to develop a prediction model for crack initiation and growth due to mechanical damage. The prediction model is combined with live equipment operating data for real-time prediction of the remaining life span with respect to fracture. ADaPT was first deployed at a hot combined feed exchanger (HCFE) that had suffered creep crack damage. The tool predicted the initiation of a crack at the top weldment area by April 2019; during the shutdown window in April 2019, a crack was discovered and repaired. Furthermore, ADaPT successfully advised the plant owner to run at full capacity, improving output by up to 7%, up to the April 2019 shutdown. ADaPT was also used on a coke drum that had extensive fatigue cracking. The initial cracks were declared safe, with remaining crack lifetimes extended another five months, just in time for a planned facility downtime to execute repairs. The prediction model, when combined with plant information data, allows plant operators to continuously monitor crack propagation caused by mechanical damage, improving maintenance planning and avoiding costly unscheduled repair shutdowns.
Keywords: mechanical damage, cracks, continuous monitoring tool, remaining life, acoustic emission, prognostic model
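ADaPT's stochastic prognostic model is proprietary, but the general shape of crack-growth prognosis can be sketched with a plain Paris-law integration plus Monte Carlo scatter on the material constants; all values below are hypothetical:

```python
# Generic remaining-life estimate: integrate Paris-law fatigue crack growth
# from the measured crack depth to the critical depth, sampling the Paris
# constant to reflect material scatter, and report life percentiles.
import numpy as np

rng = np.random.default_rng(5)
a0, a_crit = 2e-3, 25e-3          # initial / critical crack depth (m)
dsigma, Y = 120e6, 1.12           # stress range (Pa), geometry factor
cycles_per_day, block = 500, 1000

lives_days = []
for _ in range(500):              # Monte Carlo over material scatter
    C = 10 ** rng.normal(-11.6, 0.2)  # Paris constant (m/cycle, dK in MPa*sqrt(m))
    m = 3.0
    a, n = a0, 0
    while a < a_crit and n < 50_000_000:
        dK = Y * dsigma * np.sqrt(np.pi * a) / 1e6   # MPa*sqrt(m)
        a += block * C * dK ** m                     # da/dN = C * dK^m
        n += block
    lives_days.append(n / cycles_per_day)

print(f"median remaining life: {np.median(lives_days):.0f} days "
      f"(5th percentile: {np.percentile(lives_days, 5):.0f} days)")
```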
Procedia PDF Downloads 77

184 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India
Authors: Disha Bhanot, Vinish Kathuria
Abstract:
This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive lower prices than expected (expectations typically framed in relation to the cost of production). Distress sale magnifies the price uncertainty of produce, leading to substantial income loss; with rising input costs of farming, high variability in harvest prices severely affects farmers' profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, asking for information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, and the role of middlemen in selling, besides other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household and institutional characteristics. The Heckman two-stage model would be applied to find the probability of a farmer falling into distress sale, as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. Findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
Keywords: distress sale, horticulture, income loss, India, price uncertainty
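The planned estimation strategy can be sketched as the classic Heckman two-step on synthetic data (the survey data are still being collected): a probit for the occurrence of distress sale, then an outcome regression for sellers corrected with the inverse Mills ratio:

```python
# Heckman two-step sketch: step 1 is a probit for whether a farmer sells in
# distress; step 2 is an OLS for the extent of distress (price gap) among
# sellers, with the inverse Mills ratio as a selection-correction regressor.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
capital = rng.normal(size=n)           # hypothetical access-to-capital index
storage = rng.binomial(1, 0.4, n)      # hypothetical access to storage
u = rng.normal(size=n)
distress = (0.5 - 0.8 * capital - 0.6 * storage + u > 0).astype(int)

# Step 1: probit for the selection (occurrence of distress sale).
Z = sm.add_constant(np.column_stack([capital, storage]))
probit = sm.Probit(distress, Z).fit(disp=0)
zb = Z @ probit.params
imr = norm.pdf(zb) / norm.cdf(zb)      # inverse Mills ratio

# Step 2: OLS for the price gap among distress sellers only.
price_gap = 1.0 - 0.5 * capital + 0.7 * u + rng.normal(scale=0.3, size=n)
sel = distress == 1
X = sm.add_constant(np.column_stack([capital[sel], imr[sel]]))
ols = sm.OLS(price_gap[sel], X).fit()
print(ols.params)   # the coefficient on the IMR captures the selection effect
```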
Procedia PDF Downloads 245

183 Causation and Criminal Responsibility
Authors: László Schmidt
Abstract:
“Post hoc ergo propter hoc” means: after it, therefore because of it. In other words: if event Y followed event X, then event Y must have been caused by event X. The question of causation has long been a central theme in philosophical thought, and many different theories have been put forward. However, causality is an essentially contested concept (ECC), as it has no universally accepted definition and is used differently in everyday, scientific, and legal thinking. In the field of law, the question of causality arises mainly in the context of establishing legal liability: in criminal law and in the rules of civil law on liability for damages arising either from breach of contract or from tort. The study presents some philosophical theories of causality and examines how these theories correlate with legal causality. It is quite interesting when philosophical abstractions meet the pragmatic demands of jurisprudence. In Hungarian criminal judicial practice, the principle of equivalence of conditions is the generally accepted and applicable standard of causation, under which all necessary conditions are considered equivalent and thus each a cause. The idea is that without the trigger, the subsequent outcome would not have occurred; all the conditions that led to the outcome are equivalent. Where the trigger that led to the result is accompanied by an additional intervening cause, including an accidental one independent of the perpetrator, the causal link is not broken; at most, the causal link becomes looser. The importance of intervening causes in the outcome should be given due weight in the imposition of the sentence. According to court practice, if the conduct of the offender sets in motion the causal process which led to the result, his criminal liability is not excluded, and the causal process is not interrupted, even if other factors, such as the victim's illness, may have contributed to it. The concausa does not break the chain of causation; that is, the existence of a causal link establishes the criminal liability of the offender. Courts also hold that an act is a cause of the result if the act cannot be omitted without the result also being omitted. This essentially assumes a hypothetical elimination procedure: the act must be omitted in thought, and it must then be examined whether the result would still occur or whether it would be omitted. On the substantive side, the essential condition for establishing the offence is that the result must be demonstrably connected with the activity committed. The requirement that facts be established beyond reasonable doubt must also apply to the causal link: that is to say, uncertainty about the causal link between the conduct and the result of the offence precludes the perpetrator from being held liable for the result. Sometimes, however, the courts do not specify in the reasons for their judgments what standard of causation they apply, i.e., on what basis they establish the existence of (legal) causation.
Keywords: causation, Hungarian criminal law, responsibility, philosophy of law
Procedia PDF Downloads 42

182 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHU). There is ample research on the use of GAMs for predicting power consumption at the office-building and nationwide levels. However, there is limited illustration of their anomaly detection capabilities, prescriptive analytics case studies, and integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building, from Jan 2018 to Aug 2019, from an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
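The core of the detection loop can be sketched with the pygam library; this is a minimal illustration on synthetic data, not the campus deployment, and the variable names are assumptions:

```python
# Fit a GAM to AHU power vs. cooling load and hour-of-day, then flag points
# outside the 95% prediction interval as anomaly candidates.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
n = 2000
hour = rng.uniform(0, 24, n)
cooling_load = 200 + 80 * np.sin(hour / 24 * 2 * np.pi) + rng.normal(0, 10, n)
power = (50 + 0.3 * cooling_load + 5 * np.sin(hour / 24 * 2 * np.pi)
         + rng.normal(0, 4, n))

X = np.column_stack([cooling_load, hour])
gam = LinearGAM(s(0) + s(1)).fit(X, power)

# 95% prediction interval; observations outside it are anomaly candidates,
# and the magnitude of the excursion can rank their severity.
intervals = gam.prediction_intervals(X, width=0.95)
anomaly = (power < intervals[:, 0]) | (power > intervals[:, 1])
print(f"{anomaly.sum()} of {n} readings flagged as anomalous")
```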
Procedia PDF Downloads 154

181 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth extremes on system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method in the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined from the probabilities assigned to the input variables present in each combination. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement was observed. The comparison also showed that the Random Set Finite Element Method can be applied to estimate the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
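The random set propagation itself can be sketched independently of the finite element code. In the snippet below, a toy response function stands in for the FE model, and two hypothetical inputs each carry two probability-weighted intervals; propagating every corner combination yields the Belief and Plausibility bounds on the response:

```python
# Random set propagation sketch: focal elements (interval, mass) per input
# are combined, the response interval of each combination is computed from
# its corner values, and Belief/Plausibility bound the failure probability.
import itertools

# (interval, probability mass) focal elements for each input -- hypothetical.
friction_angle = [((28.0, 32.0), 0.6), ((30.0, 36.0), 0.4)]
soil_modulus   = [((40.0, 60.0), 0.5), ((50.0, 80.0), 0.5)]

def wall_displacement(phi, E):
    """Toy response standing in for the FE model: displacement in mm,
    decreasing with friction angle phi and soil modulus E."""
    return 2000.0 / (phi * E ** 0.5)

focal_elements = []
for (phi_iv, p1), (E_iv, p2) in itertools.product(friction_angle, soil_modulus):
    # Response interval from all corner combinations of the input intervals
    # (sufficient here because the toy response is monotonic in both inputs).
    corners = [wall_displacement(phi, E) for phi in phi_iv for E in E_iv]
    focal_elements.append(((min(corners), max(corners)), p1 * p2))

threshold = 10.0  # mm, hypothetical serviceability limit
belief = sum(m for (lo, hi), m in focal_elements if hi <= threshold)
plaus  = sum(m for (lo, hi), m in focal_elements if lo <= threshold)
print(f"P(displacement <= {threshold} mm) lies in [{belief:.2f}, {plaus:.2f}]")
```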
Procedia PDF Downloads 268

180 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning
Authors: S. A. N. Danushka, T. A. Weerasinghe
Abstract:
The diverse necessities of instruction can be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction which ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the ‘new normality' in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to say whether blended learning works similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this particular research uncertainty: having considered existing research approaches, the study methodology was set up to determine efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-method research approach, the major study objective was tested with two heterogeneous samples (N = 135) identified in a virtual learning environment at a Sri Lankan university. A non-randomized informal ‘before-and-after without control group’ design was employed, and two data collection methods, identical pre- and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with the two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was determined via t-tests, ANOVA, and ANCOVA. The effectiveness of the two instructional-strategy implementation models was determined via multiple linear regression analysis. ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not impact the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) analysis shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) analysis shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES Model) are efficient in the delivery of course content in flipped classroom learning.
Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model
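The ANCOVA step, post-test achievement modeled with strategy as the factor and pre-test score as the covariate, can be sketched on synthetic scores:

```python
# ANCOVA sketch: does instructional strategy (SDL vs SRL) affect post-test
# achievement after adjusting for pre-test scores?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(9)
n = 135                                   # matches the study's sample size
df = pd.DataFrame({
    "strategy": rng.choice(["SDL", "SRL"], size=n),
    "pre": rng.normal(50, 10, n),
})
df["post"] = (df.pre * 0.8 + np.where(df.strategy == "SDL", 8, 0)
              + rng.normal(0, 5, n))

fit = smf.ols("post ~ pre + C(strategy)", data=df).fit()
print(anova_lm(fit, typ=2))               # ANCOVA table
```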
Procedia PDF Downloads 197

179 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty
Authors: Ammar Y. Alqahtani
Abstract:
In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers’ environmental awareness has forced original equipment manufacturers to become more environmentally conscious. Therefore, manufacturers have devised different ways to deal with waste from EOL products, viz. remanufacturing, reusing, recycling, or disposal. Manufacturers can reduce the rate of depletion of virgin natural resources, and their dependency on them, when EOL products are remanufactured, reused, or recycled; this also cuts the amount of harmful waste sent to landfills. Disposal of EOL products, by contrast, contributes to the problem and is therefore used as a last option. The number of EOL products needs to be estimated in order to fulfill the component demand. Then, a disassembly process needs to be performed to extract individual components and subassemblies. Smart products are built with embedded sensors, implanted during production, and network connectivity to enable the collection and exchange of data. These sensors allow remanufacturers to predict the optimal warranty policy and time period to offer customers who purchase remanufactured components and products. Sensor-provided data can help evaluate the overall condition of a product, as well as the remaining lives of its components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods. A DTO system is considered in which a variety of EOL products are purchased for disassembly. The model’s main objective is to determine the best combination of EOL products to be purchased from each supplier in each period, maximizing the total profit of the system while satisfying demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, a case study involving various simulation conditions is presented and analyzed to illustrate the applicability of the model.
Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics
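A much-simplified, single-period, linear version of the purchasing decision conveys the structure of the DTO model (the paper's model is multi-period and nonlinear; all data below are hypothetical):

```python
# Single-period LP sketch: choose how many of each EOL product to buy so that
# component demand is met at minimum acquisition-plus-disassembly cost.
from scipy.optimize import linprog

# Component yield per EOL product (rows: components, cols: products A, B).
yields = [[2, 1],    # component 1
          [1, 3]]    # component 2
demand = [400, 600]  # required units of each component
cost = [12.0, 15.0]  # purchase + disassembly cost per unit of A, B

# Minimize cost subject to yields @ x >= demand, written as -yields @ x <= -demand.
res = linprog(c=cost,
              A_ub=[[-y for y in row] for row in yields],
              b_ub=[-d for d in demand],
              bounds=[(0, None), (0, None)])
print("buy:", res.x, "  total cost:", res.fun)
```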
Procedia PDF Downloads 137178 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Some of the functional requirements remain unknown until late stages of product development. One way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. First approaches exist for combined, hybrid models comprising deterministic-normative methods, like the Stage-Gate process, and empirical-adaptive development methods, like Scrum, on the project-management level. However, the question of which development scopes are preferably realized with empirical-adaptive rather than deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors, like a company's technology ability, the prototype manufacturability, and the potential solution space, as well as external factors, like market accuracy, relevance, and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Second, each requirement is evaluated against every internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are composed into the Agile-Indicator. The Agile-Indicator thus constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact captured by the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to high market uncertainty into empirical-adaptive or deterministic-normative development scopes.Keywords: agile, highly iterative development, agile-indicator, product development
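The three derivation steps can be sketched in a few lines of Python: weight the factors, score each requirement, compose the internal and external sums into an indicator, and cluster the result with a small hand-rolled fuzzy c-means. All weights, ratings, and FCM parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-2: factor weights and per-requirement suitability ratings (0-1)
w_int = np.array([0.5, 0.3, 0.2])     # e.g. technology ability, prototype
w_ext = np.array([0.6, 0.4])          # e.g. market volatility, accuracy
ratings_int = rng.random((20, 3))     # 20 requirements x 3 internal factors
ratings_ext = rng.random((20, 2))

# Step 3: compose internal and external sums into the Agile-Indicator
agile = ratings_int @ w_int + ratings_ext @ w_ext

def fcm(x, c=2, m=2.0, iters=100):
    """Minimal fuzzy c-means on 1-D data; returns centres and memberships."""
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                 # memberships sum to 1 per point
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** -p / (d ** -p).sum(axis=0)   # standard FCM update
    return centers, u

centers, u = fcm(agile)
print("cluster centres (empirical-adaptive vs deterministic):", centers)
```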
Procedia PDF Downloads 246177 Navigating through Uncertainty: An Explorative Study of Managers’ Experiences in China-foreign Cooperative Higher Education
Abstract:
To drive practical interpretations and applications of various policies in building transnational education joint ventures, middle managers learn to navigate through uncertainties and ambiguities. However, the current literature says very little about those middle managers' experiences, perceptions, and practices. This paper takes an empirical approach and aims to uncover the middle managers' experiences by conducting interviews, campus visits, and document analysis. Following a qualitative research method approach, the researchers gathered information from a mixture of fourteen foreign and Chinese managers. Their perceptions of China-foreign cooperation in higher education and their perceived roles have offered important, valuable insights into this group of people's attitudes and management performances. The diverse cultural and demographic backgrounds contributed to the significance of the study. There are four key findings. One, middle managers' immediate micro-contexts and individual attitudes are the top two influential factors in managers' performances. Two, the foreign middle managers showed a stronger sense of self-identity in risk-taking. Three, the Chinese middle managers preferred to see difficulties as part of their assigned responsibilities. Four, middle managers in independent universities demonstrated a stronger sense of belonging and fewer frustrations than middle managers in secondary institutes. The researchers propose that training for managers in a transnational educational setting should consider these discoveries when selecting fitting topics and content. In particular, middle managers should be better prepared to anticipate their everyday jobs in the micro-environment; hence, information concerning sponsor organizations' working culture is as essential as knowing the national and local regulations and socio-cultural context. Different case studies can help the managers to recognize and celebrate the diversity in transnational education. Situational stories can help them become aware of the diverse and wide range of work contexts so that they will not feel left alone when facing challenges without relevant previous experience or training. Though this research is a case study based in the Chinese transnational higher education setting, the implications could be relevant and comparable to other transnational higher education situations and help to continue expanding the potential applications in this field.Keywords: educational management, middle manager performance, transnational higher education
Procedia PDF Downloads 164176 Evaluation of the Trauma System in a District Hospital Setting in Ireland
Authors: Ahmeda Ali, Mary Codd, Susan Brundage
Abstract:
Importance: This research focuses on devising and improving Health Service Executive (HSE) policy and legislation, thereby improving patient trauma care and outcomes in Ireland. Objectives: The study measures components of the trauma system in the district hospital setting of the Cavan/Monaghan Hospital Group (CMHG), HSE, Ireland, and uses the collected data to identify the strengths and weaknesses of the CMHG trauma system organisation, including governance, injury data, prevention and quality improvement, scene care and facility-based care, and rehabilitation. The information will be made available to local policy makers to provide an objective situational analysis to assist in future trauma service planning and provision. Design, setting and participants: From 28 April to 28 May 2016, a cross-sectional survey using the World Health Organisation (WHO) Trauma System Assessment Tool (TSAT) was conducted among healthcare professionals directly involved in the level III trauma system of CMHG. Main outcomes: Identification of the strengths and weaknesses of the trauma system of CMHG. Results: High proportions of participants reported inadequate funding for pre-hospital (62.3%) and facility-based (52.5%) trauma care at CMHG. Thirty-four (55.7%) respondents reported that a national trauma registry (TARN) exists, but electronic health records are still not used in trauma care. Twenty-one respondents (34.4%) reported that there are system-wide protocols for determining patient destination and that adequate, comprehensive legislation governing the use of ambulances is enforced; however, a reliable advisory service is lacking. Over 40% of the respondents reported uncertainty about the injury prevention programmes available in Ireland, as well as about the government funding allocated for injury and violence prevention. Conclusions: The results of this study contributed to a comprehensive assessment of the trauma system organisation. The major findings identified three fundamental areas: the inadequate funding at CMHG, the QI techniques and corrective strategies used, and the unfamiliarity with existing prevention strategies. The findings point to the need for further research to guide future development of the trauma system at CMHG (and in Ireland as a whole) in order to maximise best practice and to improve functional and life outcomes.Keywords: trauma, education, management, system
Procedia PDF Downloads 244175 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
The geological environment from which groundwater is collected is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that conceptualized mathematical models are acceptable over the broadest possible range of conditions. Groundwater models have therefore recently become an effective and efficient tool for investigating groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes are geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly omitted from the conceptualization process because modellers believe that an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layer separated by an interface soil layer. A groundwater model is built for Al-Najaf City to explore the impact of this interface. The calibration process is performed using the PEST 'Parameter ESTimation' approach, and the best Dibdibba groundwater model is obtained. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by the interface: dry areas of 56.24 km² and 6.16 km² appear in the upper and lower layers of the aquifer, respectively, and the Euphrates River leaks 7,359 m³/day into the groundwater aquifer. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6,334 m³/day. In addition, the conceptualized models (with and without the interface) reveal different responses to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied under the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the intended and predicted future behaviours are more reliable for planning purposes.Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW
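The PEST-style calibration step can be illustrated with a toy example: estimate the hydraulic conductivity of two layers so that a crude one-dimensional head model reproduces "observed" heads. This Python sketch is only a stand-in for the actual MODFLOW/PEST workflow; the head equation and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 1000.0, 20)         # distance along flow path (m)

def heads(k):
    """Crude 1-D head profile for two layers with conductivities k1, k2."""
    k1, k2 = k
    upper = 30.0 - x * 0.01 / k1
    lower = np.where(x > 500.0, (x - 500.0) * 0.01 / k2, 0.0)
    return upper - lower

k_true = np.array([5.0, 1.0])            # "real" conductivities (m/day)
obs = heads(k_true) + np.random.default_rng(2).normal(0.0, 0.05, x.size)

# Minimize the misfit between simulated and observed heads, as a PEST
# run would, starting from a deliberately wrong initial guess
fit = least_squares(lambda k: heads(k) - obs, x0=[1.0, 1.0],
                    bounds=([0.1, 0.1], [50.0, 50.0]))
print("estimated K per layer (m/day):", np.round(fit.x, 2))
```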
Procedia PDF Downloads 183174 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints
Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno
Abstract:
Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG), such as wind and photovoltaic generation, appear as cornerstones for achieving these energy targets. Despite its benefits, an increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is the battery energy storage system (BESS). Because of the flexibility that BESS provides in power system operation, its integration allows the variability and uncertainty of renewable energies to be mitigated, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit currents, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of battery energy storage systems (BESS) in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints to allocate BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS with stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and coordinating their placement in future converter-dominated power systems.Keywords: battery energy storage, power system stability, system strength, weak power system
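One way to picture an allocation method with stability constraints is a small linear program: choose BESS capacity at candidate buses at minimum cost so that the short-circuit ratio (SCR) at each CIG bus stays above a threshold. The network data and the linear fault-level sensitivity assumed below are illustrative simplifications, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

scc = np.array([800.0, 450.0, 300.0])    # short-circuit capacity (MVA)
cig = np.array([200.0, 180.0, 150.0])    # installed CIG per bus (MW)
scr_min = 3.0                            # minimum acceptable SCR
boost = np.eye(3) * 1.5                  # assumed MVA of fault level added
                                         # per MW of BESS at each bus
cost = np.array([1.0, 1.2, 0.9])         # relative cost per MW of BESS

# Constraint per bus i: (scc_i + sum_j boost_ij x_j) / cig_i >= scr_min
need = scr_min * cig - scc               # extra fault level each bus needs
res = linprog(c=cost, A_ub=-boost, b_ub=-need, bounds=[(0, None)] * 3)
print("BESS MW per bus:", np.round(res.x, 1))
```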
Procedia PDF Downloads 61173 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy
Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright
Abstract:
The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that rapidly responds to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context about the state of each cell. The amount of information needed to describe cellular manufacturing systems is investigated through two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information that describes the states of a manufacturing system as they actually occur during manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up in a cellular manufacturing configuration. The cells in the configuration included a material-handling cell, a 3D-printer cell, an assembly cell, a manufacturing cell, and a quality-control cell. The factory shop provides manufactured parts to a number of clients; there are substantial variations in part configurations, and new part designs are continually being introduced to the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural entropy and operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control; the necessary real-time information is readily available to the decision maker at any point in time. For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule. The performance of the downstream production flow, fed by the parallel upstream flow of information on the factory state, was also increased.Keywords: information entropy, communication in manufacturing, mass customisation, scheduling
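Both measures reduce to Shannon entropy over cell-state probabilities. The sketch below computes it for a hypothetical scheduled distribution (standing in for structural entropy) and an observed distribution (standing in for operational entropy); the state probabilities are invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum p_i log2 p_i, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

# probability of each state (idle, setup, busy, blocked) for one cell
scheduled = [0.10, 0.10, 0.75, 0.05]   # as planned -> structural entropy
observed  = [0.20, 0.15, 0.50, 0.15]   # as executed -> operational entropy

h_s, h_o = entropy(scheduled), entropy(observed)
print(f"structural H = {h_s:.3f} bits, operational H = {h_o:.3f} bits")
# A larger operational entropy signals a system drifting out of control:
# more information is needed to describe what actually happened.
```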
Procedia PDF Downloads 245172 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT); to achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, the eventual aim of which is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions. The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
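The level-spacing comparison can be reproduced in miniature: draw a GOE random matrix as a stand-in for the measured resonances, compute nearest-neighbour spacings, and compare the histogram with the Wigner surmise P(s) = (π/2) s exp(−πs²/4). The crude global unfolding used here is an assumption made for brevity; a local unfolding would be more faithful.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
a = rng.normal(size=(n, n))
h = (a + a.T) / np.sqrt(2 * n)          # one member of the GOE ensemble
ev = np.sort(np.linalg.eigvalsh(h))

bulk = ev[n // 4: -n // 4]              # keep the spectral bulk only
s = np.diff(bulk)
s /= s.mean()                           # crude unfolding to unit mean spacing

grid = np.linspace(0.0, 3.0, 30)
hist, _ = np.histogram(s, bins=grid, density=True)
mid = 0.5 * (grid[1:] + grid[:-1])
wigner = (np.pi / 2) * mid * np.exp(-np.pi * mid ** 2 / 4)
print("empirical:", np.round(hist[:5], 2))   # level repulsion near s = 0
print("Wigner   :", np.round(wigner[:5], 2))
```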
Procedia PDF Downloads 173171 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested with a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and a guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
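A minimal sketch of one MPC step for the translational part of such a rendezvous is given below, assuming a discrete double-integrator model with thrust limits and solving it as a convex program with cvxpy. The horizon, weights, and initial state are invented, and the dual-quaternion attitude coupling central to the paper is omitted; this only illustrates how constraints are handled explicitly in the convex formulation.

```python
import cvxpy as cp
import numpy as np

dt, N = 1.0, 20                          # time step (s) and horizon length
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])
B = np.vstack([0.5 * dt ** 2 * np.eye(3), dt * np.eye(3)])

x = cp.Variable((6, N + 1))              # relative position and velocity
u = cp.Variable((3, N))                  # thrust acceleration commands
x0 = np.array([100.0, -50.0, 20.0, 0.0, 0.0, 0.0])

cost = cp.sum_squares(x[:, 1:]) + 10 * cp.sum_squares(u)
cons = [x[:, 0] == x0]
for k in range(N):
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.norm(u[:, k], "inf") <= 0.5]  # explicit thruster limit
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print("first thrust command (m/s^2):", np.round(u.value[:, 0], 3))
```

In a receding-horizon loop, only the first command would be applied before re-solving with updated sensor estimates, which is what lets the closed loop absorb the measurement errors and delays discussed above.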
Procedia PDF Downloads 146170 Mental Health Impacts of COVID-19 on Diverse Youth and Families in Canada
Authors: Lucksini Raveendran
Abstract:
Introduction: This mixed-methods study focuses on the experiences of ethnocultural youth and families in Canada, identifying key barriers and opportunities to inform service programming and policies that can better meet their mental health needs during the COVID-19 pandemic and beyond. Methods: The Mental Health Commission of Canada's Headstrong initiative administered a youth survey (April – June 2020) and a family survey (June – August 2020) with total sample sizes of 137 and 481 respondents, respectively. Thematic analysis was conducted to identify key challenges faced, coping strategies used, and help-seeking behaviours. A similar approach was applied to the family survey data, but with a representative sample collated to analyze geographically variable and ethnically diverse subgroups. Results and analysis: Multiple challenges have impacted families, including increased feelings of loneliness and distress from border travel restrictions, especially among those navigating pregnancy alone or managing children with developmental needs, a group that is often understudied. Marginalized groups were disproportionately affected by inequitable access to communication technologies, further deepening the digital divide. Some reported living in congregate homes with regular conflicts, leading to increased anxiety and exposure to violence. For many families, urbanicity and ethnicity played a key role in how they reported coping with feelings of uncertainty while managing work commitments, navigating community resources, fulfilling care responsibilities, and homeschooling children of all ages. Despite these challenges, there was evidence of post-traumatic growth and of building community resiliency. Conclusions and implications for policy, practice, or additional research: There is a need to foster opportunities to promote and sustain mental health, wellness, and resilience for families through social connections. Intersectionality must be embedded in the collection, analysis, and application of data to improve equitable access to evidence-based and recovery-oriented mental health supports among diverse families in Canada. Lastly, future research should address the long-term impacts of COVID-19 travel border restrictions on family wellness.Keywords: mental health, youth mental health, family wellness, health equity
Procedia PDF Downloads 96