Search results for: uncertainty principle

427 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations

Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey

Abstract:

Aerosols directly affect Earth’s radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to the high heterogeneity in aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical to improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging Spectro-Radiometer (MISR) level-2 version 23 aerosol product retrieved at 4.4 km and radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution = 1° × 1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which are used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. Results show strong spatio-temporal variability, with consistently higher AOD values over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during the winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of shortwave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5). Cooling due to aerosols is higher in the absence of clouds. Higher negative ARF values are found over the IGP, consistent with the high aerosol loading over the region. Surface ARF values are negative everywhere in our study domain, with larger magnitudes under clear-sky conditions. AOD from MISR and ARF from CERES are strongly correlated.
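
For readers unfamiliar with the quantity being mapped, shortwave ARF at TOA is commonly computed as the difference between the net flux with and without aerosol. The sketch below illustrates only that bookkeeping; the flux values and function names are hypothetical and are not taken from the study's CERES data.

```python
# Illustrative sketch (not the authors' processing chain): shortwave aerosol
# radiative forcing (ARF) at TOA as the difference between the net flux with
# aerosol and the aerosol-free net flux. A negative value indicates cooling.
# All flux numbers below are hypothetical.

def net_sw_flux(incoming_sw, reflected_sw):
    """Net downward shortwave flux at TOA (W m^-2)."""
    return incoming_sw - reflected_sw

def sw_arf_toa(incoming_sw, reflected_no_aerosol, reflected_with_aerosol):
    """Shortwave ARF at TOA (W m^-2); negative means aerosol cooling."""
    f_clean = net_sw_flux(incoming_sw, reflected_no_aerosol)
    f_aerosol = net_sw_flux(incoming_sw, reflected_with_aerosol)
    return f_aerosol - f_clean

if __name__ == "__main__":
    # Hypothetical clear-sky fluxes over a single grid cell
    print(sw_arf_toa(incoming_sw=340.0,
                     reflected_no_aerosol=99.0,
                     reflected_with_aerosol=115.0))  # -> -16.0 W m^-2 (cooling)
```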

Keywords: aerosol radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES

Procedia PDF Downloads 20
426 Parametric Investigation of Aircraft Door’s Emergency Power Assist System (EPAS)

Authors: Marshal D. Kafle, Jun H. Kim, Hyun W. Been, Kyoung M. Min

Abstract:

Fluid viscous damping systems are well suited for many air vehicles subjected to shock and vibration. These damping systems work on the principle of viscous fluid throttling through an orifice, which creates a large pressure difference between the compression and rebound chambers and produces the required damping force. One application of such systems is in aircraft door systems, to counteract the door’s velocity and safely stop it. In exigency situations like a crash or emergency landing, where the door does not open easily, possibly due to unusual tilting of the fuselage, obstacles, or intrusion of debris obstructing the moving parts of the door, such a system can be combined with other systems to provide the force needed to forcefully open the door and simultaneously stop it securely within the required time, i.e., less than 8 seconds. In the present study, a hydraulic system called a snubber, along with other systems such as the actuator and gas bottle assembly, together known as the emergency power assist system (EPAS), is designed, built, and experimentally studied to check the magnitude of the angular velocity, damping force, and time required to effectively open the door. Whenever needed, the gas pressure from the bottle is released to actuate the actuator and at the same time pull the snubber’s piston to operate the emergency opening of the door. Such an EPAS, installed in the suspension arm of the aircraft door, is studied by explicitly changing parameters such as orifice size, oil level, oil viscosity, and the snubber’s bypass valve gap and spring at varying temperatures to generate the optimum design case. A comparative analysis of the EPAS in several cases is done and conclusions are made. It is found that during emergency conditions, the system opening time and angular velocity show significant improvement over the old design when a snubber with 0.3 mm piston and shaft orifices and a bypass valve gap of 0.5 mm with its original spring is used.
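
The damping principle described above can be summarised by the textbook orifice-flow relation: the pressure drop across the orifice scales with the square of the flow forced through it by the piston. The sketch below is only an illustration of that relation, not the EPAS design calculation; the piston diameter, orifice diameter, oil density, and discharge coefficient are hypothetical.

```python
# Illustrative sketch of the classic orifice-throttling relation behind a
# hydraulic snubber (not the EPAS design calculations). The pressure drop
# across the orifice follows dP = (rho/2) * (Q / (Cd * A_o))**2 with
# Q = A_piston * v, and the damping force is F = dP * A_piston.
# All parameter values below are hypothetical.
import math

def damping_force(piston_velocity, piston_dia, orifice_dia,
                  oil_density=870.0, discharge_coeff=0.6):
    a_piston = math.pi * piston_dia**2 / 4.0           # m^2
    a_orifice = math.pi * orifice_dia**2 / 4.0         # m^2
    q = a_piston * piston_velocity                     # m^3/s through the orifice
    dp = 0.5 * oil_density * (q / (discharge_coeff * a_orifice))**2  # Pa
    return dp * a_piston                               # N

# Example: hypothetical 40 mm piston, 2 mm orifice, piston moving at 0.05 m/s
print(f"{damping_force(0.05, 0.040, 0.002):.0f} N")    # -> ~607 N
```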

Keywords: aircraft door damper, bypass valve, emergency power assist system, hydraulic damper, oil viscosity

Procedia PDF Downloads 398
425 Evidence of a Negativity Bias in the Keywords of Scientific Papers

Authors: Kseniia Zviagintseva, Brett Buttliere

Abstract:

Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things that cause them dissonance and a negative affective state of uncertainty or contradiction. While this is agreed upon by philosophers of science, there are few empirical demonstrations. Here we examine the keywords from papers published by PLOS in 2014 and show with several sentiment analyzers that negative keywords are studied more than positive keywords. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). Counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. In order to find the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was utilized 19.56 times, with half of the keywords being utilized only 1 time and the maximum number of uses being 18,589. The keywords identified as negative were utilized 37.39 times on average, the positive keywords 14.72 times, and the neutral keywords 19.29 times. This difference is only marginally significant, with an F value of 2.82 and a p of .05, but one must keep in mind that more than half of the keywords are utilized only once, artificially increasing the variance and driving the effect size down. To examine more closely, we looked at the top 25 most utilized keywords that have a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, in positions 5 and 13 respectively, with all the rest being identified as negative. ‘Diseases’ is the most studied keyword with 8,790 uses, with ‘cancer’ and ‘infectious’ being the second and fourth most utilized sentiment-laden keywords. The sentiment analysis is not perfect, though, as the words ‘diseases’ and ‘disease’ are split, taking the 1st and 3rd positions. Combined, they remain the most common sentiment-laden keyword, utilized 13,236 times. Beyond splitting words, the sentiment analyzer logs ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word is most commonly utilized as part of ‘health care’, ‘critical care’ or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
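
The comparison described above boils down to grouping keyword usage counts by sentiment class and comparing the group means with an F-test. The sketch below shows that step on a few toy keywords; it is not the authors' pipeline, and the counts and labels are hypothetical.

```python
# Illustrative sketch of the comparison described above (not the authors'
# pipeline): group keyword usage counts by sentiment label and compare the
# group means with a one-way ANOVA F-test. Counts below are toy data.
from collections import defaultdict
from scipy.stats import f_oneway

keyword_counts = {            # keyword -> (sentiment label, number of uses)
    "disease": ("negative", 8790),
    "cancer": ("negative", 6100),
    "infection": ("negative", 2400),
    "care": ("positive", 3100),
    "dynamics": ("positive", 1500),
    "protein": ("neutral", 2900),
    "model": ("neutral", 2200),
}

groups = defaultdict(list)
for _, (label, uses) in keyword_counts.items():
    groups[label].append(uses)

for label, uses in groups.items():
    print(label, sum(uses) / len(uses))              # mean uses per group

f_stat, p_value = f_oneway(groups["negative"], groups["positive"], groups["neutral"])
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```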

Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics

Procedia PDF Downloads 159
424 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow

Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi

Abstract:

Sand production can lead to deposition of particles or erosion. Low production rates resulting in deposition can partially clog systems and cause under-deposit corrosion. Commercially available nonintrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know the detection limits to appropriately plan for cleaning of separation equipment or to examine the risk of erosion. These monitors are based on detecting the acoustic signature of sand particles impacting the pipe walls. The objective of this work is to determine threshold detection limits for commercially available acoustic sand monitors. The minimum threshold sand concentration that can be detected in a pipe is determined as a function of flowing gas and liquid velocities. A large-scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes, and pipe orientations (vertical and horizontal). The manufacturers recommend that the monitors be placed on a bend to maximize the number of particle impacts, so results are shown for monitors placed at the 45 and 90 degree positions of a bend. Acoustic sand monitors that clamp to the outside of the pipe are passive and listen for solid particle impact noise. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for the various flow regimes generated in the horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows, the threshold sand rates are much higher than for the other flow regimes investigated, such as slug and annular flow. However, the background noise generated by the slug flow regime is very high and causes high uncertainty in the detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of the high gas velocities. The effects of monitor placement around elbows in vertical and horizontal pipes are also examined for the 150-micron sand. The results show that the threshold sand rates detected in the vertical orientation are generally lower for all flow regimes investigated.
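
The background-elimination idea above can be pictured as follows: a sand rate counts as detectable once the monitor's acoustic response rises above the flow-only noise floor by some margin. The sketch below illustrates that logic with invented signal values and a mean-plus-three-standard-deviations floor; it is not the vendors' detection algorithm.

```python
# Illustrative sketch of the background-subtraction logic described above
# (not the monitor vendors' algorithms). A sand rate is considered detected
# once the acoustic response exceeds the flow-only background by a chosen
# margin (here mean + 3 standard deviations). All numbers are hypothetical.
import statistics

background_raw = [52.0, 54.1, 53.2, 55.0, 52.8, 53.9]   # flow noise only (a.u.)
detection_floor = (statistics.mean(background_raw)
                   + 3 * statistics.stdev(background_raw))

# Acoustic response measured while injecting increasing sand rates (kg/day)
response_by_rate = {5: 53.5, 10: 54.2, 20: 56.9, 40: 61.3, 80: 70.8}

threshold_rate = next((rate for rate, signal in sorted(response_by_rate.items())
                       if signal > detection_floor), None)
print(f"detection floor = {detection_floor:.1f}, "
      f"threshold sand rate = {threshold_rate} kg/day")
```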

Keywords: acoustic monitor, sand, multiphase flow, threshold

Procedia PDF Downloads 372
423 Household Solid Waste Generation per Capita and Management Behaviour in Mthatha City, South Africa

Authors: Vuyayo Tsheleza, Simbarashe Ndhleve, Christopher Mpundu Musampa

Abstract:

Mismanagement of waste is continuously emerging as a rising malpractice in most developing countries, especially in fast-growing cities. Household solid waste in Mthatha has been reported to be one of the problems facing the city and is overwhelming local authorities, as it is beyond the environmental and management capacity of the existing waste management system. This study estimates per capita waste generation, the quantity of different waste types generated by inhabitants of formal and informal settlements in Mthatha, as well as waste management practices in the aforementioned socio-economic strata. A total of 206 households were systematically selected for the study using stratified random sampling categorized into formal and informal settlements. Data on household waste generation rate, composition, awareness, and household waste management behaviour and practices were gathered through mixed methods. Sampled households from both formal and informal settlements, with a total of 684 people, generated 1,949 kg per week. This translates to 2.84 kg per capita per week. On average, the rate of solid waste generation per capita was 0.40 kg per day for a person living in an informal settlement and 0.56 kg per day for a person living in a formal settlement. Recorded in descending order, food waste accounted for the largest proportion of generated waste at approximately 23.7%, followed by disposable nappies at 15%, paper and cardboard at 13.34%, glass at 13.03%, metals at 11.99%, plastics at 11.58%, residue at 5.17%, textiles at 3.93%, and leather and rubber at 2.28% as the least generated waste type. Different waste management practices were reported in formal and informal settlements, with formal settlements proving to be more concerned about environmental management than their informal counterparts. Understanding attitudes and perceptions on waste management, waste types, and per capita solid waste generation rates can help evolve appropriate waste management strategies based on the principles of reduce, re-use, recycle, and environmentally sound disposal, and can also assist in projecting future waste generation rates. These results can be utilized as input when designing growing cities’ waste management plans.
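
The headline per-capita figures follow from simple ratios of the reported totals, as the small sketch below shows; any tiny difference from the published 2.84 kg/week value would come from rounding in the underlying survey totals.

```python
# Simple per-capita bookkeeping behind the figures above (illustrative only;
# small differences from the published 2.84 kg/week value would stem from
# rounding in the underlying survey totals).
weekly_total_kg = 1949.0      # waste generated by the sampled households
population = 684              # people in the sampled households

per_capita_week = weekly_total_kg / population
per_capita_day = per_capita_week / 7.0
print(f"{per_capita_week:.2f} kg per capita per week")   # ~2.85
print(f"{per_capita_day:.2f} kg per capita per day")     # ~0.41
```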

Keywords: awareness, characterisation, per capita, quantification

Procedia PDF Downloads 267
422 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia

Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah

Abstract:

Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of depositional origin, while secondary porosity results from chemical reactivity through diagenetic processes. Understanding the carbonate pore system is an integral part of hydrocarbon exploration. However, current porosity classification schemes are too limited to adequately predict the petrophysical properties of reservoirs with various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity and permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point-counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type, whereas microporosity corresponds to roughly 40 to 50% of the total porosity. These Miocene carbonates have been shown to contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons; neglecting its impact can increase the uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, the classification can be improved by including pore types and pore structures, divided into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
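
The macro/micro split described above amounts to point-counting macropores in thin section and subtracting that fraction from the total porosity. The sketch below illustrates that arithmetic with hypothetical counts, not the Central Luconia measurements.

```python
# Illustrative sketch of the macro/micro porosity split described above
# (hypothetical counts, not the Central Luconia data): macroporosity is
# estimated by point counting in thin section and microporosity is the
# remainder of the total porosity.
def porosity_split(pore_hits, total_points, total_porosity_frac):
    macro = pore_hits / total_points                 # macroporosity fraction
    micro = max(total_porosity_frac - macro, 0.0)    # microporosity fraction
    return macro, micro

macro, micro = porosity_split(pore_hits=42, total_points=300,
                              total_porosity_frac=0.28)
print(f"macro = {macro:.1%}, micro = {micro:.1%}, "
      f"micro share = {micro / (macro + micro):.0%} of total porosity")
```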

Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir

Procedia PDF Downloads 120
421 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator

Authors: Yi-Chun Kuo, Jo-Chieh Lin

Abstract:

The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing an ever-changing market environment in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers, and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified and supply chain vulnerability has surged, resulting in adverse effects on supply chain performance. Thus, this study uses supply chain vulnerability as a moderating variable and applies structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. Data were collected through questionnaires administered to the management level of enterprises in Taiwan and China; 149 questionnaires were received. The results of confirmatory factor analysis show that the path coefficients of supply chain strategy on supply chain integration and supply chain performance are positive (0.497, t = 4.914; 0.748, t = 5.919), indicating a significantly positive effect. Supply chain integration is also significantly positively correlated with supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on the paths from supply chain strategy and supply chain integration to supply chain performance are significant (7.407; 4.687). In Taiwan, 97.73% of enterprises are small- and medium-sized enterprises (SMEs) focusing on original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and respond to market changes, these enterprises especially focus on supply chain flexibility and their integration with upstream and downstream enterprises. According to the observations of this research, the effect of supply chain vulnerability on supply chain performance is significant, so enterprises need to attach great importance to the management of supply chain risk and conduct risk analysis on their suppliers in order to formulate response strategies for emergency situations. At the same time, risk management should be incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on overall supply chain performance.
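
The moderation logic above can be pictured with a much simpler stand-in than the full SEM: a regression in which vulnerability enters through interaction terms with strategy and integration. The sketch below uses a simulated survey dataframe with hypothetical variable names and effect sizes; it illustrates the moderation set-up, not the study's actual model or data.

```python
# Simplified stand-in for the moderation analysis described above (the study
# itself used structural equation modelling): an OLS regression in which
# supply chain vulnerability moderates the effects of strategy and
# integration on performance via interaction terms. `df` is a hypothetical
# survey dataframe with standardized scale scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 149
df = pd.DataFrame({
    "strategy": rng.normal(size=n),
    "integration": rng.normal(size=n),
    "vulnerability": rng.normal(size=n),
})
df["performance"] = (0.5 * df["strategy"] + 0.2 * df["integration"]
                     - 0.3 * df["strategy"] * df["vulnerability"]
                     + rng.normal(scale=0.5, size=n))

model = smf.ols(
    "performance ~ strategy * vulnerability + integration * vulnerability",
    data=df,
).fit()
print(model.summary().tables[1])   # coefficients incl. interaction (moderation) terms
```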

Keywords: supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling

Procedia PDF Downloads 288
420 Bodily Liberation and Spiritual Redemption of Black Women in Beloved: From the Perspective of Ecofeminism

Authors: Wang Huiwen

Abstract:

Since its release, Toni Morrison's novel Beloved has garnered significant international recognition, and its adaptation of a historical account has profoundly affected readers and scholars, evoking a visceral understanding of the suffering endured by black slaves. The ecofeminist approach has garnered more attention in recent times. The emergence of ecofeminism may be attributed to the feminist movement, which has subsequently evolved into several branches, including cultural ecofeminism, social ecofeminism, and socialist ecofeminism, each of which is developing its own specific characteristics. The many branches hold differing perspectives, yet they all converge on a key principle: the interconnectedness between the subjugation of women and the exploitation of nature can be traced back to a common underlying cognitive framework. Scholarly investigations into the novel Beloved have primarily centered on the cultural interpretations around the emancipation of African American women, with a predominant lens rooted in cultural ecofeminism. This thesis aims to analyze Morrison's feminist beliefs in the novel Beloved by integrating socialist and cultural ecofeminist perspectives, which seeks to challenge the limitations of essentialism within ecofeminism while also proposing a strategy to address exploitation and dismantle oppressive structures depicted in Beloved. This thesis examines the white patriarchal oppression system underlying the relationships between men and women, blacks and whites, and man and nature as shown in the novel. What the black women have been deprived of compared with the black men, white women and white men is a main clue of this research, while nature is a key complement of each chapter for their loss. The attainment of spiritual redemption and ultimate freedom is contingent upon the social revolution that enables bodily emancipation, both of which are indispensable for black women. The weighty historical pains, traumatic recollections, and compromised sense of self prompted African slaves to embark on a quest for personal redemption. The restoration of the bond between black men and women, as well as the relationship between black individuals and nature, is a clear and undeniable pathway towards the final freedom of black women in the novel Beloved.

Keywords: beloved, ecofeminism, black women, nature, essentialism

Procedia PDF Downloads 37
419 Minority Language Policy and Planning in Manchester, Britain

Authors: Mohamed F. Othman

Abstract:

Manchester, Britain, has become a destination for immigrants from different parts of the world. As a result, it is currently home to over 150 different ethnic languages. The present study investigates minority language policy and planning at the micro-level of the city. In order to get an in-depth investigation of such a policy, it was decided to cover it from two angles: the first is the policy-making process. This was aimed at getting insights into how decisions regarding the provision of government services in minority languages are taken and what criteria are employed. The second angle is the service providers, i.e., the different departments in Manchester City Council (MCC), the NHS, the courts, the police, etc., to obtain information on the actual provision of services. Data was collected through semi-structured interviews with personnel representing different departments in MCC, solicitors, interpreters, etc.; through the internet, e.g., the websites of MCC, the NHS, the courts, and the police; and via personal observation of the provision of community languages in government services. The results show that Manchester’s language policy is formulated around two concepts that work simultaneously: one is concerned with providing services in community languages in order to help minorities manage their lives until they acquire English, and the other with helping the integration of minorities by encouraging them to learn English. In this regard, different government services are provided in community languages, though to varying degrees, depending on the numerical strength of each individual language. Thus, it is concluded that there is awareness in MCC and other government agencies working in Manchester of the linguistic diversity of the city, and there are serious attempts to meet this diversity in their services. It is worth mentioning that providing such services in minority languages is not meant to support linguistic diversity, but rather to maintain the legal right to equal opportunities among the residents of Manchester and to avoid any misunderstanding that may result from the language barrier, especially in areas such as hospitals, courts, and policing. There is actually no explicitly stated language policy regarding minorities in Manchester; rather, there is an implied or covert policy resulting from factors that are not explicitly documented. That is, there are guidelines from the central government which emphasize the principle of equal opportunities, and the implementation of such guidelines requires providing services in the different ethnic languages.

Keywords: community language, covert language policy, micro-language policy and planning, minority language

Procedia PDF Downloads 249
418 Energy Storage Modelling for Power System Reliability and Environmental Compliance

Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari

Abstract:

Reliable and economic operation of power systems is becoming extremely challenging with the large-scale integration of renewable energy sources, due to the intermittency and uncertainty associated with renewable power generation. It is, therefore, important to make a quantitative risk assessment and explore the potential resources to mitigate such risks. Probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and compressed air energy storage (CAES), incorporating specific charge/discharge performance and failure characteristics suitable for probabilistic risk assessment in power system operation and planning, are presented in this paper. The proposed methodology used in FESS modelling offers flexibility to accommodate different configurations of plant topology. It is perceived that CAES has high potential for grid-scale application, and a hybrid approach is proposed, which embeds a Monte Carlo simulation (MCS) method in an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, as well as to assess their contributions to facilitating wind integration during different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. The impacts of the failure rates of the critical components of the ESS on the expected state of charge (SOC) and the performance of the different types of ESS during operation are illustrated with selected studies on the test system. The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators arrive at effective and efficient policies, investment decisions, and operating strategies for the planning and operation of power systems with large penetrations of renewable energy sources.
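
In the spirit of the hybrid MCS-plus-analytical approach mentioned above, the sketch below samples random component outages over a year and reports the expected SOC and availability of a single toy storage unit. The failure rate, repair time, sizes, and dispatch rule are all hypothetical and do not represent the authors' CAES or FESS models.

```python
# Minimal Monte Carlo sketch, in the spirit of the hybrid approach described
# above (not the authors' CAES/FESS models): sample random component failures
# over a horizon and estimate the expected state of charge (SOC) and
# availability of a simple storage unit. All rates and sizes are hypothetical.
import random

def simulate_ess(hours=8760, trials=200, fail_rate=2e-4, repair_hours=24,
                 capacity_mwh=100.0, power_mw=25.0):
    soc_sum, up_hours = 0.0, 0
    for _ in range(trials):
        soc, down_left = 0.5 * capacity_mwh, 0
        for _ in range(hours):
            if down_left > 0:
                down_left -= 1                     # unit is under repair
            elif random.random() < fail_rate:
                down_left = repair_hours           # component outage starts
            else:
                up_hours += 1
                # toy dispatch: charge or discharge with equal probability
                delta = power_mw if random.random() < 0.5 else -power_mw
                soc = min(max(soc + delta, 0.0), capacity_mwh)
            soc_sum += soc
    print(f"expected SOC = {soc_sum / (trials * hours):.1f} MWh")
    print(f"availability = {up_hours / (trials * hours):.3f}")

simulate_ess()
```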

Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation

Procedia PDF Downloads 98
417 The Constitutional Rights of a Child to a Clean and Healthy Environment: A Case Study in the Vaal Triangle Region

Authors: Christiena Van Der Bank, Marjone Van Der Bank, Ronelle Prinsloo

Abstract:

The constitutional right to a healthy environment and the constitutional duty imposed on the state to actively protect the environment give rise to specific duties to prevent pollution and ecological degradation and to promote conservation. The aim of this paper is to draw attention to the relationship between children's rights and the environment. The focus is on analysing government’s responses, as mandated by section 24 of the Bill of Rights, for ensuring the right to a clean and healthy environment. The principle of sustainability of the environment encompasses the notion of equity, and harm to the environment affects present as well as future generations. Section 24 obliges the state to ensure that the legacy of future generations is protected, an obligation that has been said to be part of the common law. The environment is an elusive and wide concept that can mean different things to different people depending on the context in which it is used, for example, clean drinking water or safe food. An extensive interpretation of the term environment would include almost everything that may positively or negatively influence the quality of human life. The analysis will include assessing policy measures, legislation, budgetary measures, and other measures taken by the government in order to progressively meet its constitutional obligation. The opportunity for a child to grow up in a healthy and safe environment is extremely unjustly distributed. Without a realignment of political, legal, and economic conditions, this situation will not fundamentally change. As a developing country that needs to meet the demands of social transformation and economic growth while at the same time expediting its ability to compete in global markets, South Africa will inevitably embark on developmental programmes as a measure for sustainable development. The courts would have to inquire into the reasonableness of those measures. Environmental threats to children’s rights must be identified, taking into account children’s specific needs and vulnerabilities, their dependence, and their marginalisation. The obligations of states and violations of rights must be made more visible to the general public.

Keywords: environment, children rights, pollution, healthy, violation

Procedia PDF Downloads 147
416 Intellectual Property Rights Reforms and the Quality of Exported Goods

Authors: Gideon Ndubuisi

Abstract:

It is widely acknowledged that the quality of a country’s exports matters more decisively than the quantity it exports. Hence, understanding the drivers of exported goods’ quality is a relevant policy question. Among other things, product quality upgrading is a venture with considerable cost uncertainty for an entrepreneur. Once a product is successfully upgraded, however, others can imitate the product, and hence the returns to the pioneer entrepreneur are socialized. Along this line, a government policy such as intellectual property rights (IPRs) protection, which lessens the non-appropriability problem and incentivizes cost-discovery investments, becomes both a panacea in addressing the market failure and a sine qua non for an entrepreneur to engage in product quality upgrading. In addition, product quality upgrading involves complex tasks which often require a lot of knowledge and technology sharing beyond the bounds of the firm, thereby creating room for knowledge spillovers and imitation. Without an institution that protects upstream suppliers of knowledge and technology, technology masking occurs, which bids up marginal production cost, and product quality falls. Despite these clear associations between IPRs and product quality upgrading, the surging literature on the drivers of the quality of exported goods has proceeded almost in isolation from IPRs protection as a determinant. Consequently, the current study uses a difference-in-differences method to evaluate the effects of IPRs reforms on the quality of exported goods in 16 developing countries over the sample period 1984-2000. The study finds weak evidence that IPRs reforms increase the quality of all exported goods. When the industries are sorted into high- and low-patent-sensitive industries, however, we find strong indicative evidence that IPRs reform increases the quality of exported goods in high-patent-sensitive sectors, both in absolute terms and relative to the low-patent-sensitive sectors, in the post-reform period. We also obtain strong indicative evidence that it brought the quality of exported goods in the high-patent-sensitive sectors closer to the quality frontier. Accounting for time-duration effects, these observed effects grow over time. The results are also largely consistent when we consider the sophistication and complexity of exported goods rather than just quality upgrades.
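
A difference-in-differences design of the kind described above compares treated and control countries before and after the reform through an interaction term. The sketch below shows that specification on a simulated country-year panel; the reform year, country split, effect sizes, and variable names are hypothetical and the model is not the paper's exact specification.

```python
# Hedged sketch of a difference-in-differences specification of the kind
# described above (not the paper's exact model): export quality regressed on
# a treated x post interaction for a hypothetical country-year panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for country in range(16):
    treated = int(country < 8)                 # hypothetical: half reform IPRs
    for year in range(1984, 2001):
        post = int(year >= 1992)               # hypothetical reform year
        quality = (1.0 + 0.1 * treated + 0.2 * post
                   + 0.3 * treated * post      # the DiD effect of interest
                   + rng.normal(scale=0.2))
        rows.append((country, year, treated, post, quality))

panel = pd.DataFrame(rows, columns=["country", "year", "treated", "post", "quality"])
did = smf.ols("quality ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(did.params["treated:post"])              # estimated reform effect
```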

Keywords: exports, export quality, export sophistication, intellectual property rights

Procedia PDF Downloads 90
415 Dual Duality for Unifying Spacetime and Internal Symmetry

Authors: David C. Ni

Abstract:

The current efforts towards a Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory, and the related formalisms. In the geometric approaches that extend General Relativity, the efforts establish global and local invariance embedded into metric formalisms, whereby additional dimensions are constructed for unifying canonical formulations, such as the Hamiltonian and Lagrangian formulations. The approaches extending Quantum Mechanics adopt the symmetry principle to formulate algebra-group theories, which evolved from the Maxwell formulation to the Yang-Mills non-abelian gauge formulation and thereafter manifested in the Standard Model. This thread of efforts has been constructing super-symmetry for mapping fermion and boson as well as gluon and graviton. The efforts of String Theory have currently evolved to the so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches of the above three formalisms as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques, even though these terminologies are defined diversely and often generally coined as duality. In this paper, we first classify these dualities from the perspective of physics. We then examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, Groupoid-Categorization, and others. Based on the fundamental theorems of algebra, we argue that, rather than imposing effective constraints on different algebras and the related extensions (which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance), a new addition is warranted: a momentum-angular momentum duality at the level of electromagnetic duality, for rationalizing the duality algebras. We then characterize this duality numerically, with an attempt to address some unsolved problems in physics and astrophysics.
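
For reference, the benchmark the abstract invokes, vacuum electromagnetic duality, is the standard statement that the source-free Maxwell equations are invariant under a rotation mixing the electric and magnetic fields; a common form is shown below. This is textbook material and not the proposed momentum-angular momentum duality itself.

```latex
% Standard vacuum electromagnetic duality (the benchmark referred to above),
% not the proposed momentum--angular-momentum duality.
\[
\begin{aligned}
\mathbf{E} &\to \mathbf{E}\cos\theta + c\,\mathbf{B}\sin\theta, \\
c\,\mathbf{B} &\to c\,\mathbf{B}\cos\theta - \mathbf{E}\sin\theta,
\end{aligned}
\]
% which leaves the source-free Maxwell equations invariant for every angle
% \(\theta\); the case \(\theta = \pi/2\) is the familiar exchange
% \(\mathbf{E} \to c\mathbf{B}\), \(c\mathbf{B} \to -\mathbf{E}\).
```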

Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum

Procedia PDF Downloads 360
414 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of chemical dermal exposure assessment models for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose-Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM), and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of a dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results evaluated for all industries belong to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency show a positive correlation. There is a positive correlation between skin exposure, relative working time, and the working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time show a positive correlation. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models show poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to the results, both the DERM and RISKOFDERM models are suitable for these selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
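
The validation step described above, correlating each semi-quantitative model's scores with the quantitative exposure estimates, is a straightforward Pearson test, as sketched below with invented score vectors; the numbers are not the study's data.

```python
# Illustrative sketch of the validation step described above (toy numbers,
# not the study's scores): correlate each semi-quantitative model's ranking
# with the quantitative exposure estimates using Pearson's test.
from scipy.stats import pearsonr

quantitative = [0.8, 1.6, 2.4, 3.1, 4.0]      # predicted dermal exposure (a.u.)
model_scores = {
    "DERM":       [12, 20, 31, 38, 47],
    "RISKOFDERM": [ 5,  9, 15, 18, 24],
    "STOFFEN":    [22, 18, 30, 21, 26],
    "DREAM":      [14, 25, 17, 28, 22],
}

for name, scores in model_scores.items():
    r, p = pearsonr(quantitative, scores)
    print(f"{name:11s} r = {r:.2f}, p = {p:.3f}")
```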

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 141
413 The Role of Dialogue in Shared Leadership and Team Innovative Behavior Relationship

Authors: Ander Pomposo

Abstract:

Purpose: The aim of this study was to investigate the impact that dialogue has on the relationship between shared leadership and innovative behavior and the importance of dialogue in innovation. This study contributes to the literature by providing theorists and researchers a better understanding of how to move forward in the study of moderator variables in the relationship between shared leadership and team outcomes such as innovation. Methodology: A systematic review of the literature, originally adopted from the medical sciences but also used in management and leadership studies, was conducted to synthesize research in a systematic, transparent, and reproducible manner. A final sample of 48 empirical studies was scientifically synthesized. Findings: Shared leadership gives a better solution to team management challenges and goes beyond the classical, hierarchical, or vertical leadership models based on the individual leader approach. One of the outcomes that emerges from shared leadership is team innovative behavior. To intensify the relationship between shared leadership and team innovative behavior, and to understand when it is most effective, the moderating effects of other variables in this relationship should be examined. This synthesis of the empirical studies revealed that dialogue is a moderator variable that has an impact on the relationship between shared leadership and team innovative behavior when leadership is understood as a relational process. Dialogue is an activity between at least two speech partners trying to fulfill a collective goal and is a way of living open to people and ideas through interaction. Dialogue is productive when team members engage relationally with one another. When this happens, participants are more likely to take responsibility for the tasks they are involved in and for the relationships they have with others. In this relational engagement, participants are likely to establish high-quality connections with a high degree of generativity. This study suggests that organizations should facilitate dialogue among team members under shared leadership, which has a positive impact on innovation and offers a more adaptive framework for the leadership that is needed in teams working on complex tasks. These results uncover the need for more research on the role that dialogue plays in contributing to important organizational outcomes such as innovation. Case studies describing both best practices and obstacles of dialogue in team innovative behavior are necessary to gain more detailed insight into the field. It will be interesting to see how all these fields of research evolve and are implemented in dialogue practices in organizations that use team-based structures to deal with uncertainty, fast-changing environments, globalization, and increasingly complex work.

Keywords: dialogue, innovation, leadership, shared leadership, team innovative behavior

Procedia PDF Downloads 152
412 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity

Authors: Denise Bianco

Abstract:

Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts, working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions made on workplace creativity and related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity and unconventional agile methods such as Art Thinking and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations to propose a redefinition of Artist-in-Residence and their potential impact on organisational creativity. The result is a re-definition of embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences. Second, the definition of embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, at the very essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.

Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity

Procedia PDF Downloads 67
411 The Effect of Innovation Capability and Activity, and Wider Sector Condition on the Performance of Malaysian Public Sector Innovation Policy

Authors: Razul Ikmal Ramli

Abstract:

Successful implementation of innovation is a key success formula of a great organization. Innovation ensures competitive advantage as well as the sustainability of an organization in the long run. In the public sector context, the role of innovation is crucial to resolving dynamic challenges of public services, such as operating under economic uncertainty with limited resources, increasing operating expenditure, and growing expectations among citizens for high-quality, swift, and reliable public services. Acknowledging the prospect of innovation as a tool for achieving a high-performance public sector, the Malaysian New Economic Model launched in 2011 intensified government commitment to foster innovation in the public sector. Since 2011, various initiatives have been implemented; however, little is known about the performance of public sector innovation in Malaysia. Hence, applying national innovation system theory as a pillar, the research objectives focused on measuring the level of innovation capabilities, the wider public sector condition for innovation, innovation activity, and innovation performance, as well as examining the relationships between the four constructs, with innovation performance as the dependent variable. For that purpose, 1,000 sets of self-administered survey questionnaires were distributed to heads of units and divisions of 22 federal ministries and central agencies in the administrative, security, social, and economic sectors. Based on 456 returned questionnaires, the descriptive analysis found that innovation capabilities, the wider sector condition, innovation activities, and innovation performance were rated by respondents at a moderately high level. Based on structural equation modelling, innovation performance was found to be influenced by innovation capability, the wider sector condition for innovation, and innovation activity. In addition, the analysis found innovation activity to be the most important construct influencing innovation performance. The study concluded that the innovation policy implemented in the public sector of Malaysia sparked motivation to innovate and resulted in various forms of innovation; however, the overall achievements were not as high as expected. Thus, the study suggests the formulation of a dedicated policy to strengthen the innovation capability, wider public sector condition for innovation, and innovation activity of the Malaysian public sector. Furthermore, strategic intervention needs to focus on innovation activity, as this construct plays an important role in determining innovation performance. The success of public sector innovation implementation will not only benefit citizens but will also spearhead the competitiveness and sustainability of the country.

Keywords: public sector, innovation, performance, innovation policy

Procedia PDF Downloads 252
410 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work, we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e., bacteria, yeasts, or antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property for the considered application context. In particular, a robust schedule will not collapse immediately when a cell of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g., makespan, lateness) should change little, if at all. In this research work, we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints, and time constraints. In particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modelled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e., the completion times of a culture tasks) lose their values (i.e., the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e., the completion times of backup culture tasks) while changing at most b other variables (i.e., delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
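
To make the base scheduling model concrete, the sketch below writes a tiny discrete-time task model, precedence, due dates, a single shared resource, and makespan minimization, in Google OR-Tools CP-SAT. It is only an illustration of the kind of COP the paper builds on; the task data are hypothetical and the (a, b) super-solution repair machinery, which is the paper's contribution, is not reproduced here.

```python
# Minimal CP sketch of the base scheduling model described above, written
# with Google OR-Tools CP-SAT (the paper's (a, b) super-solution machinery is
# not reproduced). Task names, durations and due dates are hypothetical.
from ortools.sat.python import cp_model

tasks = {           # name: (duration, due_date)
    "culture": (5, 10),
    "harvest": (2, 12),
    "purify":  (4, 20),
}
precedences = [("culture", "harvest"), ("harvest", "purify")]
horizon = 30

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, {}
for name, (dur, due) in tasks.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], dur, ends[name], name)
    model.Add(ends[name] <= due)                    # due-date constraint

for before, after in precedences:
    model.Add(starts[after] >= ends[before])        # precedence constraint

model.AddNoOverlap(intervals.values())              # single shared resource

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends.values())
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in tasks:
        print(name, solver.Value(starts[name]), "->", solver.Value(ends[name]))
```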

Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries

Procedia PDF Downloads 588
409 Praxis-Oriented Pedagogies for Pre-Service Teachers: Teaching About and For Social Justice Through Equity Literature Circles

Authors: Joanne Robertson, Awneet Sivia

Abstract:

Preparing aspiring teachers to become advocates for social justice reflects a fundamental commitment for teacher education programs in Canada to create systemic educational change. The goal is ultimately to address inequities in K-12 education for students from multiple identity groups that have historically been marginalized and oppressed in schools. Social justice is described as an often undertheorized and vague concept in the literature, which increases the risk that teaching for social justice remains a lofty goal. Another concern is that the social justice agenda in teacher education in North America ignores pedagogies related to subject-matter knowledge and discipline-based teaching methods. The question surrounding how teacher education programs can address these issues forms the basis for the research undertaken in this study. The paper focuses on a qualitative research project that examines how an Equity Literature Circles (ELC) framework within a language arts methods course in a Bachelor of Education program may help pre-service teachers better understand the inherent relationship between literacy instructional practices and teaching about and for social justice. Grounded in the Freireian (2018) principle of praxis, this study specifically seeks to understand the impact of Equity Literature Circles on pre-service teachers’ understanding of current social justice issues (reflection), their development of professional competencies in literacy instruction (practice), and their identity as advocates of social justice (action) who address issues related to student diversity, equity, and human rights within the English Language Arts program. In this paper presentation, participants will be provided with an overview of the Equity Literature Circle framework, a summary of key findings and recommendations from the qualitative study, an annotated bibliography of suggested Young Adult novels, and opportunities for questions and dialogue.

Keywords: literacy, language, equity, social justice, diversity, human rights

Procedia PDF Downloads 41
408 A Case Study of the Saudi Arabian Investment Regime

Authors: Atif Alenezi

Abstract:

The low global oil price poses economic challenges for Saudi Arabia, as oil revenues still make up a large percentage of its Gross Domestic Product (GDP). At the end of 2014, the Consultative Assembly considered a report from the Committee on Economic Affairs and Energy which highlighted that the economy had not been successfully diversified. There thus exist ample reasons for modernising the Foreign Direct Investment (FDI) regime, primarily to achieve and maintain prosperity and facilitate peace in the region. Therefore, this paper aims to identify specific problems with the existing FDI regime in Saudi Arabia and, subsequently, some solutions to those problems. Saudi Arabia adopted its first specific legislation in 1956, which imposed significant restrictions on foreign ownership. Since then, Saudi Arabia has modernised its FDI framework with the passing of the Foreign Capital Investment Act 1979 and the Foreign Investment Law 2000, together with the accompanying Executive Rules 2000 and the recently adopted Implementing Regulations 2014. Nonetheless, the legislative provisions contain various gaps, and the failure to address these gaps creates risks and uncertainty for investors. For instance, the important topic of mergers and acquisitions has not been addressed in the Foreign Investment Law 2000. The circumstances in which expropriation can be considered to be in the public interest have not been defined. Moreover, Saudi Arabia has not entered into many bilateral investment treaties (BITs). This has an effect on the investment climate, as foreign investors are not afforded typical rights. An analysis of the BITs which have been entered into reveals that the national treatment standard and stabilisation, umbrella, or renegotiation provisions have not been included. This is problematic since the 2000 Act does not spell out the applicable standard in accordance with which foreign investors should be treated. Moreover, the most-favoured-nation (MFN) and fair and equitable treatment (FET) standards have not been put on a statutory footing. Whilst the Arbitration Act 2012 permits investment disputes to be internationalised, restrictions have been retained. The effectiveness of international arbitration is further undermined because Saudi Arabia does not enforce non-domestic arbitral awards which contravene public policy. Furthermore, the reservation to the Convention on the Settlement of Investment Disputes allows Saudi Arabia to exclude petroleum and sovereign disputes. Interviews with foreign investors who operate in Saudi Arabia highlight additional issues. Saudi Arabia ought not to procrastinate on far-reaching structural reforms.

Keywords: FDI, Saudi, BITs, law

Procedia PDF Downloads 383
407 Method for Identification of Through Defects of Polymer Films Applied onto Metal Parts

Authors: Yu A. Pluttsova, O. V. Vakhnina, K. B. Zhogova

Abstract:

Nowadays, many devices operate under conditions of enhanced humidity, temperature drops, fog, and vibration. To ensure long-term and uninterrupted equipment operation under adverse conditions, moisture-proof films are applied to products and electronic components, which helps to prevent corrosion and short circuits, allowing a significant increase in device lifecycle. The reliability of such moisture-proof films is mainly determined by their coating uniformity, without gaps and cracks. Unprotected product edges, as well as pores in the films, can cause device failure during operation. The objective of this work was to develop an effective, affordable, and economically justified method for determining the presence of through defects in protective polymer films on the surface of parts made of iron and its alloys. As a diagnostic reagent, an aqueous solution of potassium ferricyanide (III) in hydrochloric acid is proposed, which changes colour from yellow to blue according to the reactions Fe⁰ → Fe²⁺ and 4Fe²⁺ + 3[Fe³⁺(CN)₆]³⁻ → Fe³⁺₄[Fe²⁺(CN)₆]₃. A basic scheme of the technological process for determining the presence of through defects in polymer films on the surface of parts made of iron and its alloys was developed. Solutions with different diagnostic reagent compositions in water were studied: from 0.1 to 25 mass % potassium ferricyanide (III) and from 5 to 25 mass % hydrochloric acid. The optimal component ratio was chosen. The developed method consists in submerging a film-coated part into a vessel with the diagnostic reagent. In the zone of a through defect in the polymer film, the part material (iron) interacts with the potassium ferricyanide (III) and the colour changes to blue. Pilot samples were tested by the developed method for the presence of through defects in the moisture-proof coating. It was revealed that all the studied parts had through defects in the polymer film coating. Thus, the claimed method efficiently reveals through defects in polymer film coatings on parts made of iron or its alloys, while being affordable and economically justified.

Keywords: diagnostic reagent, metal parts, polymer films, through defects

Procedia PDF Downloads 124
406 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes

Authors: Alan Luo, Hunter N. B. Moseley

Abstract:

Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, be it derived from x-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for x-ray crystallography or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality was detected across x-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher’s exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain outlier cutoff. The results revealed a consistent general effect from region-specific normalized b-factors but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distance as compared to the b-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.
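
Each potential factor above is evaluated with a Fisher's exact test on a 2x2 table of outlier versus non-outlier regions against a factor cutoff. The sketch below shows that test on toy counts; the numbers are not the PDB-derived data.

```python
# Illustrative sketch of one of the Fisher's exact tests described above
# (toy counts, not the PDB-derived data): regional-quality outliers vs.
# non-outliers cross-tabulated against a normalized b-factor cutoff.
from scipy.stats import fisher_exact

#                high b-factor   low b-factor
table = [[  60,            40],    # outlier regions (e.g., top 5%)
         [ 700,          1200]]    # non-outlier regions

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```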

Keywords: biomacromolecular structure, coenzyme, electron density discrepancy analysis, x-ray crystallography

Procedia PDF Downloads 96
405 Is the Addition of Computed Tomography with Angiography Superior to a Non-Contrast Neuroimaging Only Strategy for Patients with Suspected Stroke or Transient Ischemic Attack Presenting to the Emergency Department?

Authors: Alisha M. Ebrahim, Bijoy K. Menon, Eddy Lang, Shelagh B. Coutts, Katie Lin

Abstract:

Introduction: Frontline emergency physicians require clear and evidence-based approaches to guide neuroimaging investigations for patients presenting with suspected acute stroke or transient ischemic attack (TIA). Various forms of computed tomography (CT) are currently available for initial investigation, including non-contrast CT (NCCT), CT angiography head and neck (CTA), and CT perfusion (CTP). However, there is uncertainty around the optimal imaging choice in terms of cost-effectiveness, particularly for minor or resolved neurological symptoms. In addition to the cost of CTA and CTP testing, there is also concern about increased incidental findings, which may contribute to the burden of overdiagnosis. Methods: In this cross-sectional observational study, 586 anonymized triage and diagnostic imaging (DI) reports were analyzed for neuroimaging orders completed on patients presenting to adult emergency departments (EDs) with a suspected stroke or TIA from January to December 2019. The primary outcome of interest was the diagnostic yield of NCCT+CTA compared to NCCT alone for patients presenting to urban academic EDs with Canadian Emergency Department Information System (CEDIS) complaints of “symptoms of stroke” (specifically acute stroke and TIA indications). DI reports were coded into 4 pre-specified categories (endorsed by a panel of stroke experts): no abnormalities, clinically significant findings (requiring immediate or follow-up clinical action), incidental findings (not meeting prespecified criteria for clinical significance), and both significant and incidental findings. Standard descriptive statistics were performed. A two-sided p-value <0.05 was considered significant. Results: 75% of patients received NCCT+CTA imaging, 21% received NCCT alone, and 4% received NCCT+CTA+CTP. The diagnostic yield of NCCT+CTA imaging for prespecified clinically significant findings was 24%, compared to only 9% in those who received NCCT alone. The proportion of incidental findings was 30% in the NCCT only group and 32% in the NCCT+CTA group. CTP did not significantly increase the yield of significant or incidental findings. Conclusion: In this cohort of patients presenting with suspected stroke or TIA, an NCCT+CTA neuroimaging strategy had a higher diagnostic yield for clinically significant findings than NCCT alone without significantly increasing the number of incidental findings identified.
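As an illustration of the yield comparison reported above, the sketch below runs a two-proportion z-test on counts back-calculated approximately from the stated percentages (586 reports, 75% NCCT+CTA vs 21% NCCT alone, yields of 24% vs 9%); these reconstructed counts are assumptions, not the study's raw data, and the original analysis used standard descriptive statistics rather than this exact test.

```python
# Approximate two-proportion comparison of diagnostic yield
# (NCCT+CTA vs NCCT alone); counts are back-calculated placeholders.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_ncct_cta = round(0.75 * 586)    # ~440 patients imaged with NCCT+CTA
n_ncct_only = round(0.21 * 586)   # ~123 patients imaged with NCCT alone
significant = np.array([round(0.24 * n_ncct_cta),   # clinically significant findings
                        round(0.09 * n_ncct_only)])
group_sizes = np.array([n_ncct_cta, n_ncct_only])

z_stat, p_value = proportions_ztest(significant, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")
```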

Keywords: stroke, diagnostic yield, neuroimaging, emergency department, CT

Procedia PDF Downloads 74
404 Digital Structural Monitoring Tools @ADaPT for Crack Initiation and Growth due to Mechanical Damage Mechanisms

Authors: Faizul Azly Abd Dzubir, Muhammad F. Othman

Abstract:

The conventional structural health monitoring approach for mechanical equipment uses inspection data from Non-Destructive Testing (NDT) collected during plant shutdown windows, together with fitness-for-service evaluation, to estimate the integrity of equipment that is prone to crack damage. Yet this forecast is fraught with uncertainty because it is often based on assumptions about future operational parameters, and the prediction is neither continuous nor online. Advanced Diagnostic and Prognostic Technology (ADaPT) uses Acoustic Emission (AE) technology and a stochastic prognostic model to provide real-time monitoring and prediction of mechanical defects or cracks. The forecast can help the plant authority handle cracked equipment before it ruptures and causes an unscheduled shutdown of the facility. ADaPT employs process historical data trending, finite element analysis, fitness-for-service assessment, and probabilistic statistical analysis to develop a prediction model for crack initiation and growth due to mechanical damage. The prediction model is combined with live equipment operating data for real-time prediction of the remaining life span before fracture. ADaPT was first implemented on a hot combined feed exchanger (HCFE) that had suffered creep crack damage. The tool predicted the initiation of a crack at the top weldment area by April 2019; during the shutdown window in April 2019, a crack was discovered and repaired. Furthermore, ADaPT allowed the plant owner to run at full capacity and improve output by up to 7% up to April 2019. ADaPT was also used on a coke drum that had extensive fatigue cracking. The existing cracks were assessed as safe with ADaPT, with the remaining crack lifetime extended another five (5) months, just in time for another planned facility downtime to execute the repair. The prediction model, when combined with plant information data, allows plant operators to continuously monitor crack propagation caused by mechanical damage, improving maintenance planning and avoiding costly immediate repair shutdowns.
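ADaPT itself is proprietary and its stochastic prognostic model is not specified in the abstract; purely as a generic illustration of how a probabilistic crack-growth prognosis can be set up, the sketch below propagates a Paris-law crack under uncertain material constants with Monte Carlo sampling to obtain a remaining-life distribution. Every parameter value is an illustrative placeholder, not ADaPT's model or the plant's data.

```python
# Generic Monte Carlo crack-growth prognosis (NOT the ADaPT model).
# Paris law: da/dN = C * dK^m, with dK = Y * d_sigma * sqrt(pi * a), dK in MPa*sqrt(m).
import numpy as np

rng = np.random.default_rng(0)
n_sims = 2_000
a0, a_crit = 0.002, 0.010        # initial / critical crack depth [m] (placeholders)
d_sigma, Y = 80.0, 1.12          # cyclic stress range [MPa], geometry factor
cycles_per_month = 5e4
block = 10_000                   # grow the crack in blocks of 10,000 cycles

# Uncertain Paris-law constants (placeholder distributions).
C = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=n_sims)  # m/cycle per (MPa*sqrt(m))^m
m = rng.normal(loc=3.0, scale=0.1, size=n_sims)

months_to_failure = np.empty(n_sims)
for i in range(n_sims):
    a, cycles = a0, 0.0
    while a < a_crit:
        dK = Y * d_sigma * np.sqrt(np.pi * a)
        a += C[i] * dK ** m[i] * block
        cycles += block
    months_to_failure[i] = cycles / cycles_per_month

print("median remaining life:", round(np.median(months_to_failure), 1), "months")
print("conservative 5th percentile:", round(np.percentile(months_to_failure, 5), 1), "months")
```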

Keywords: mechanical damage, cracks, continuous monitoring tool, remaining life, acoustic emission, prognostic model

Procedia PDF Downloads 45
403 Juxtaposing Constitutionalism and Democratic Process in Nigeria Vis-à-Vis the South African Perspective

Authors: Onyinyechi Lilian Uche

Abstract:

The concept of constitutionalism expresses the limitation of arbitrariness and political power in governance. Constitutionalism acknowledges the necessity for government but insists upon a limitation being placed upon its powers. The essence of constitutionalism is therefore the obviation of arbitrariness in governance and the maximisation of liberty through adequate and expedient restraint on government. In Nigeria, as in many other African countries, the doctrine of separation of powers, accompanied by a system of checks and balances, is marked by elements of ‘personal government’; this has raised questions about whether the apparent separation of powers provided for in the Nigerian Constitution is not just a euphemism for the hegemony of the executive over the other two arms of government, the legislature and the judiciary. Another question raised in the article is whether the doctrine is merely an abstract philosophical inheritance that lacks both content and relevance to the realities of the country and region today. Current happenings in Nigeria and most African countries, such as the flagrant disregard of court orders by the executive, indicate clearly that the concept of constitutionalism goes beyond mere form and strikes at the substance of a constitution. It therefore involves a consideration of whether there are provisions in the constitution which limit arbitrariness in the exercise of political power by providing checks and balances upon such exercise. These questions underscore the need for Africa to craft its own understanding of the separation of powers between the arms of government in furtherance of good governance, since it is possible to have a constitution in place that is merely a statement of unenforceable ‘rights’ or is bereft of provisions guaranteeing liberty or adequate and necessary restraint on the exercise of government power. This paper expatiates on the importance of the nexus between constitutionalism and the democratic process through a juxtaposition of practices in Nigeria and South Africa. The article notes that an abstract analysis of constitutionalism without recourse to the democratic process is meaningless, analyses the structure of government of selected African countries, and examines the extent to which the doctrine operates within the arms of government. It concludes that the doctrine should not just be regarded as a general constitutional principle but should be made effective and binding through law and institutional reforms.

Keywords: checks and balances, constitutionalism, democratic process, separation of power

Procedia PDF Downloads 99
402 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually under conditions unfavorable to the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale heightens the price uncertainty of produce, leading to substantial income loss; with rising input costs of farming, the high variability in harvest prices severely erodes farmers’ profit margins, thereby threatening their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household, and institutional characteristics. A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
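The Heckman two-stage estimation mentioned above can be sketched as a probit selection equation followed by an outcome regression augmented with the inverse Mills ratio. The sketch below is a minimal illustration with hypothetical column names (e.g. 'distress_sale', 'price_gap') and covariates; the study's actual specification may differ.

```python
# Minimal two-step Heckman sketch: step 1 models whether a distress sale occurs,
# step 2 models the depth of the discount, observed only for distress sales.
# DataFrame and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, selection_x, outcome_x, select_col, outcome_col):
    # Step 1: probit for the probability of a distress sale.
    Z = sm.add_constant(df[selection_x])
    probit = sm.Probit(df[select_col], Z).fit(disp=False)
    xb = Z.dot(probit.params)                       # linear index
    imr = pd.Series(norm.pdf(xb) / norm.cdf(xb),    # inverse Mills ratio
                    index=df.index)

    # Step 2: OLS on distress-sale observations only, correcting for selection.
    sel = df[select_col] == 1
    X = sm.add_constant(df.loc[sel, outcome_x].assign(imr=imr[sel]))
    ols = sm.OLS(df.loc[sel, outcome_col], X).fit()
    return probit, ols

# Hypothetical usage:
# probit_res, ols_res = heckman_two_step(
#     farmers,
#     selection_x=["irrigation", "storage_access", "distance_to_market", "credit"],
#     outcome_x=["irrigation", "storage_access", "middleman"],
#     select_col="distress_sale",
#     outcome_col="price_gap",   # e.g. (cost of production - selling price) / cost
# )
```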

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 215
401 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University

Authors: Saadia Elamin

Abstract:

The quasi-universal use of smartphones with an internet connection available at all times makes it a reflex action for translation undergraduates to turn to the freely available web resource Google Translate as soon as they encounter the slightest translation problem. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Instead of interfering with students’ learning by providing ready-made solutions that might not always fit the context of use, it can help consolidate the skills of analysis and transfer that students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost, before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning in the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development in the way translation is taught, senior students, after being trained in post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study, which demonstrate that Google Translate, if used at the appropriate levels of training, can help enhance students’ ability to perform different translation tasks. This help extends from the search for terms and expressions to drafting the target text, revising its content, and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.

Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools

Procedia PDF Downloads 441
400 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAM for predicting power consumption at the office-building and nationwide levels. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to historical data on the AHU power consumption and cooling load of a building on an education campus in Singapore, collected between Jan 2018 and Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for detecting anomalous power consumption patterns, illustrated with real-world use cases.
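A minimal sketch of the interval-based flagging idea is given below, using the pygam library; the library choice, the smooth terms, and the feature names are assumptions for illustration, not the paper's exact model.

```python
# GAM-based anomaly flagging sketch for AHU power consumption (illustrative only).
import numpy as np
from pygam import LinearGAM, s

def fit_ahu_gam(X_train, y_train):
    # Smooth terms over, e.g., hour of day, cooling load and outdoor temperature.
    return LinearGAM(s(0) + s(1) + s(2)).fit(X_train, y_train)

def flag_anomalies(gam, X_live, y_live, width=0.95):
    lower, upper = gam.prediction_intervals(X_live, width=width).T
    # Signed magnitude of deviation outside the uncertainty interval (0 if inside).
    deviation = np.where(y_live > upper, y_live - upper,
                         np.where(y_live < lower, y_live - lower, 0.0))
    return deviation != 0.0, deviation

# Hypothetical usage with feature columns [hour_of_day, cooling_load, outdoor_temp]:
# gam = fit_ahu_gam(X_hist, kwh_hist)
# is_anomaly, magnitude = flag_anomalies(gam, X_now, kwh_now)
```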

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 125
399 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth bounds on system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods; the random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined from the probabilities assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
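The combination-and-bounding step can be illustrated with a toy example: each input variable carries two interval-valued focal elements with basic probability assignments, every combination is propagated through the response function (here a cheap placeholder standing in for the finite element model), and Belief/Plausibility bounds on the wall displacement are accumulated. All variable names and numbers below are illustrative assumptions.

```python
# Toy random-set propagation sketch (the FE model is replaced by a placeholder).
from itertools import product

# (interval, basic probability assignment) per input variable, from two sources.
focal_sets = {
    "phi":   [((28.0, 33.0), 0.6), ((30.0, 36.0), 0.4)],   # friction angle [deg]
    "E_mpa": [((25.0, 45.0), 0.5), ((35.0, 60.0), 0.5)],   # soil stiffness [MPa]
}

def displacement(phi, e_mpa):
    # Placeholder response surface; the study evaluates a finite element model here.
    return 120.0 / e_mpa + 200.0 / phi   # horizontal wall displacement [mm]

combos = []
for (phi_iv, p_phi), (e_iv, p_e) in product(*focal_sets.values()):
    # Corner evaluation suffices here because the placeholder is monotone in each input.
    corners = [displacement(phi, e) for phi in phi_iv for e in e_iv]
    combos.append((min(corners), max(corners), p_phi * p_e))

def belief(threshold):        # lower bound on P(displacement <= threshold)
    return sum(p for lo, hi, p in combos if hi <= threshold)

def plausibility(threshold):  # upper bound on P(displacement <= threshold)
    return sum(p for lo, hi, p in combos if lo <= threshold)

for d in (8.0, 9.0, 10.0, 11.0):
    print(f"d = {d:4.1f} mm   Bel = {belief(d):.2f}   Pl = {plausibility(d):.2f}")
```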

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 243
398 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning

Authors: S. A. N. Danushka, T. A. Weerasinghe

Abstract:

The diverse necessities of instruction could be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the ‘new normality’ in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to say whether blended learning works similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this particular research uncertainty; having considered existing research approaches, the study methodology was designed to identify efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-method research approach, the major study objective was tested with two heterogeneous samples (N=135) identified in a virtual learning environment at a Sri Lankan university. A non-randomized, informal ‘before-and-after without control group’ design was employed, and two data collection methods, identical pre- and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with the two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was determined via t-tests, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was assessed via multiple linear regression analysis. ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not affect the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES model) are efficient in the delivery of course content in flipped classroom learning.
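A minimal sketch of the ANCOVA step described above, assuming hypothetical column names ('post_score', 'pre_score', 'strategy', 'group') and the statsmodels formula interface; it is an illustration of the analysis type, not the study's exact code.

```python
# ANCOVA sketch: post-test score ~ pre-test covariate + instructional strategy + group.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def run_ancova(df: pd.DataFrame):
    model = ols("post_score ~ pre_score + C(strategy) + C(group)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)   # Type II sums of squares
    return model, table

# Hypothetical usage:
# model, ancova_table = run_ancova(scores_df)
# print(ancova_table)   # p < 0.05 on C(strategy) suggests SDL and SRL differ
# print(model.params)   # adjusted effect of strategy on post-test achievement
```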

Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model

Procedia PDF Downloads 157