Search results for: uncertainty principle
432 Bodily Liberation and Spiritual Redemption of Black Women in Beloved: From the Perspective of Ecofeminism
Authors: Wang Huiwen
Abstract:
Since its release, Toni Morrison's novel Beloved has garnered significant international recognition, and its adaptation of a historical account has profoundly affected readers and scholars, evoking a visceral understanding of the suffering endured by black slaves. The ecofeminist approach has attracted growing attention in recent times. The emergence of ecofeminism may be attributed to the feminist movement, which has subsequently evolved into several branches, including cultural ecofeminism, social ecofeminism, and socialist ecofeminism, each of which is developing its own specific characteristics. The branches hold differing perspectives, yet they all converge on a key principle: the interconnectedness between the subjugation of women and the exploitation of nature can be traced back to a common underlying cognitive framework. Scholarly investigations into the novel Beloved have primarily centered on cultural interpretations of the emancipation of African American women, with a predominant lens rooted in cultural ecofeminism. This thesis analyzes Morrison's feminist beliefs in Beloved by integrating socialist and cultural ecofeminist perspectives, seeking to challenge the limitations of essentialism within ecofeminism while also proposing a strategy to address exploitation and dismantle the oppressive structures depicted in the novel. The thesis examines the white patriarchal system of oppression underlying the relationships between men and women, blacks and whites, and man and nature as shown in the novel. What black women have been deprived of compared with black men, white women, and white men forms a main thread of this research, while nature serves in each chapter as a key counterpart to that loss. The attainment of spiritual redemption and ultimate freedom is contingent upon the social revolution that enables bodily emancipation, both of which are indispensable for black women. The weighty historical pains, traumatic recollections, and compromised sense of self prompted African slaves to embark on a quest for personal redemption. The restoration of the bond between black men and women, as well as the relationship between black individuals and nature, is a clear and undeniable pathway towards the final freedom of black women in the novel Beloved.
Keywords: beloved, ecofeminism, black women, nature, essentialism
Procedia PDF Downloads 66
431 Minority Language Policy and Planning in Manchester, Britain
Authors: Mohamed F. Othman
Abstract:
Manchester, Britain has become the destination of immigrants from different parts of the world. As a result, it is currently home to over 150 different ethnic languages. The present study investigates minority language policy and planning at the micro-level of the city. In order to get an in-depth investigation of such a policy, it was decided to cover it from two angles: the first is the policy making process. This was aimed at getting insights on how decisions regarding the provision of government services in minority languages are taken and what criteria are employed. The second angle is the service provider; i.e. the different departments in Manchester City Council (MCC), the NHS, the courts, and police, etc., to obtain information on the actual provisions of services. Data was collected through semi-structured interviews with different personnel representing different departments in MCC, solicitors, interpreters, etc.; through the internet, e.g. the websites of MCC, NHS, courts, and police, etc.; and via personal observation of provisions of community languages in government services. The results show that Manchester’s language policy is formulated around two concepts that work simultaneously: one is concerned with providing services in community languages in order to help minorities manage their life until they acquire English, and the other with helping the integration of minorities through encouraging them to learn English. In this regard, different government services are provided in community languages, though to varying degrees, depending on the numerical strength of each individual language. Thus, it is concluded that there is awareness in MCC and other government agencies working in Manchester of the linguistic diversity of the city and there are serious attempts to meet this diversity in their services. It is worth mentioning here that providing such services in minority languages are not meant to support linguistic diversity, but rather to maintain the legal right to equal opportunities among the residents of Manchester and to avoid any misunderstanding that may result due to the language barrier, especially in such areas as hospitals, courts, and police. There is actually no explicitly-mentioned language policy regarding minorities in Manchester; rather, there is an implied or covert policy resulting from factors that are not explicitly documented. That is, there are guidelines from the central government, which emphasize the principle of equal opportunities; then the implementation of such guidelines requires providing services in the different ethnic languages.Keywords: community language, covert language policy, micro-language policy and planning, minority language
Procedia PDF Downloads 269
430 The Constitutional Rights of a Child to a Clean and Healthy Environment: A Case Study in the Vaal Triangle Region
Authors: Christiena Van Der Bank, Marjone Van Der Bank, Ronelle Prinsloo
Abstract:
The constitutional right to a healthy environment and the constitutional duty imposed on the state actively to protect the environment fulfill the specific duties to prevent pollution and ecological degradation and to promote conservation. The aim of this paper is to draw attention to the relationship between child rights and the environment. The focus is to analyse government’s responses as mandated with section 24 of the Bill of Rights for ensuring the right to a clean and healthy environment. The principle of sustainability of the environment encompasses the notion of equity and the harm to the environment affects the present as well as future generations. Section 24 obliges the state to ensure that the legacy of future generations is protected, an obligation that has been said to be part of the common law. The environment is an elusive and wide concept that can mean different things to different people depending on the context in which it is used for example clean drinking water or safe food. An extensive interpretation of the term environment would include almost everything that may positively or negatively influence the quality of human life. The analysis will include assessing policy measures, legislation, budgetary measures and other measures taken by the government in order to progressively meet its constitutional obligation. The opportunity of the child to grow up in a healthy and safe environment is extremely unjustly distributed. Without a realignment of political, legal and economic conditions this situation will not fundamentally change. South Africa as a developing country that needs to meet the demand of social transformation and economic growth whilst at the same time expediting its ability to compete in global markets, the country will inevitably embark on developmental programmes as a measure for sustainable development. The courts would have to inquire into the reasonableness of those measures. Environmental threats to children’s rights must be identified, taking into account children’s specific needs and vulnerabilities, their dependence and marginalisation. Obligations of states and violations of rights must be made more visible to the general public.Keywords: environment, children rights, pollution, healthy, violation
Procedia PDF Downloads 174
429 Energy Storage Modelling for Power System Reliability and Environmental Compliance
Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari
Abstract:
Reliable and economic operation of power systems is becoming extremely challenging with large-scale integration of renewable energy sources due to the intermittency and uncertainty associated with renewable power generation. It is, therefore, important to make a quantitative risk assessment and explore the potential resources to mitigate such risks. Probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and the compressed air energy storage (CAES), incorporating specific charge/discharge performance and failure characteristics suitable for probabilistic risk assessment in power system operation and planning, are presented in this paper. The proposed methodology used in FESS modelling offers flexibility to accommodate different configurations of plant topology. CAES is perceived to have a high potential for grid-scale application, and a hybrid approach is proposed, which embeds a Monte Carlo simulation (MCS) method in an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, as well as to assess their contributions in facilitating wind integration during different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. The impacts of the failure rates of the critical components of ESS on the expected state of charge (SOC) and the performance of the different types of ESS during operation are illustrated with selected studies on the test system. The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators in arriving at effective and efficient policies, investment decisions, and operating strategies for planning and operation of power systems with large penetrations of renewable energy sources.
Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation
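The hybrid approach above embeds Monte Carlo simulation in an analytical technique. As a rough illustration of the simulation side only (not the authors' model), the sketch below runs a Monte Carlo state-of-charge simulation for a single storage unit whose conversion stage can fail and be repaired; the capacity, efficiency, failure/repair rates, and net-load series are invented assumptions.

```python
# Minimal sketch: Monte Carlo SOC simulation for one storage unit with a
# two-state (up/down) component model. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

HOURS = 8760                 # one simulated year
CAPACITY_MWH = 20.0
POWER_MW = 5.0
EFF = 0.85                   # charging efficiency (assumed)
FAILURE_RATE = 1 / 2000.0    # per hour (assumed)
REPAIR_RATE = 1 / 48.0       # per hour (assumed)

def simulate_year(net_load):
    """Return hourly SOC and unserved energy for one Monte Carlo year."""
    soc = np.empty(HOURS)
    state = CAPACITY_MWH / 2
    available = True
    unserved = 0.0
    for t in range(HOURS):
        # draw failure or repair events for the conversion stage
        if available and rng.random() < FAILURE_RATE:
            available = False
        elif not available and rng.random() < REPAIR_RATE:
            available = True
        if available:
            if net_load[t] < 0:        # surplus generation -> charge (power limit ignored)
                state = min(CAPACITY_MWH, state - net_load[t] * EFF)
            else:                      # deficit -> discharge
                discharge = min(POWER_MW, net_load[t], state)
                state -= discharge
                unserved += net_load[t] - discharge
        else:
            unserved += max(net_load[t], 0.0)
        soc[t] = state
    return soc, unserved

# toy net load: demand minus wind, both synthetic
net_load = 3.0 * np.sin(np.arange(HOURS) / 24.0) + rng.normal(0, 1.5, HOURS)
socs, eus = zip(*(simulate_year(net_load) for _ in range(100)))
print("expected SOC (MWh):", np.mean(socs))
print("expected unserved energy (MWh/yr):", np.mean(eus))
```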
Procedia PDF Downloads 133
428 Intellectual Property Rights Reforms and the Quality of Exported Goods
Authors: Gideon Ndubuisi
Abstract:
It is widely acknowledged that the quality of a country's exports matters more decisively than the quantity it exports. Hence, understanding the drivers of exported goods' quality is a relevant policy question. Among other things, product quality upgrading is a venture with considerable cost uncertainty for the entrepreneur who undertakes it. Once a product is successfully upgraded, however, others can imitate it, and hence the returns to the pioneer entrepreneur are socialized. Along this line, a government policy such as intellectual property rights (IPRs) protection, which lessens the non-appropriability problem and incentivizes cost-discovery investments, becomes both a panacea for addressing the market failure and a sine qua non for an entrepreneur to engage in product quality upgrading. In addition, product quality upgrading involves complex tasks which often require a great deal of knowledge and technology sharing beyond the bounds of the firm, thereby creating room for knowledge spillovers and imitation. Without an institution that protects upstream suppliers of knowledge and technology, technology masking occurs, which bids up marginal production costs and causes product quality to fall. Despite these clear associations between IPRs and product quality upgrading, the surging literature on the drivers of the quality of exported goods has proceeded almost in isolation from IPRs protection as a determinant. Consequently, the current study uses a difference-in-difference method to evaluate the effects of IPRs reforms on the quality of exported goods in 16 developing countries over the sample period 1984-2000. The study finds weak evidence that IPRs reforms increase the quality of all exported goods. When the industries are sorted into high- and low-patent-sensitive industries, however, we find strong indicative evidence that IPRs reform increases the quality of exported goods in high-patent-sensitive sectors, both in absolute terms and relative to the low-patent-sensitive sectors in the post-reform period. We also obtain strong indicative evidence that it brought the quality of exported goods in the high-patent-sensitive sectors closer to the quality frontier. Accounting for time-duration effects, these observed effects grow over time. The results are also largely consistent when we consider the sophistication and complexity of exported goods rather than just quality upgrades.
Keywords: exports, export quality, export sophistication, intellectual property rights
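As a schematic of the difference-in-difference setup described above (not the paper's exact specification), the sketch below regresses an export-quality measure on a reform dummy, a post-reform dummy, and their interaction, with standard errors clustered by country; the data file and column names are hypothetical placeholders.

```python
# Minimal DiD sketch: the treated:post coefficient is the reform effect.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("export_quality_panel.csv")   # hypothetical panel data
# treated: country enacted an IPR reform; post: observation after reform;
# high_patent: industry classified as patent-sensitive
model = smf.ols(
    "export_quality ~ treated * post + high_patent + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.summary().tables[1])
```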
Procedia PDF Downloads 125
427 Dual Duality for Unifying Spacetime and Internal Symmetry
Authors: David C. Ni
Abstract:
The current efforts for Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory and the related formalisms. In the geometric approaches for extending General Relativity, the efforts are establishing global and local invariance embedded into metric formalisms, thereby additional dimensions are constructed for unifying canonical formulations, such as Hamiltonian and Lagrangian formulations. The approaches of extending Quantum Mechanics adopt symmetry principle to formulate algebra-group theories, which evolved from Maxwell formulation to Yang-Mills non-abelian gauge formulation, and thereafter manifested the Standard model. This thread of efforts has been constructing super-symmetry for mapping fermion and boson as well as gluon and graviton. The efforts of String theory currently have been evolving to so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts are also adopting cross-breeding approaches of above three formalisms as well as competing formalisms, nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques even these terminologies are defined diversely and often generally coined as duality. In this paper, we firstly classify these dualities from the perspective of physics. Then examine the hierarchical structure of classes from mathematical perspective referring to Coleman-Mandula theorem, Hidden Local Symmetry, Groupoid-Categorization and others. Based on Fundamental Theorems of Algebra, we argue that rather imposing effective constraints on different algebras and the related extensions, which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance, we propose a new addition, momentum-angular momentum duality at the level of electromagnetic duality, for rationalizing the duality algebras, and then characterize this duality numerically with attempt for addressing some unsolved problems in physics and astrophysics.Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum
Procedia PDF Downloads 398
426 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of chemical dermal exposure assessment models for the manufacturing industry in Taiwan. We examined and analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose-Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM), and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also generated quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emissions, the operation, the near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models showed poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated to reduce uncertainty and enhance applicability in the future.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
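The correlation check described above can be illustrated with a short sketch: semi-quantitative model scores are compared against quantitative exposure estimates using Pearson's test. The numbers below are invented placeholders, not study data.

```python
# Minimal sketch of the Pearson correlation step (illustrative values only).
from scipy.stats import pearsonr

derm_scores     = [12, 18, 25, 31, 40]        # hypothetical DERM scores
quant_estimates = [0.8, 1.1, 1.9, 2.4, 3.0]   # hypothetical mg/day estimates

r, p_value = pearsonr(derm_scores, quant_estimates)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```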
Procedia PDF Downloads 170
425 The Role of Dialogue in Shared Leadership and Team Innovative Behavior Relationship
Authors: Ander Pomposo
Abstract:
Purpose: The aim of this study was to investigate the impact that dialogue has on the relationship between shared leadership and innovative behavior and the importance of dialogue in innovation. This study wants to contribute to the literature by providing theorists and researchers a better understanding of how to move forward in the studies of moderator variables in the relationship between shared leadership and team outcomes such as innovation. Methodology: A systematic review of the literature, originally adopted from the medical sciences but also used in management and leadership studies, was conducted to synthesize research in a systematic, transparent and reproducible manner. A final sample of 48 empirical studies was scientifically synthesized. Findings: Shared leadership gives a better solution to team management challenges and goes beyond the classical, hierarchical, or vertical leadership models based on the individual leader approach. One of the outcomes that emerge from shared leadership is team innovative behavior. To intensify the relationship between shared leadership and team innovative behavior, and understand when is more effective, the moderating effects of other variables in this relationship should be examined. This synthesis of the empirical studies revealed that dialogue is a moderator variable that has an impact on the relationship between shared leadership and team innovative behavior when leadership is understood as a relational process. Dialogue is an activity between at least two speech partners trying to fulfill a collective goal and is a way of living open to people and ideas through interaction. Dialogue is productive when team members engage relationally with one another. When this happens, participants are more likely to take responsibility for the tasks they are involved and for the relationships they have with others. In this relational engagement, participants are likely to establish high-quality connections with a high degree of generativity. This study suggests that organizations should facilitate the dialogue of team members in shared leadership which has a positive impact on innovation and offers a more adaptive framework for the leadership that is needed in teams working in complex work tasks. These results uncover the necessity of more research on the role that dialogue plays in contributing to important organizational outcomes such as innovation. Case studies describing both best practices and obstacles of dialogue in team innovative behavior are necessary to gain a more detailed insight into the field. It will be interesting to see how all these fields of research evolve and are implemented in dialogue practices in the organizations that use team-based structures to deal with uncertainty, fast-changing environments, globalization and increasingly complex work.Keywords: dialogue, innovation, leadership, shared leadership, team innovative behavior
Procedia PDF Downloads 183
424 Praxis-Oriented Pedagogies for Pre-Service Teachers: Teaching About and For Social Justice Through Equity Literature Circles
Authors: Joanne Robertson, Awneet Sivia
Abstract:
Preparing aspiring teachers to become advocates for social justice reflects a fundamental commitment for teacher education programs in Canada to create systemic educational change. The goal is ultimately to address inequities in K-12 education for students from multiple identity groups that have historically been marginalized and oppressed in schools. Social justice is described as an often undertheorized and vague concept in the literature, which increases the risk that teaching for social justice remains a lofty goal. Another concern is that the social justice agenda in teacher education in North America ignores pedagogies related to subject-matter knowledge and discipline-based teaching methods. The question surrounding how teacher education programs can address these issues forms the basis for the research undertaken in this study. The paper focuses on a qualitative research project that examines how an Equity Literature Circles (ELC) framework within a language arts methods course in a Bachelor of Education program may help pre-service teachers better understand the inherent relationship between literacy instructional practices and teaching about and for social justice. Grounded in the Freireian (2018) principle of praxis, this study specifically seeks to understand the impact of Equity Literature Circles on pre-service teachers’ understanding of current social justice issues (reflection), their development of professional competencies in literacy instruction (practice), and their identity as advocates of social justice (action) who address issues related to student diversity, equity, and human rights within the English Language Arts program. In this paper presentation, participants will be provided with an overview of the Equity Literature Circle framework, a summary of key findings and recommendations from the qualitative study, an annotated bibliography of suggested Young Adult novels, and opportunities for questions and dialogue.Keywords: literacy, language, equity, social justice, diversity, human rights
Procedia PDF Downloads 70
423 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity
Authors: Denise Bianco
Abstract:
Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts, working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions made on workplace creativity and related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity and unconventional agile methods such as Art Thinking and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations to propose a redefinition of Artist-in-Residence and their potential impact on organisational creativity. The result is a re-definition of embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences. Second, the definition of embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, at the very essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity
Procedia PDF Downloads 98
422 The Effect of Innovation Capability and Activity, and Wider Sector Condition on the Performance of Malaysian Public Sector Innovation Policy
Authors: Razul Ikmal Ramli
Abstract:
Successful implementation of innovation is a key success formula for a great organization. Innovation ensures competitive advantage as well as the sustainability of the organization in the long run. In the public sector context, the role of innovation is crucial to resolving dynamic challenges of public services such as operating in economic uncertainty with limited resources, increasing operating expenditure, and growing expectations among citizens for high-quality, swift, and reliable public services. Acknowledging the prospect of innovation as a tool for achieving a high-performance public sector, the Malaysian New Economic Model launched in the year 2011 intensified government commitment to foster innovation in the public sector. Since 2011, various initiatives have been implemented; however, little is known about the performance of public sector innovation in Malaysia. Hence, applying the national innovation system theory as a pillar, the research objectives were focused on measuring the level of innovation capabilities, the wider public sector condition for innovation, innovation activity, and innovation performance, as well as on examining the relationships between the four constructs, with innovation performance as the dependent variable. For that purpose, 1,000 sets of self-administered survey questionnaires were distributed to heads of units and divisions of 22 federal ministries and central agencies in the administrative, security, social, and economic sectors. Based on 456 returned questionnaires, the descriptive analysis found that innovation capabilities, wider sector condition, innovation activities, and innovation performance were rated by respondents at a moderately high level. Based on structural equation modelling, innovation performance was found to be influenced by innovation capability, the wider sector condition for innovation, and innovation activity. In addition, the analysis found innovation activity to be the most important construct influencing innovation performance. The study concluded that the innovation policy implemented in the public sector of Malaysia sparked motivation to innovate and resulted in various forms of innovation. However, the overall achievements were not as good as expected. Thus, the study suggests the formulation of a dedicated policy to strengthen the innovation capability, wider public sector condition for innovation, and innovation activity of the Malaysian public sector. Furthermore, strategic intervention needs to be focused on innovation activity, as this construct plays an important role in determining innovation performance. The success of public sector innovation implementation will not only benefit citizens but will also spearhead the competitiveness and sustainability of the country.
Keywords: public sector, innovation, performance, innovation policy
Procedia PDF Downloads 281
421 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e., bacteria, yeasts, or antibiotic-producing organisms). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be discarded due to microbial contamination. Indeed, a robust schedule should change only locally and in small proportions, and the overall performance measure (i.e., makespan, lateness) should change little, if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints, and time constraints. In particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where (a, b) are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e., the completion times of a culture tasks) lose their values (i.e., the cultures are contaminated), the solution can be repaired by assigning these variables new values (i.e., the completion times of a backup culture tasks) while changing at most b other variables (i.e., delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries
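For readers unfamiliar with constraint programming scheduling models, the sketch below shows a minimal formulation in Google OR-Tools CP-SAT: a few tasks on one shared resource with a precedence constraint, a due date, and a makespan objective. It is only a toy illustration under assumed durations; the lexicographic multi-criteria objective and the (a, b) super-solution repair constraints of the actual model are omitted.

```python
# Minimal CP scheduling sketch (not the authors' model); durations are invented.
from ortools.sat.python import cp_model

durations = [6, 4, 8]          # hours, hypothetical culture/purification tasks
horizon = sum(durations) + 10

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    e = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(s, d, e, f"task_{i}"))
    starts.append(s)
    ends.append(e)

model.AddNoOverlap(intervals)          # one shared resource
model.Add(starts[1] >= ends[0])        # task 1 follows task 0 (recipe order)
model.Add(ends[2] <= 20)               # due-date constraint on the last task

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts], "makespan =", solver.Value(makespan))
```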
Procedia PDF Downloads 620
420 Effect of Mistranslating tRNA Alanine on Polyglutamine Aggregation
Authors: Sunidhi Syal, Rasangi Tennakoon, Patrick O'Donoghue
Abstract:
Polyglutamine (polyQ) diseases are a group of neurodegenerative diseases caused by repeats of the glutamine (Q) codon in the DNA, which translate into an elongated polyQ tract in the protein. The pathological explanation is that the polyQ tract forms cytotoxic aggregates in the neurons, leading to their degeneration. There are currently no cures or preventive treatments for these diseases, although their symptoms can be relieved. This study focuses specifically on Huntington's disease, a polyQ disease in which aggregation is caused by extended cytosine, adenine, guanine (CAG) codon repeats in the huntingtin (HTT) gene, which encodes the huntingtin protein. Using this principle, we attempted to create six models, which included mutating the wildtype tRNA alanine variant tRNA-AGC-8-1 to carry the glutamine anticodons CUG and UUG so that alanine is incorporated at glutamine sites in polyQ tracts. In the process, we were successful in obtaining tAla-8-1 CUG mutant clones in HTT exon 1 plasmids with a polyQ tract of 23Q (non-pathogenic model) and 74Q (disease model). These plasmids were transfected into mouse neuroblastoma cells to characterize protein synthesis and aggregation in normal and mistranslating cells and to investigate the effects of glutamines replaced with alanines on the disease phenotype. Notably, we observed no noteworthy differences in mean fluorescence between the CUG mutants for 23Q or 74Q; however, the Triton X-100 assay revealed a significant reduction in insoluble 74Q aggregates. We were unable to create a tAla-8-1 UUG mutant clone, and determining the difference between the effects of the two glutamine anticodons may enrich our understanding of the disease phenotype. In conclusion, by generating structural disruption with the amino acid alanine, it may be possible to find ways to minimize the toxicity of Huntington's disease caused by these polyQ aggregates. Further research is needed to advance knowledge in this field by identifying the cellular and biochemical impact of specific tRNA variants found naturally in human genomes.
Keywords: Huntington's disease, polyQ, tRNA, anticodon, clone, overlap PCR
Procedia PDF Downloads 44
419 Method for Identification of Through Defects of Polymer Films Applied onto Metal Parts
Authors: Yu A. Pluttsova, O. V. Vakhnina, K. B. Zhogova
Abstract:
Nowadays, many devices operate under conditions of enhanced humidity, temperature drops, fog, and vibration. To ensure long-term and uninterrupted equipment operation under adverse conditions, moisture-proof films are applied to products and electronic components, which helps to prevent corrosion and short circuits, allowing a significant increase in device lifecycle. The reliability of such moisture-proof films is mainly determined by their coating uniformity, without gaps and cracks. Unprotected product edges, as well as pores in the films, can cause device failure during operation. The objective of this work was to develop an effective, affordable, and economically justified method for determining the presence of through defects in protective polymer films on the surface of parts made of iron and its alloys. As a diagnostic reagent, an aqueous solution of potassium ferricyanide (III) in hydrochloric acid was proposed; it changes color from yellow to blue according to the reactions Fe⁰ → Fe²⁺ and 4Fe²⁺ + 3[Fe³⁺(CN)₆]³⁻ → Fe³⁺₄[Fe²⁺(CN)₆]₃. A scheme of the technological process for determining the presence of through defects in polymer films on the surface of parts made of iron and its alloys was developed. Solutions with different diagnostic reagent compositions in water were studied: from 0.1 to 25 mass % of potassium ferricyanide (III) and from 5 to 25 mass % of hydrochloric acid. The optimal component ratio was chosen. The developed method consists in submerging a part covered with a film into a vessel with the diagnostic reagent. In the zone of a through defect in the polymer film, the part material (iron) interacts with potassium ferricyanide (III) and the color changes to blue. Pilot samples were tested by the developed method for the presence of through defects in the moisture-proof coating. It was revealed that all the studied parts had through defects in the polymer film coating. Thus, the proposed method efficiently reveals through defects in polymer film coatings on parts made of iron or its alloys, while being affordable and economically justified.
Keywords: diagnostic reagent, metal parts, polymer films, through defects
Procedia PDF Downloads 150
418 A Case Study of the Saudi Arabian Investment Regime
Authors: Atif Alenezi
Abstract:
The low global oil price poses economic challenges for Saudi Arabia, as oil revenues still make up a great percentage of its Gross Domestic Product (GDP). At the end of 2014, the Consultative Assembly considered a report from the Committee on Economic Affairs and Energy which highlights that the economy had not been successfully diversified. There thus exist ample reasons for modernising the Foreign Direct Investment (FDI) regime, primarily to achieve and maintain prosperity and facilitate peace in the region. Therefore, this paper aims at identifying specific problems with the existing FDI regime in Saudi Arabia and subsequently some solutions to those problems. Saudi Arabia adopted its first specific legislation in 1956, which imposed significant restrictions on foreign ownership. Since then, Saudi Arabia has modernised its FDI framework with the passing of the Foreign Capital Investment Act 1979 and the Foreign Investment Law2000 and the accompanying Executive Rules 2000 and the recently adopted Implementing Regulations 2014.Nonetheless, the legislative provisions contain various gaps and the failure to address these gaps creates risks and uncertainty for investors. For instance, the important topic of mergers and acquisitions has not been addressed in the Foreign Investment Law 2000. The circumstances in which expropriation can be considered to be in the public interest have not been defined. Moreover, Saudi Arabia has not entered into many bilateral investment treaties (BITs). This has an effect on the investment climate, as foreign investors are not afforded typical rights. An analysis of the BITs which have been entered into reveals that the national treatment standard and stabilisation, umbrella or renegotiation provisions have not been included. This is problematic since the 2000 Act does not spell out the applicable standard in accordance with which foreign investors should be treated. Moreover, the most-favoured-nation (MFN) or fair and equitable treatment (FET) standards have not been put on a statutory footing. Whilst the Arbitration Act 2012 permits that investment disputes can be internationalised, restrictions have been retained. The effectiveness of international arbitration is further undermined because Saudi Arabia does not enforce non-domestic arbitral awards which contravene public policy. Furthermore, the reservation to the Convention on the Settlement of Investment Disputes allows Saudi Arabia to exclude petroleum and sovereign disputes. Interviews with foreign investors, who operate in Saudi Arabia highlight additional issues. Saudi Arabia ought not to procrastinate far-reaching structural reforms.Keywords: FDI, Saudi, BITs, law
Procedia PDF Downloads 410
417 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes
Authors: Alan Luo, Hunter N. B. Moseley
Abstract:
Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, be it derived from x-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for x-ray crystallography or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality was detected across x-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher’s exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain outlier cutoff. The results revealed a consistent general effect from region-specific normalized b-factors but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distance as compared to the b-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.Keywords: biomacromolecular structure, coenzyme, electron density discrepancy analysis, x-ray crystallography
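The contingency-table testing described above (regional-quality outlier status versus a potential-factor cutoff) can be sketched with a Fisher's exact test; the counts below are invented placeholders, not values from the study.

```python
# Minimal sketch of a Fisher's exact test on an outlier-vs-factor table.
from scipy.stats import fisher_exact

#            factor above cutoff   factor below cutoff
table = [[37, 13],     # regional-quality outliers (hypothetical counts)
         [410, 540]]   # non-outliers

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```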
Procedia PDF Downloads 132
416 Juxtaposing Constitutionalism and Democratic Process in Nigeria Vis a Vis the South African Perspective
Authors: Onyinyechi Lilian Uche
Abstract:
Limiting arbitrariness and political power in governance is expressed in the concept of constitutionalism. Constitutionalism acknowledges the necessity for government but insists upon a limitation being placed upon its powers. It is therefore clear that the essence of constitutionalism is the obviation of arbitrariness in governance and the maximisation of liberty with adequate and expedient restraint on government. The doctrine of separation of powers, accompanied by a system of checks and balances, is in Nigeria, as in many other African countries, marked by elements of 'personal government', and this has raised questions about whether the apparent separation of powers provided for in the Nigerian Constitution is not just a euphemism for the hegemony of the executive over the other two arms of government, the legislature and the judiciary. Another question raised in the article is whether the doctrine is merely an abstract philosophical inheritance that lacks both content and relevance to the realities of the country and region today. The current happenings in Nigeria and most African countries, such as the flagrant disregard of court orders by the executive, indicate clearly that the concept of constitutionalism goes beyond mere form and strikes at the substance of a constitution. It therefore involves a consideration of whether there are provisions in the constitution which limit arbitrariness in the exercise of political powers by providing checks and balances upon such exercise. These questions underscore the need for Africa to craft its own understanding of the separation of powers between the arms of government in furtherance of good governance, as it has been seen that it is possible to have a constitution in place which may be a mere statement of unenforceable 'rights' or may be bereft of provisions guaranteeing liberty or adequate and necessary restraint on the exercise of government. This paper seeks to expatiate on the importance of the nexus between constitutionalism and the democratic process through a juxtaposition of practices in Nigeria and South Africa. The article notes that an abstract analysis of constitutionalism without recourse to the democratic process is meaningless and also analyses the structure of government of some selected African countries. It examines the extent to which the doctrine operates within the arms of government and concludes that it should not just be regarded as a general constitutional principle but should be made rigid, or perhaps effective and binding, through law and institutional reforms.
Keywords: checks and balances, constitutionalism, democratic process, separation of power
Procedia PDF Downloads 129
415 Is the Addition of Computed Tomography with Angiography Superior to a Non-Contrast Neuroimaging Only Strategy for Patients with Suspected Stroke or Transient Ischemic Attack Presenting to the Emergency Department?
Authors: Alisha M. Ebrahim, Bijoy K. Menon, Eddy Lang, Shelagh B. Coutts, Katie Lin
Abstract:
Introduction: Frontline emergency physicians require clear and evidence-based approaches to guide neuroimaging investigations for patients presenting with suspected acute stroke or transient ischemic attack (TIA). Various forms of computed tomography (CT) are currently available for initial investigation, including non-contrast CT (NCCT), CT angiography head and neck (CTA), and CT perfusion (CTP). However, there is uncertainty around optimal imaging choice for cost-effectiveness, particularly for minor or resolved neurological symptoms. In addition to the cost of CTA and CTP testing, there is also a concern for increased incidental findings, which may contribute to the burden of overdiagnosis. Methods: In this cross-sectional observational study, analysis was conducted on 586 anonymized triage and diagnostic imaging (DI) reports for neuroimaging orders completed on patients presenting to adult emergency departments (EDs) with a suspected stroke or TIA from January-December 2019. The primary outcome of interest is the diagnostic yield of NCCT+CTA compared to NCCT alone for patients presenting to urban academic EDs with Canadian Emergency Department Information System (CEDIS) complaints of “symptoms of stroke” (specifically acute stroke and TIA indications). DI reports were coded into 4 pre-specified categories (endorsed by a panel of stroke experts): no abnormalities, clinically significant findings (requiring immediate or follow-up clinical action), incidental findings (not meeting prespecified criteria for clinical significance), and both significant and incidental findings. Standard descriptive statistics were performed. A two-sided p-value <0.05 was considered significant. Results: 75% of patients received NCCT+CTA imaging, 21% received NCCT alone, and 4% received NCCT+CTA+CTP. The diagnostic yield of NCCT+CTA imaging for prespecified clinically significant findings was 24%, compared to only 9% in those who received NCCT alone. The proportion of incidental findings was 30% in the NCCT only group and 32% in the NCCT+CTA group. CTP did not significantly increase the yield of significant or incidental findings. Conclusion: In this cohort of patients presenting with suspected stroke or TIA, an NCCT+CTA neuroimaging strategy had a higher diagnostic yield for clinically significant findings than NCCT alone without significantly increasing the number of incidental findings identified.Keywords: stroke, diagnostic yield, neuroimaging, emergency department, CT
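The comparison of diagnostic yields reported above can be illustrated with a two-proportion z-test. The counts below are approximate back-calculations from the percentages quoted in the abstract (24% of roughly 440 NCCT+CTA patients versus 9% of roughly 123 NCCT-only patients), not source data.

```python
# Minimal sketch: two-proportion comparison of diagnostic yields
# (approximate, back-calculated counts).
from statsmodels.stats.proportion import proportions_ztest

significant_findings = [106, 11]   # NCCT+CTA, NCCT alone (approximate)
patients_imaged      = [440, 123]

stat, p_value = proportions_ztest(significant_findings, patients_imaged)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```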
Procedia PDF Downloads 101
414 Digital Structural Monitoring Tools @ADaPT for Cracks Initiation and Growth due to Mechanical Damage Mechanism
Authors: Faizul Azly Abd Dzubir, Muhammad F. Othman
Abstract:
Conventional structural health monitoring approach for mechanical equipment uses inspection data from Non-Destructive Testing (NDT) during plant shut down window and fitness for service evaluation to estimate the integrity of the equipment that is prone to crack damage. Yet, this forecast is fraught with uncertainty because it is often based on assumptions of future operational parameters, and the prediction is not continuous or online. Advanced Diagnostic and Prognostic Technology (ADaPT) uses Acoustic Emission (AE) technology and a stochastic prognostic model to provide real-time monitoring and prediction of mechanical defects or cracks. The forecast can help the plant authority handle their cracked equipment before it ruptures, causing an unscheduled shutdown of the facility. The ADaPT employs process historical data trending, finite element analysis, fitness for service, and probabilistic statistical analysis to develop a prediction model for crack initiation and growth due to mechanical damage. The prediction model is combined with live equipment operating data for real-time prediction of the remaining life span owing to fracture. ADaPT was devised at a hot combined feed exchanger (HCFE) that had suffered creep crack damage. The ADaPT tool predicts the initiation of a crack at the top weldment area by April 2019. During the shutdown window in April 2019, a crack was discovered and repaired. Furthermore, ADaPT successfully advised the plant owner to run at full capacity and improve output by up to 7% by April 2019. ADaPT was also used on a coke drum that had extensive fatigue cracking. The initial cracks are declared safe with ADaPT, with remaining crack lifetimes extended another five (5) months, just in time for another planned facility downtime to execute repair. The prediction model, when combined with plant information data, allows plant operators to continuously monitor crack propagation caused by mechanical damage for improved maintenance planning and to avoid costly shutdowns to repair immediately.Keywords: mechanical damage, cracks, continuous monitoring tool, remaining life, acoustic emission, prognostic model
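As a generic illustration of the kind of stochastic remaining-life estimate described above (not the ADaPT model itself), the sketch below runs a Monte Carlo calculation for a crack growing at an uncertain rate toward a critical size; the initial depth, growth-rate distribution, and critical depth are assumed values for illustration only.

```python
# Minimal illustrative sketch: Monte Carlo remaining-life estimate for a
# growing crack. All parameters are assumptions, not plant data.
import numpy as np

rng = np.random.default_rng(1)

a0 = 2.0          # measured crack depth, mm (assumed)
a_crit = 12.0     # critical depth from fitness-for-service, mm (assumed)
# lognormal growth rate, mm per month, reflecting inspection/AE uncertainty
rates = rng.lognormal(mean=np.log(0.4), sigma=0.5, size=100_000)

remaining_months = (a_crit - a0) / rates
print("median remaining life (months):", np.median(remaining_months))
print("5th percentile (conservative):", np.percentile(remaining_months, 5))
```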
Procedia PDF Downloads 77
413 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University
Authors: Saadia Elamin
Abstract:
The quasi-universal use of smart phones with internet connection available all the time makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. Like for other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students’ learning by providing ready-made solutions which might not always fit into the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning into the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training, or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development onto the way translation is taught, senior students, after being trained on post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students’ ability to perform different translation tasks. This help extends from the search for terms and expressions, to the tasks of drafting the target text, revising its content and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, maximizing thus the benefit gained from them in preparing students to meet the requirements of the modern translation job market.Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools
Procedia PDF Downloads 476
412 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India
Authors: Disha Bhanot, Vinish Kathuria
Abstract:
This study focusses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), and it is usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often involved in subsistence farming, stand to lose substantially if they receive prices lower than expected (expectations typically framed in relation to the cost of production). Distress sale heightens the price uncertainty of produce, leading to substantial income loss; and with increasing input costs of farming, the high variability in harvest prices severely affects farmers' profit margins, thereby threatening their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household, and institutional characteristics. A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. Findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
Keywords: distress sale, horticulture, income loss, India, price uncertainty
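To make the proposed estimation strategy concrete, the sketch below illustrates a manual two-step Heckman procedure: a probit selection equation for whether a distress sale occurs, followed by an outcome regression augmented with the inverse Mills ratio. It is a generic illustration, not the study's specification; the data file, regressors, and outcome variable are hypothetical placeholders.

```python
# Minimal two-step Heckman sketch with hypothetical survey variables.
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm

df = pd.read_csv("tomato_farmers.csv")     # hypothetical survey data

# Step 1: selection equation (distress sale yes/no)
Z = sm.add_constant(df[["landholding", "storage_access", "distance_market"]])
probit = sm.Probit(df["distress_sale"], Z).fit(disp=False)
xb = probit.fittedvalues                   # linear predictor
df["imr"] = norm.pdf(xb) / norm.cdf(xb)    # inverse Mills ratio

# Step 2: outcome equation on the selected sample, with the IMR as regressor
sold = df[df["distress_sale"] == 1]
X = sm.add_constant(sold[["landholding", "storage_access", "imr"]])
outcome = sm.OLS(sold["price_margin"], X).fit()
print(outcome.summary())
```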
Procedia PDF Downloads 246
411 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the data collected to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use a Generalised Additive Model (GAM) for anomaly detection in Air Handling Unit (AHU) power consumption patterns. There is ample research work on the use of GAM for the prediction of power consumption at the office-building and nation-wide level. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to historical data on AHU power consumption and building cooling load between Jan 2018 and Aug 2019 from an education campus in Singapore to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Nevertheless, the domain expert is involved through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
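To illustrate the modelling step in code, the sketch below fits a GAM of AHU power on cooling load and hour of day and flags readings that fall outside the model's prediction interval. It assumes the pygam library and hypothetical column names; it is not the deployed campus pipeline, which also includes the rule-based follow-up conditions described above.

```python
# Minimal GAM anomaly-detection sketch (assumed data layout and pygam library).
import pandas as pd
from pygam import LinearGAM, s

df = pd.read_csv("ahu_history.csv")            # hypothetical digital-twin export
X = df[["cooling_load", "hour_of_day"]].values
y = df["power_kw"].values

gam = LinearGAM(s(0) + s(1)).gridsearch(X, y)  # smooth terms for both inputs
intervals = gam.prediction_intervals(X, width=0.95)

# flag points outside the 95% prediction interval as candidate anomalies
df["anomaly"] = (y < intervals[:, 0]) | (y > intervals[:, 1])
print(df.loc[df["anomaly"], ["power_kw"]].head())
```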
Procedia PDF Downloads 156410 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth extremes of system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each intended deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement was observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
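The combination of focal elements into Belief and Plausibility bounds can be sketched as follows, with a cheap analytic surrogate standing in for the finite element model; the two input variables, their ranges, probability assignments, and the displacement threshold are illustrative assumptions only.

```python
# Minimal sketch of the random set combination step; a cheap analytic surrogate
# stands in for the FE model, and all ranges and assignments are assumed.
from itertools import product

# Each focal element: (lower, upper) range with its basic probability assignment.
friction_angle = [((28.0, 32.0), 0.6), ((30.0, 36.0), 0.4)]   # degrees
elastic_modulus = [((40.0, 60.0), 0.5), ((50.0, 80.0), 0.5)]  # MPa

def surrogate_displacement(phi, E):
    """Stand-in for the FE model: top-of-wall horizontal displacement in mm."""
    return 2000.0 / (phi * E)

# Combine every pair of focal elements; evaluate the response at all corner
# combinations of the bounds to get a response interval per focal element.
focal_responses = []
for (phi_rng, m_phi), (E_rng, m_E) in product(friction_angle, elastic_modulus):
    corners = [surrogate_displacement(p, e) for p, e in product(phi_rng, E_rng)]
    focal_responses.append(((min(corners), max(corners)), m_phi * m_E))

# Belief (lower bound) and Plausibility (upper bound) that the displacement
# stays below a serviceability threshold of 1.5 mm (assumed).
threshold = 1.5
belief = sum(m for (lo, hi), m in focal_responses if hi <= threshold)
plausibility = sum(m for (lo, hi), m in focal_responses if lo <= threshold)
print(f"P(displacement <= {threshold} mm) is between {belief:.2f} and {plausibility:.2f}")
```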
Procedia PDF Downloads 268409 Meditation and Insight Interpretation Using Quantum Circle Based-on Experiment and Quantum Relativity Formalism
Authors: Somnath Bhattachryya, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
In this study of meditation and insight, the design of, and experiments with, electronic circuits that manipulate the meditators’ mental circles, called chakras, so that they have the same size are proposed. The circuit is a four-port device, called an add-drop multiplexer, that represents the meditation structure known as the four foundations of mindfulness; an AC power signal is used as input in place of the meditation time function, and various behaviors are obtained by re-filtering the signal (successive filtering), analogous to the eightfold noble path. The procedure starts by inputting a signal at a frequency at which the wave velocity on the perimeter of the circuit brings the particles to the speed of light in a vacuum. The signal changes between electromagnetic waves and matter waves according to the velocity (frequency) until it reaches the relativistic limit. The electromagnetic waves are transformed into photons with wave-particle properties that overcome the limit of the speed of light. The matter wave travels to the other side and cannot pass the relativistic limit; it is called a shadow signal (echo), which can gain power from increasing speed but cannot reach a speed faster than light, i.e., insight. In the experiment, only the side where the velocity is positive, i.e., where the speed exceeds that of light, or the corresponding frequency, indicates insight. The other side (echo) can be obtained by switching the input signal to the opposite port of the circuit, which gives the same result but without insight or speed beyond light. The circuit is also used to study the stretching and contraction of time, and wormholes that can be applied to teleportation, Bose-Einstein condensates, teleprinting, and quantum telephony. Teleportation can occur throughout the system with the wave-particle and the echo: when the speed of the particle exceeds the stretching or contraction of time, the particle submerges into the wormhole and, once the destination and time are determined, travels through it. In a wormhole, time can be set to the future or the past. The experimental results using the microstrip circuit were found to be consistent with the principle of quantum relativity, which can be further developed both as a tool and for meditation practitioners in quantum technology.Keywords: quantum meditation, insight picture, quantum circuit, absolute time, teleportation
Procedia PDF Downloads 64408 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings
Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller
Abstract:
Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA-space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work it will be shown that, using a machine learning approach, the derived measures are suitable for distinguishing automatically between healthy and pathological voices. Within the approach, the formation of the PCA-space and consequently the extracted quantitative measures depend on the clinical data that were used to compute the principal components. Therefore, in the second part of the work we propose a strategy to achieve a normalization of the PCA-space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results that are based on PCA-spaces obtained from different clinical subjects.Keywords: wavelet-based analysis, multiscale product, normalization, computer assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram
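A hedged sketch of the classification stage is given below: PCA features computed from synthetic phonovibrogram-derived vectors, followed by a supervised classifier; the feature dimensionality, the SVM choice, and the class separation are illustrative assumptions rather than the paper’s actual configuration.

```python
# Sketch of the PCA + supervised classification stage on synthetic PVG-style
# feature vectors; dimensions, classifier, and labels are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_per_class, n_features = 60, 512        # e.g. flattened wavelet coefficients per PVG
healthy = rng.normal(0.0, 1.0, (n_per_class, n_features))
pathological = rng.normal(0.4, 1.2, (n_per_class, n_features))
X = np.vstack([healthy, pathological])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Low-dimensional PCA feature set, then an SVM for healthy vs. pathological.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```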
Procedia PDF Downloads 266407 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning
Authors: S. A. N. Danushka, T. A. Weerasinghe
Abstract:
The diverse necessities of instruction could be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the ‘new normality’ in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to justify whether blended learning could work similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this particular research uncertainty, and, having considered existing research approaches, the study methodology was designed to determine efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-method research approach, the major study objective was tested with two heterogeneous samples (N=135) identified in a virtual learning environment in a Sri Lankan university. A non-randomized informal ‘before-and-after without control group’ design was employed, and two data collection methods, identical pre-tests and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and an efficient instructional strategy was determined via t-test, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was assessed via multiple linear regression analysis. ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not affect the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) analysis shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) analysis shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES Model) are efficient in the delivery of course content in flipped classroom learning.Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model
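The ANCOVA comparison at the core of the analysis can be sketched as follows on synthetic scores, with the pre-test as covariate; the group sizes, score scales, and assumed SDL advantage are illustrative, not the study’s data.

```python
# Minimal sketch of an ANCOVA comparing two instructional strategies (SDL vs.
# SRL) with the pre-test as covariate; all numbers are synthetic assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 135
strategy = rng.choice(["SDL", "SRL"], size=n)
pre = rng.normal(50, 10, n)
post = pre + 10 + 5 * (strategy == "SDL") + rng.normal(0, 5, n)   # SDL assumed more effective
df = pd.DataFrame({"strategy": strategy, "pre": pre, "post": post})

# Post-test scores adjusted for pre-test performance (the covariate).
model = smf.ols("post ~ pre + C(strategy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```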
Procedia PDF Downloads 200406 Temporality, Place and Autobiography in J.M. Coetzee’s 'Summertime'
Authors: Barbara Janari
Abstract:
In this paper it is argued that the effect of the disjunctive temporality in Summertime (the third of J.M. Coetzee’s fictionalised memoirs) is two-fold: firstly, it reflects the memoir’s ambivalent, contradictory representations of place in order to emphasize the fractured sense of self that growing up in South Africa during apartheid entailed for Coetzee. Secondly, it reconceives the autobiographical discourse as one that foregrounds the inherent fictionality of all texts. The memoir’s narrative is filtered through intricate textual strategies that disrupt the chronological movement of the narrative, evoking the labyrinthine ways in which the past and present intersect and interpenetrate each other. It is framed by entries from Coetzee’s Notebooks: it opens with entries that cover the years 1972–1975, and ends with a number of undated fragments from his Notebooks. Most of the entries include a short ‘memo’ at the end, added between 1999 and 2000. While the memos follow the Notebook entries in the text, they are separated by decades. Between the Notebook entries is a series of interviews conducted by Vincent, the text’s putative biographer, between 2007 and 2008, based on recollections from five people who had known Coetzee in the 1970s – a key period in John’s life as it marks both his return to South Africa after a failed emigration attempt to America, and the beginning of his writing career, with the publication of Dusklands in 1974. The relationship between the memoir’s various parts is a key feature of Coetzee’s representation of place in Summertime, which is constructed as a composite one in which the principle of reflexive referencing has to be adopted. In other words, readers have to suspend individual references temporarily until the relationships between the parts have been connected to each other. In order to apprehend meaning in the text, the disparate narrative elements first have to be tied together. In this text, then, the experience of time as ordered and chronological is ruptured. Instead, the memoir’s themes and patterns become apparent most clearly through reflexive referencing, by which relationships between disparate sections of the text are linked. The image of the fictional John that emerges from the text is a composite of this John and the author, J.M. Coetzee, and is one which embodies Coetzee’s often fraught relationship with his home country, South Africa.Keywords: autobiography, place, reflexive referencing, temporality
Procedia PDF Downloads 77405 On or Off-Line: Dilemmas in Using Online Teaching-Learning in In-Service Teacher Education
Authors: Orly Sela
Abstract:
The lecture discusses a Language Teaching program in a Teacher Education College in northern Israel. An online course was added to the program in order to keep on-campus attendance at a minimum, thus allowing the students to keep their full-time jobs in school. In addition, the use of educational technology to allow students to study anytime, anywhere, in keeping with 21st-century innovative teaching-learning practices, was also an issue, as was the wish for this course to serve as a model which the students could then possibly use in their K-12 teaching. On the other hand, there were strong considerations against including an online course in the program. The students in the program were mostly Israeli-Arab married women with young children, living in a traditional society which places a strong emphasis on the place of the woman as a wife, mother, and home-maker. In addition, as teachers, they used much of their free time on school-related tasks. Having careers at the same time as studying was ground-breaking for these women, and using their time at home for studying rather than taking care of their families may have been simply too much to ask of them. At the end of the course, feedback was collected through an online questionnaire including both open and closed questions. The data collected show that the students believed in online teaching-learning in principle, but had trouble implementing it in practice. This evidence raised the question of whether or not such a course should be included in a graduate program for mature, professional students, particularly women with families living in a traditional society. This issue is relevant not to Israel alone, but also to academic institutions worldwide serving such populations. The lecture discusses this issue, sharing the researcher’s conclusions with the audience. Based on the evidence offered, it is the researcher’s conclusion that online education should, indeed, be offered to such audiences. However, the courses should be designed with the students’ special needs in mind, with emphasis placed on initial planning and course organization based on acknowledgment of the teaching context, modeling of online teaching/learning suited to in-service teacher education, and special attention paid to social-constructivist aspects of learning.Keywords: course design, in-service teacher-education, mature students, online teaching/learning
Procedia PDF Downloads 233404 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty
Authors: Ammar Y. Alqahtani
Abstract:
In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers’ environmental awareness has forced original equipment manufacturers to consider being more environmentally conscious. Therefore, manufacturers have thought of different ways to deal with waste generated from EOL products, viz. remanufacturing, reusing, recycling, or disposal. Manufacturers can reduce the rate of depletion of virgin natural resources and their dependency on them when EOL products are remanufactured, reused, or recycled, which also cuts the amount of harmful waste sent to landfills. However, disposal of EOL products contributes to the problem and is therefore used as a last option. The number of EOL products needs to be estimated in order to fulfill the component demand. Then, a disassembly process needs to be performed to extract individual components and subassemblies. Smart products are built with embedded sensors and network connectivity that enable the collection and exchange of data; these sensors are implanted into products during production. The sensors allow remanufacturers to predict an optimal warranty policy and time period that should be offered to customers who purchase remanufactured components and products. Sensor-provided data can help to evaluate the overall condition of a product, as well as the remaining lives of product components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods. A DTO system is considered where a variety of EOL products are purchased for disassembly. The model’s main objective is to determine the best combination of EOL products to be purchased from every supplier in each period that maximizes the total profit of the system while satisfying the demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, this paper presents and analyzes a case study involving various simulation conditions to illustrate the applicability of the model.Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics
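A drastically reduced sketch of the purchasing decision is shown below: a linear toy with two suppliers, two periods, and one component type, solved with scipy, that minimizes acquisition and disassembly cost while meeting component demand; the study’s actual model is nonlinear, maximizes total profit, and covers many more recovery options, so all yields, costs, demands, and supply limits here are assumed values.

```python
# Toy disassembly-to-order purchasing sketch (linear, cost-minimizing stand-in
# for the study's nonlinear profit-maximizing model); all data are assumed.
import numpy as np
from scipy.optimize import linprog

yields = np.array([3.0, 4.0])      # usable components recovered per EOL unit, by supplier
costs = np.array([8.0, 12.0])      # purchase + disassembly cost per EOL unit, by supplier
demand = np.array([300.0, 450.0])  # component demand in periods 1 and 2
supply = np.array([120.0, 90.0])   # EOL units available per supplier in each period

n_sup, n_per = len(yields), len(demand)
# Decision variables x[t, s]: EOL units bought from supplier s in period t (flattened row-wise).
c = np.tile(costs, n_per)

# Demand constraints: components recovered in each period must cover that period's demand.
A_ub = np.zeros((n_per, n_sup * n_per))
for t in range(n_per):
    A_ub[t, t * n_sup:(t + 1) * n_sup] = -yields
b_ub = -demand

bounds = [(0, supply[s]) for _ in range(n_per) for s in range(n_sup)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x = res.x.reshape(n_per, n_sup)
print("EOL units to buy (rows = periods, cols = suppliers):\n", x.round(1))
print("total cost:", round(res.fun, 1))
```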
Procedia PDF Downloads 138403 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, the question of which development scopes are preferably realized with empirical-adaptive or deterministic-normative approaches has remained almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company’s technological capability, the prototype manufacturability, and the potential solution space, as well as external factors like the market accuracy, relevance, and volatility, will be analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, it is necessary to rate each internal and external factor in terms of its importance for the overall development task. Secondly, each requirement has to be evaluated for every single internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the total sums of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator will be used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements whose market realization is uncertain into empirical-adaptive or deterministic-normative development scopes.Keywords: agile, highly iterative development, agile-indicator, product development
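One possible reading of the three derivation steps and the subsequent clustering is sketched below; the factor weights, the 1–5 suitability ratings, and the hand-rolled fuzzy c-means routine are illustrative assumptions, not the authors’ actual scheme.

```python
# Hedged sketch: compose an Agile-Indicator from weighted internal/external
# factor ratings, then split scopes with fuzzy c-means; all values are assumed.
import numpy as np

rng = np.random.default_rng(3)

# Steps 1-2: importance weights per factor and per-requirement suitability ratings
# (1 = strongly deterministic-normative ... 5 = strongly empirical-adaptive).
internal_weights = np.array([0.4, 0.3, 0.3])   # technology ability, prototype manufacturability, solution space
external_weights = np.array([0.5, 0.2, 0.3])   # market accuracy, relevance, volatility
n_req = 30
internal_ratings = rng.integers(1, 6, (n_req, 3))
external_ratings = rng.integers(1, 6, (n_req, 3))

# Step 3: compose the internal and external sums into the Agile-Indicator.
agile_indicator = internal_ratings @ internal_weights + external_ratings @ external_weights

# Fuzzy c-means on the 1-D indicator to separate empirical-adaptive from
# deterministic-normative development scopes (fuzzifier m = 2).
def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=100):
    x = values.reshape(-1, 1)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)      # weighted cluster centers
        dist = np.abs(x - centers.T) + 1e-12                      # distances to each center
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers.ravel(), u

centers, membership = fuzzy_c_means(agile_indicator)
agile_cluster = centers.argmax()       # cluster with the higher indicator is taken as empirical-adaptive
for i, (ai, mem) in enumerate(zip(agile_indicator, membership[:, agile_cluster])):
    label = "empirical-adaptive" if mem > 0.5 else "deterministic-normative"
    print(f"scope {i:2d}: indicator {ai:4.1f}, membership {mem:.2f} -> {label}")
```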
Procedia PDF Downloads 247