Search results for: rhetorical theory
358 Mapping the Early History of Common Law Education in England, 1292-1500
Authors: Malcolm Richardson, Gabriele Richardson
Abstract:
This paper illustrates how historical problems can be studied successfully using GIS even in cases in which data, in the modern sense, is fragmentary. The overall problem under investigation is how early (1300-1500) English schools of Common Law moved from apprenticeship training in random individual London inns run in part by clerks of the royal chancery to become what is widely called 'the Third University of England,' a recognized system of independent but connected legal inns. This paper focuses on the preparatory legal inns, called the Inns of Chancery, rather than the senior (and still existing) Inns of Court. The immediate problem studied in this paper is how the junior legal inns were organized, staffed, and located from 1292 to about 1500, and what maps tell us about the role of the chancery clerks as managers of legal inns. The authors first uncovered the names of all chancery clerks of the period, most of them unrecorded in histories, from archival sources in the National Archives, Kew. Then they matched the names with London property leases. Using ArcGIS, the legal inns and their owners were plotted on a series of maps covering the period 1292 to 1500. The results show a distinct pattern of ownership of the legal inns and suggest a narrative that would help explain why the Inns of Chancery became serious centers of learning during the fifteenth century. In brief, lower-ranking chancery clerks, always looking for sources of income, discovered by 1370 that legal inns could be a source of income. Since chancery clerks were intimately involved with writs and other legal forms, and since the chancery itself had a long-standing training system, these clerks opened their own legal inns to train fledgling lawyers, estate managers, and scriveners. The maps clearly show growth patterns of ownership by the chancery clerks for both legal inns and other London properties in the areas of Holborn and The Strand between 1450 and 1417. However, the maps also show that a royal ordinance of 1417 forbidding chancery clerks to live with lawyers, law students, and other non-chancery personnel had an immediate effect, and properties in that area of London leased by chancery clerks simply stop after 1417. The long-term importance of the patterns shown in the maps is that while the presence of chancery clerks in the legal inns likely created a more coherent education system, their removal forced the legal profession, suddenly without a hostelry managerial class, to professionalize the inns and legal education themselves. Given the number and social status of members of the legal inns, the effect on English education was to free legal education from the limits of chancery clerk education (the clerks were not practicing common lawyers) and to enable it to become broader in theory and practice, in fact, a kind of 'finishing school' for the governing (if not noble) class.Keywords: GIS, law, London, education
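As a rough illustration of the mapping workflow described above (the authors used ArcGIS on archival lease records), the following sketch plots hypothetical lease locations by period with pandas and matplotlib; the inn names, coordinates, years and column names are invented placeholders, not data from the study.

```python
# Illustrative sketch only: the paper uses ArcGIS on archival lease records; the
# column names, coordinates and the use of pandas/matplotlib here are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical lease records: inn name, approximate longitude/latitude, lease year,
# and whether the lessee was identified as a chancery clerk.
leases = pd.DataFrame({
    "inn": ["Clifford's Inn", "Clement's Inn", "Strand Inn"],
    "lon": [-0.110, -0.113, -0.117],          # placeholder coordinates
    "lat": [51.514, 51.513, 51.511],
    "year": [1370, 1405, 1420],
    "chancery_clerk": [True, True, False],
})

# Plot one panel per period to mimic the paper's series of maps (1292-1500).
periods = [(1292, 1370), (1370, 1417), (1417, 1500)]
fig, axes = plt.subplots(1, len(periods), figsize=(12, 4), sharex=True, sharey=True)
for ax, (start, end) in zip(axes, periods):
    subset = leases[(leases.year >= start) & (leases.year < end)]
    colors = subset.chancery_clerk.map({True: "tab:red", False: "tab:blue"})
    ax.scatter(subset.lon, subset.lat, c=colors)
    ax.set_title(f"{start}-{end}")
plt.show()
```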
Procedia PDF Downloads 174
357 Influencing Factors and Mechanism of Patient Engagement in Healthcare: A Survey in China
Authors: Qing Wu, Xuchun Ye, Kirsten Corazzini
Abstract:
Objective: It is increasingly recognized that patients’ rational and meaningful engagement in healthcare could make important contributions to their health care and safety management. However, recent evidence indicated that patients' actual roles in healthcare didn’t match their desired roles, and many patients reported a less active role than desired, which suggested that patient engagement in healthcare may be influenced by various factors. This study aimed to analyze influencing factors on patient engagement and explore the influence mechanism, which will be expected to contribute to the strategy development of patient engagement in healthcare. Methods: On the basis of analyzing the literature and theory study, the research framework was developed. According to the research framework, a cross-sectional survey was employed using the behavior and willingness of patient engagement in healthcare questionnaire, Chinese version All Aspects of Health Literacy Scale, Facilitation of Patient Involvement Scale and Wake Forest Physician Trust Scale, and other influencing factor related scales. A convenience sample of 580 patients was recruited from 8 general hospitals in Shanghai, Jiangsu Province, and Zhejiang Province. Results: The results of the cross-sectional survey indicated that the mean score for the patient engagement behavior was (4.146 ± 0.496), and the mean score for the willingness was (4.387 ± 0.459). The level of patient engagement behavior was inferior to their willingness to be involved in healthcare (t = 14.928, P < 0.01). The influencing mechanism model of patient engagement in healthcare was constructed by the path analysis. The path analysis revealed that patient attitude toward engagement, patients’ perception of facilitation of patient engagement and health literacy played direct prediction on the patients’ willingness of engagement, and standard estimated values of path coefficient were 0.341, 0.199, 0.291, respectively. Patients’ trust in physician and the willingness of engagement played direct prediction on the patient engagement, and standard estimated values of path coefficient were 0.211, 0.641, respectively. Patient attitude toward engagement, patients’ perception of facilitation and health literacy played indirect prediction on patient engagement, and standard estimated values of path coefficient were 0.219, 0.128, 0.187, respectively. Conclusions: Patients engagement behavior did not match their willingness to be involved in healthcare. The influencing mechanism model of patient engagement in healthcare was constructed. Patient attitude toward engagement, patients’ perception of facilitation of engagement and health literacy posed indirect positive influence on patient engagement through the patients’ willingness of engagement. Patients’ trust in physician and the willingness of engagement had direct positive influence on the patient engagement. Patient attitude toward engagement, patients’ perception of physician facilitation of engagement and health literacy were the factors influencing the patients’ willingness of engagement. The results of this study provided valuable evidence on guiding the development of strategies for promoting patient rational and meaningful engagement in healthcare.Keywords: healthcare, patient engagement, influencing factor, the mechanism
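The reported indirect effects are consistent with products of the reported direct paths, as expected for mediation through willingness; the short check below (an editorial addition, not part of the original analysis) makes that arithmetic explicit.

```python
# Check (to rounding) that the reported indirect effects on engagement equal the
# product of each predictor's direct path to willingness and the willingness ->
# engagement path, as expected in a simple mediation structure.
direct_to_willingness = {
    "attitude": 0.341,
    "perceived_facilitation": 0.199,
    "health_literacy": 0.291,
}
willingness_to_engagement = 0.641

for predictor, beta in direct_to_willingness.items():
    indirect = beta * willingness_to_engagement
    print(f"{predictor}: indirect effect ~ {indirect:.3f}")
# Output: attitude ~ 0.219, perceived_facilitation ~ 0.128, health_literacy ~ 0.187,
# matching the indirect path coefficients reported in the abstract.
```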
Procedia PDF Downloads 156
356 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications like healthcare, to gather patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems which focus on 'what was said', it is equally important to understand 'how it was said.' There are certain emotions which are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised Machine Learning models, including state of the art Deep Learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computation is the limited amount of annotated data. The existing labelled emotions datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNN’s treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependant categorical accuracy for SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim’s cube, which is a 3-dimensional projection of emotions. Monoamine neurotransmitters are a type of chemical messengers in the brain that transmits signals on perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three component PCA (Principal Component Analysis) which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
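A minimal sketch of the idea of treating MFCC features as a 2-D "image" for a CNN is given below; the layer sizes, the choice of PyTorch and the seven emotion classes (the number of categories in Emo-DB) are assumptions, since the abstract does not specify the Emo-CNN architecture.

```python
# Minimal sketch of a CNN over MFCC features treated as a 2-D "image", in the spirit
# of the Emo-CNN idea. The layer sizes, the 7 emotion classes and the use of PyTorch
# are assumptions; the paper does not specify this architecture.
import torch
import torch.nn as nn

class EmoCNNSketch(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):          # x: (batch, 1, n_mfcc, n_frames)
        return self.classifier(self.features(x))

# Example: a batch of 8 clips, 40 MFCC coefficients x 200 frames.
logits = EmoCNNSketch()(torch.randn(8, 1, 40, 200))
print(logits.shape)  # torch.Size([8, 7])
```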
Procedia PDF Downloads 154
355 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport
Authors: Surupa Shaw, Debjyoti Banerjee
Abstract:
Enhancing heat transfer in compact volumes is a challenge when constrained by cost issues, especially those associated with requirements for minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to development of chip architectures that involve increased power consumption. As a consequence packaging, technologies are saddled with needs for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values for dissipation and the significant decrease in the size of the electronic devices are posing thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing surface area for heat exchanging surfaces (e.g., extended surfaces or “fins”) can enable dissipation of higher levels of heat flux. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called “Fractals” (i.e., objects with fractional dimensions; unlike regular geometric objects, such as spheres or cubes whose volumes and surface area values scale as integer values of the length scale dimensions). Fractal structures are expected to provide an appropriate technology solution to meet these challenges for enhanced heat transfer in the microelectronic devices by maximizing surface area available for heat exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of fractal dimension. This study proposes a model that would enable cost-effective solutions for thermal-fluid transport for energy applications. The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient as well as pumping power) to variation in fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the requirement of minimal pumping power for enhancement of heat transfer during cooling. Results obtained in this study show that the proposed models for fractal architectures of microchannels significantly enhanced heat transfer due to augmentation of surface area in the branching networks of varying length-scales.Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement
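The following sketch illustrates, under assumed branching ratios, how the total wetted surface area of a self-similar branching channel network grows with the number of generations; the branching rule and numbers are illustrative only and are not parameters from the study.

```python
# Rough sketch of how total wetted surface area grows in a self-similar branching
# channel network. The branching rule (each channel splits into b children whose
# length and diameter scale by fixed ratios) and the numbers below are illustrative
# assumptions, not values from the study.
import math

def total_surface_area(L0, D0, generations, b=2, length_ratio=0.7, diameter_ratio=0.7):
    """Sum of lateral areas (pi * D * L) of all channels over the generations."""
    area = 0.0
    n, L, D = 1, L0, D0
    for _ in range(generations + 1):
        area += n * math.pi * D * L
        n *= b
        L *= length_ratio
        D *= diameter_ratio
    return area

# A single straight channel vs. a 5-generation branching network of the same root size.
print(total_surface_area(L0=0.05, D0=0.001, generations=0))
print(total_surface_area(L0=0.05, D0=0.001, generations=5))
```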
Procedia PDF Downloads 318
354 Inter-Personal and Inter-Organizational Relationships in Supply Chain Integration: A Resource Orchestration Perspective
Authors: Bill Wang, Paul Childerhouse, Yuanfei Kang
Abstract:
Purpose: The research is to extend resource orchestration theory (ROT) into supply chain management (SCM) area to investigate the dyadic relationships at both individual and organizational levels in supply chain integration (SCI). Also, we try to explore the interaction mechanism between inter-personal relationships (IPRs) and inter-organizational (IORs) during the whole SCI process. Methodology/approach: The research employed an exploratory multiple case study approach of four New Zealand companies. The data was collected via semi-structured interviews with top, middle, and lower level managers and operators from different departments of both suppliers and customers triangulated with company archival data. Findings: The research highlights the important role of both IPRs and IORs in the whole SCI process. Both IPRs and IORs are valuable, inimitable resources but IORs are formal and exterior while IPRs are informal and subordinated. In the initial stage of SCI process, IPRs are seen as key resources antecedents to IOR building while three IPRs dimensions work differently: personal credibility acts as an icebreaker to strengthen the confidence forming IORs, and personal affection acts as a gatekeeper, whilst personal communication expedites the IORs process. In the maintenance and development stage, IORs and IPRs interact each other continuously: good interaction between IPRs and IORs can facilitate SCI process while the bad interaction between IPRs can damage the SCI process. On the other hand, during the life-cycle of SCI process, IPRs can facilitate the formation, development of IORs while IORs development can cultivate the ties of IPRs. Out of the three dimensions of IPRs, Personal communication plays a more important role to develop IORs than personal credibility and personal affection. Originality/value: This research contributes to ROT in supply chain management literature by highlighting the interaction of IPRs and IORs in SCI. The intangible resources and capabilities of three dimensions of IPRs need to be orchestrated and nurtured to achieve efficient and effective IORs in SCI. Also, IPRs and IORs need to be orchestrated in terms of breadth, depth, and life-cycle of whole SCI process. Our study provides further insight into the rarely explored inter-personal level of SCI. Managerial implications: Our research provides top management with further evidence of the significance roles of IPRs at different levels when working with trading partners. This highlights the need to actively manage and develop these soft IPRs skills as an intangible competitive resource. Further, the research identifies when staff with specific skills and connections should be utilized during the different stages of building and maintaining inter-organizational ties. More importantly, top management needs to orchestrate and balance the resources of IPRs and IORs.Keywords: case study, inter-organizational relationships, inter-personal relationships, resource orchestration, supply chain integration
Procedia PDF Downloads 233
353 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates
Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau
Abstract:
There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between the passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence ‘feeling of time’ and ‘estimation time’ differently, contributing to the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly slower on a Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJ and duration estimation (r = .27, p <.001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow feeling condition and overestimate a ‘fast feeling condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.Keywords: duration estimates, long durations, passage of time judgements, task demands
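A sketch of the RATIO transformation and the between-subjects ANOVA described above is given below, using invented data and the statsmodels formula interface; the column names and values are assumptions for illustration.

```python
# Sketch of the RATIO transformation and a between-subjects ANOVA like the one
# described above, using hypothetical data; the column names, values and the use
# of statsmodels are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "estimate_min": [10.7, 11.5, 9.0, 8.5, 44.0, 46.0, 41.0, 43.0],
    "actual_min":   [8, 8, 8, 8, 58, 58, 58, 58],
    "paradigm":     ["prospective", "prospective", "retrospective", "retrospective",
                     "prospective", "prospective", "retrospective", "retrospective"],
    "duration":     ["short", "short", "short", "short", "long", "long", "long", "long"],
})
df["ratio"] = df["estimate_min"] / df["actual_min"]  # ratio > 1 means overestimation

model = smf.ols("ratio ~ C(paradigm) * C(duration)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```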
Procedia PDF Downloads 130
352 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN
Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran
Abstract:
In the view of Cognitive Linguistics (CL), knowledge and experience of things and events are used by human beings in expressing concepts, especially in their daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, in that CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought, of course, has been addressed by many scholars. CL, however, strongly emphasizes specific features of this relation. By experiencing, we receive knowledge of our lives. Familiar, concrete things serve as ideal source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper refers to applying this theory to pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang, Viet Nam. We conducted the study with two groups of third-year students studying English pragmatics lessons. To clarify this study, the data from these two classes were collected for analyzing linguistic perspectives in the view of CL and traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. One group was taught how to transfer the meanings of expressions in daily life with the view of CL, and the other group used the traditional view. The research indicated that both ways had a significant influence on students' English translating and interpreting abilities. However, the traditional way had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared CL and traditional teaching approaches to identify benefits and challenges associated with incorporating CL into the curriculum. It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding and explore how CL can enhance cognitive processes in language learning in general and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.
Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS
Procedia PDF Downloads 36
351 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser
Authors: Edoardo Schinco
Abstract:
Studying the Marxist intellectual tradition, it is possible to verify that there were numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and previous misunderstandings: one of the most important examples of this tendency is related to the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, which is only due to ignorance or the inability of subjects to understand the truth. From this point of view, the consequent and immediate practices against every form of ideology are rational dialogue and reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists, despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology – in the negative meaning – is surely an error, a misleading knowledge, which aims to defend the current state of things and to conceal social, political or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, he underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. Therefore, there is a considerable revaluation of ideology's role in the maintenance of the status quo and the consequent thematization of both ideology as an objective force, active in history, and ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also gives his contribution to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISA). In addition to this, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.
Keywords: Althusser, enlightenment, Gramsci, ideology
Procedia PDF Downloads 199
350 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [tonic accent languages] are not suitable and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows the non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, according to Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its corresponding word in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theorization led to the design of a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: music, language, entanglement, science, research
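As a toy illustration of the dictionary's core idea (each word written on a musical staff), the sketch below maps tone-marked syllables to pitches; the tone inventory, syllables and MIDI pitch choices are invented for illustration and are not taken from the Ekang material.

```python
# Toy illustration only: map tone marks of syllables to pitches, in the spirit of
# "each word written on a musical staff". The tone inventory, the pitch choices and
# the MIDI numbers are invented and are not taken from the Ekang dictionary.
TONE_TO_MIDI = {"H": 67, "M": 64, "L": 60}   # high, mid, low -> G4, E4, C4

def word_to_melody(syllable_tones):
    """Return (syllable, MIDI note) pairs for a sequence of (syllable, tone) pairs."""
    return [(syl, TONE_TO_MIDI[tone]) for syl, tone in syllable_tones]

print(word_to_melody([("e", "H"), ("ka", "L"), ("ng", "M")]))
```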
Procedia PDF Downloads 69
349 A Markov Model for the Elderly Disability Transition and Related Factors in China
Authors: Huimin Liu, Li Xiang, Yue Liu, Jing Wang
Abstract:
Background: As a typical case among the developing countries that are stepping into the global aging era, China faces a situation in which more and more older people may be unable to maintain a normal life due to functional disability. While the government makes efforts to build a long-term care system and to carry out related policies around this core concept, there is still a lack of strong evidence for evaluating the profile of disability states in the elderly population and their transition rates. It has been shown that disability is a dynamic condition of the person rather than an irreversible one, so it is possible to intervene in a timely manner for those who might be at risk of severe disability. Objective: The aim of this study was to depict the picture of disability transition status among older people in China, and then to identify individual characteristics that change the state of disability, in order to provide a theoretical basis for disability prevention and early intervention among elderly people. Methods: Data for this study came from the 2011 baseline survey and the 2013 follow-up survey of the China Health and Retirement Longitudinal Study (CHARLS). Normal ADL function, 1~2 ADLs disability, 3 or more ADLs disability, and death were defined as states 1 to 4. A multi-state Markov model was applied, and the four-state homogeneous model with discrete states and discrete times, based on the two-visit follow-up data, was constructed to explore factors for the various progressive stages. We modeled the effect of explanatory variables on the rates of transition by using a proportional intensities model with covariates such as gender. Results: In the total sample, the constituent ratio of state 2 is nearly 17.0%, while the proportion of state 3 is below the former, accounting for 8.5%. Moreover, the difference in ADL disability statistics between the two years is not obvious. About half of those in state 2 in 2011 improved to become normal in 2013, even though they grew older. However, the proportion of state 3 transferring to death increased obviously, close to the proportion returning to state 2 or normal function. From the estimated intensities, we see that older people are eleven times as likely to develop 1~2 ADLs disability as to die. After disability onset (state 2), progression to state 3 is 30% more likely than recovery. Once in state 3, a mean of 0.76 years is spent before death or recovery. In this model, a typical person in state 2 has a probability of 0.5 of being disability-free one year from now, while a person with moderate or greater disability has a probability of 0.14 of being dead. Conclusion: In view of long-term care costs, preventive programs to delay the disability progression of the elderly could be adopted based on the current disability state and the main factors of each stage. In general terms, those targeting elderly individuals who are moderately or more severely disabled should come first.
Keywords: Markov model, elderly people, disability, transition intensity
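A sketch of the multi-state machinery described above is given below: a transition intensity matrix Q for the four states and the transition probability matrix P(t) = expm(Qt). The numerical intensities are illustrative assumptions loosely guided by the statements in the abstract, not the fitted CHARLS values.

```python
# Sketch of the multi-state machinery: build a transition intensity matrix Q for the
# four states (1 normal, 2 mild, 3 moderate/severe, 4 death) and obtain transition
# probabilities over t years as P(t) = expm(Q t). The numerical intensities below are
# illustrative assumptions, loosely guided by the abstract (e.g. state 1 is ~11 times
# more likely to move to state 2 than to death, and the mean sojourn time in state 3
# is about 0.76 years); they are not the fitted values.
import numpy as np
from scipy.linalg import expm

Q = np.array([
    [-0.12,  0.11,  0.00,  0.01],   # state 1: onset 11x more likely than death
    [ 0.40, -0.75,  0.25,  0.10],   # state 2: recovery, progression, death
    [ 0.00,  0.60, -1.32,  0.72],   # state 3: total exit rate ~ 1/0.76 per year
    [ 0.00,  0.00,  0.00,  0.00],   # state 4 (death) is absorbing
])

P_1year = expm(Q * 1.0)             # one-year transition probability matrix
print(np.round(P_1year, 3))
print("P(state 2 -> state 1 after 1 year) =", round(P_1year[1, 0], 3))
```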
Procedia PDF Downloads 290
348 Mesoporous BiVO4 Thin Films as Efficient Visible Light Driven Photocatalyst
Authors: Karolina Ordon, Sandrine Coste, Malgorzata Makowska-Janusik, Abdelhadi Kassiba
Abstract:
Photocatalytic processes play a key role in the production of a new source of energy (such as hydrogen), in the design of self-cleaning surfaces and in environmental preservation. The most challenging task deals with the purification of water with high efficiency. In the mentioned process, organic pollutants in solution are decomposed into simple, non-toxic compounds such as H2O and CO2. The best-known photocatalytic materials are ZnO, CdS and TiO2 semiconductors, with a particular involvement of TiO2 as an efficient photocatalyst even with a high band gap equal to 3.2 eV, which exploits only the UV radiation of the solar spectrum. However, a promising material with visible-light-induced photoactivity was sought in the monoclinic polytype of BiVO4, which has an energy gap of about 2.4 eV. As required in heterogeneous photocatalysis, a high contact surface is needed. Also, BiVO4 as a photocatalyst can be optimized by increasing its surface area through the synthesis of a mesoporous structure. The main goal of the present work consists in the synthesis and characterization of BiVO4 mesoporous thin films. The sol-gel-based synthesis was carried out using standard surfactants such as P123 and F127. The thin film was deposited by spin and dip coating methods. Then, the structural analysis of the obtained material was performed by X-ray diffraction (XRD) and Raman spectroscopy. The surface of the resulting structure was investigated using scanning electron microscopy (SEM). Computer simulations based on modeling the optical and electronic properties of bulk BiVO4 by using DFT (density functional theory) methodology were carried out. The semiempirical parameterized method PM6 was used to compute the physical properties of BiVO4 nanostructures. The Raman and IR absorption spectra were also measured for the synthesized mesoporous material, and the results were compared with the theoretical predictions. The simulations of nanostructured BiVO4 have pointed out the occurrence of quantum confinement for nanosized clusters, leading to a widening of the band gap. This result undermines the relevance of nanosized objects for harvesting a wide part of the solar spectrum. Also, a balance was sought experimentally through the mesoporous nature of the films, devoted to enhancing the contact surface as required for heterogeneous catalysis, without lowering the nanocrystallite size below the critical sizes that induce an increased band gap. The present contribution will discuss the relevant features of the mesoporous films with respect to their photocatalytic responses.
Keywords: bismuth vanadate, photocatalysis, thin film, quantum-chemical calculations
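As a back-of-the-envelope illustration of the quantum-confinement trend noted in the simulations, the sketch below applies a Brus-type effective-mass estimate of band-gap widening with decreasing crystallite radius; the effective masses and the neglect of the Coulomb term are crude assumptions, with only the 2.4 eV bulk gap taken from the abstract.

```python
# Back-of-the-envelope illustration of the quantum-confinement trend: a Brus-type
# effective-mass estimate of how the band gap widens as the crystallite radius
# shrinks. The bulk gap of 2.4 eV comes from the abstract; the effective masses and
# the neglect of the Coulomb term are crude assumptions for illustration only.
import numpy as np

HBAR = 1.054571817e-34      # J*s
M_E  = 9.1093837015e-31     # kg
EV   = 1.602176634e-19      # J

def confined_gap(radius_nm, eg_bulk=2.4, m_e_eff=0.9, m_h_eff=0.9):
    """E_g(R) ~ E_g(bulk) + (hbar^2 pi^2 / 2 R^2) * (1/m_e* + 1/m_h*), in eV."""
    R = radius_nm * 1e-9
    shift = (HBAR**2 * np.pi**2 / (2 * R**2)) * (1/(m_e_eff*M_E) + 1/(m_h_eff*M_E))
    return eg_bulk + shift / EV

for r in (1.0, 2.0, 5.0, 20.0):
    print(f"R = {r:5.1f} nm -> E_g ~ {confined_gap(r):.2f} eV")
```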
Procedia PDF Downloads 323
347 Bundling of Transport Flows: Adoption Barriers and Opportunities
Authors: Vandenbroucke Karel, Georges Annabel, Schuurman Dimitri
Abstract:
In the past years, bundling of transport flows, whether or not implemented in an intermodal process, has popped up as a promising concept in the logistics sector. Bundling of transport flows is a process where two or more shippers decide to synergize their shipped goods over a common transport lane. Promoted by the European Commission, several programs have been set up and have shown their benefits. Bundling promises both shippers and logistics service providers economic, societal and ecological benefits. By bundling transport flows and thus reducing truck (or other carrier) capacity, the problems of driver shortage, increased fuel prices, mileage charges and restricted hours of service on the road are solved. In theory, the advantages of bundled transport exceed the drawbacks, however, in practice adoption among shippers remains low. In fact, bundling is mentioned as a disruptive process in the rather traditional logistics sector. In this context, a Belgian company asked iMinds Living Labs to set up a Living Lab research project with the goal to investigate how the uptake of bundling transport flows can be accelerated and to check whether an online data sharing platform can overcome the adoption barriers. The Living Lab research was conducted in 2016 and combined quantitative and qualitative end-user and market research. Concretely, extensive desk research was conducted and combined with insights from expert interviews with four consultants active in the Belgian logistics sector and in-depth interviews with logistics professionals working for shippers (N=10) and LSP’s (N=3). In the article, we present findings which show that there are several factors slowing down the uptake of bundling transport flows. Shippers are hesitant to change how they currently work and they are hesitant to work together with other shippers. Moreover, several practical challenges impede shippers to work together. We also present some opportunities that can accelerate the adoption of bundling of transport flows. First, it seems that there is not enough support coming from governmental and commercial organizations. Secondly, there is the chicken and the egg problem: too few interested parties will lead to no or very few matching lanes. Shippers are therefore reluctant to partake in these projects because the benefits have not yet been proven. Thirdly, the incentive is not big enough for shippers. Road transport organized by the shipper individually is still seen as the easiest and cheapest solution. A solution for the abovementioned challenges might be found in the online data sharing platform of the Belgian company. The added value of this platform is showing shippers possible matching lanes, without the shippers having to invest time in negotiating and networking with other shippers and running the risk of not finding a match. The interviewed shippers and experts indicated that the online data sharing platform is a very promising concept which could accelerate the uptake of bundling of transport flows.Keywords: adoption barriers, bundling of transport, shippers, transport optimization
Procedia PDF Downloads 200
346 Results and Insights from a Developmental Psychology Study on the Presentation of Juvenility in Punk Fanzines
Authors: Marc Dietrich
Abstract:
Youth cultures like Punk as much as media relevant to the specific scenes associated with them offer ample opportunity for young people or juvenile adults to construct their personal identities. However, developmental psychology has largely neglected such identity construction processes during the last decades. Such was not always the case: Early developmental psychologists intensely studied youth cultures and their meaningful objects and media in the early 20th century but lost interest when cultural studies and the social sciences occupied the field after World War II. Our project Constructions of Juvenility and Generation(ality), funded by the German Federal Ministry for Education and Research, reintegrates the study of youth cultures and their meaningful objects and media in a developmental psychology perspective. We present an empirical study of the ways in which youth, juvenility, and generation (ality) are constructed and negotiated in underground media like punk fanzines (a portmanteau of fan and magazine), including both semantic and aesthetic aspects of these construction processes within punk culture. The fanzine sample was accessed by the theoretical sampling strategy typical for GTM studies. Acknowledging fanzines as artful self-produced media by scene members for scene members, we conceptualize them as authentic documents of scene norms and values. Drawing on an analysis of both text and (cover) images in Punk fanzines published in Germany (and within a sample dating from 1981 until 2015) using a novel Visual Grounded Theory approach, we found that: a) Juvenility is a highly contested concept in punk culture. Its semantic quality and valuation varies with the perspectives present within the culture (e.g. embryo punks versus older punks); b) Juvenility is constructed as having energy and being socio-critical that does not depend on biological age; c) Juvenility is regarded not an ideal per se in German Punk culture; Punk culture constructs old age in a largely positive way (e.g., as marker of being real and a historical innovator); d) Juvenility is constructed as a habit that should be kept for life as it is constantly adapted to individual biographical trajectories like specific job situations or having a family. Consequently, identity negotiation as documented in the zines attempts to balance subculturally driven perspectives on life and society with the pragmatic requirements of a bourgeois life. The proposed paper will present the main results of this large-scale study of punk fanzines and show how developmental psychology perspectives as represented in the novel methodology applied in it can advance the study of youth cultures.Keywords: construction of juvenility, developmental psychology, visual GTM, youth culture, fanzines
Procedia PDF Downloads 292
345 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of the space missions are summarized, in particular, taking into account the errors, uncertainties, and constraints imposed by the mission, spacecraft and, onboard processing capabilities. The summary of space mission errors and uncertainties provided in categories; initial condition errors, unmodeled disturbances, sensor, and actuator errors. These previous constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed regarding the required computation time and the impact of sensor and actuator errors based on the Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios’ are also presented in the scope of space applications as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design: the prediction model, the constraints formulation and the objective cost function are discussed. The prediction models can be linear time invariant or time varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, the recent convexification techniques for the non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real-time the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented towards representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiments outputs. The HIL tests are investigated for kinematic and dynamic tests where robotic arms and floating robots are used respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that MPC paradigm is a promising framework at the crossroads of space applications while could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gap.Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
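A generic sketch of the kind of constrained MPC problem summarized above is shown below, using a double-integrator prediction model, box constraints on the control and a quadratic cost; the model, horizon, weights and the use of cvxpy are illustrative assumptions rather than a mission design.

```python
# Generic sketch of one constrained MPC problem of the type summarized above: a linear
# (double-integrator) prediction model, box constraints on thrust, and a quadratic
# cost driving the relative position/velocity to zero. The model, horizon and weights
# are illustrative assumptions; cvxpy is one possible solver front end.
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                               # time step [s], prediction horizon
A = np.array([[1, dt], [0, 1]])               # 1-D double integrator (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])
x0 = np.array([100.0, -1.0])                  # initial relative position and velocity

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k + 1], np.diag([1.0, 10.0])) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 0.5]   # actuator (thrust) limit
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", u.value[:, 0])
```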
Procedia PDF Downloads 110
344 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model
Authors: S. I. Mukhin, S. Seidov, A. Mukherjee
Abstract:
The Dicke model is a key tool for the description of correlated states of quantum atomic systems excited by resonant photon absorption and subsequently emitting spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson Junction (JJ) array in a resonant cavity under applied current. In this work, we have investigated a generalized model, which is described by the DH with a frustrating interaction term. This frustrating interaction term is explicitly the infinitely coordinated interaction between all the spin-1/2 entities in the system. In this work, we consider an array of N superconducting islands, each divided into two sub-islands by a Josephson Junction, taken in a charge qubit / Cooper Pair Box (CPB) condition. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved in the system, such as the condensed electric field and the dipole moment. It is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. We have used Heisenberg's dynamical equations for the operators, and on applying the newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we have arrived at four coupled nonlinear dynamical differential equations for the momentum and spin component operators. It is possible to solve the system analytically using two time scales. The analytical solutions are expressed in terms of Jacobi's elliptic functions for the metastable 'bound luminosity' dynamic state with the periodic coherent beating of the dipoles that connects the two doubly degenerate dipolar ordered phases discovered previously. In this work, we have proceeded with the analysis of the extended DH with a frustrating interaction term. Inclusion of the frustrating term adds complexity to the system of differential equations, which becomes difficult to solve analytically. We have solved the semi-classical dynamic equations using a perturbation technique for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found if this symmetry is broken. Introducing a spontaneous symmetry-breaking term in the DH, we have derived solutions which show the occurrence of a finite condensate, indicating a quantum phase transition. Our results match the existing results in this scientific field.
Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity
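One common way to write a Dicke Hamiltonian extended by an infinitely coordinated spin-spin ("frustrating") term is sketched below in LaTeX; the precise form and notation used by the authors may differ, so this is an assumed illustration only.

```latex
% One common way to write a Dicke Hamiltonian extended by an infinitely coordinated
% ("frustrating") spin-spin term; the precise form and notation used by the authors
% may differ, so this is an assumed illustration.
\begin{equation}
  \hat{H} \;=\; \hbar\omega\,\hat{a}^{\dagger}\hat{a}
  \;+\; \varepsilon \sum_{i=1}^{N} \hat{\sigma}^{z}_{i}
  \;+\; \frac{\lambda}{\sqrt{N}}\,\bigl(\hat{a}^{\dagger}+\hat{a}\bigr)
        \sum_{i=1}^{N} \hat{\sigma}^{x}_{i}
  \;+\; \frac{J}{N}\Bigl(\sum_{i=1}^{N} \hat{\sigma}^{x}_{i}\Bigr)^{2},
\end{equation}
% where the photon mode \(\hat{a}\) describes the cavity field, the pseudo-spins
% \(\hat{\sigma}_i\) describe the N Cooper-pair boxes, \(\lambda\) is the
% cavity-junction coupling, and the last term is the all-to-all frustrating interaction.
```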
Procedia PDF Downloads 134
343 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries
Authors: Fang Li, Jiazhao Wang, Jianmin Ma
Abstract:
The practical application of Li-S batteries is hampered due to poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities such as strong chemical adsorption stability and high conductivity are highly desired for an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), as a conductive polymer, was widely studied as matrixes for sulfur cathode due to its high conductivity and strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole coated separator was designed for flexible Li-S batteries. The PPy materials show strong interaction with dissoluble polysulfides, which could suppress the shuttle effect and improve the cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of sulfur materials and restrain the volume expansion, enhancing the structural stability during the cycling process. For further enhancing the cycling stability, a PPy coated separator was also applied, which could make polysulfides into the cathode side to alleviate the shuttle effect. Moreover, the PPy layer coated on commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery has been designed and fabricated for testing the practical application of the designed cathode and separator, which could power a device consisting of 24 light-emitting diode (LED) lights. Moreover, the soft-packaged flexible battery can still show relatively stable cycling performance after repeated bending, indicating the potential application in flexible batteries. A novel vapor phase deposition method was also applied to prepare uniform polypyrrole layer coated sulfur/graphene aerogel composite. The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfides dissolution through strong chemical interaction. The density functional theory (DFT) calculations reveal that the polypyrrole could trap lithium polysulfides through stronger bonding energy. In addition, the deflation of sulfur/graphene hydrogel during the vapor phase deposition process enhances the contact of sulfur with matrixes, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole coated sulfur/graphene aerogel composite delivers a specific discharge capacity of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C respectively. The capacity can maintain at 698 mAh g⁻¹ at 0.5 C after 500 cycles, showing an ultra-slow decay rate of 0.03% per cycle.Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries
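As a quick consistency check of the cited decay rate (an editorial back-calculation, not data from the paper), the snippet below infers the implied initial 0.5 C capacity from the reported 698 mAh g⁻¹ after 500 cycles at 0.03% decay per cycle.

```python
# Quick consistency check (our own back-calculation, not data from the paper): if the
# cell retains 698 mAh/g after 500 cycles at 0.5 C with a decay of 0.03% per cycle,
# the implied initial 0.5 C capacity is roughly 810-820 mAh/g, depending on whether
# the decay is treated as compounding or linear.
final_capacity, cycles, decay_per_cycle = 698.0, 500, 0.0003

initial_compounding = final_capacity / (1 - decay_per_cycle) ** cycles
initial_linear = final_capacity / (1 - decay_per_cycle * cycles)
print(f"implied initial capacity (compounding): {initial_compounding:.0f} mAh/g")
print(f"implied initial capacity (linear):      {initial_linear:.0f} mAh/g")
```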
Procedia PDF Downloads 140
342 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first order and second order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, a RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of a RVE gives insight into fine scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affect the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to appropriately deal with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently in all considered scales. Although in many multiscale studies a planar condition has been employed, the related impact on the multiscale solution has not been explicitly investigated. This work therefore focuses on the influence of the planar assumption for multiscale modeling. In particular the plane stress case is highlighted, by proposing three different implementation strategies which are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces are equal to zero. These strategies are assessed through a numerical study of a thin walled structure and the resulting effective macroscale stress-strain response is compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.Keywords: first-order computational homogenization, planar analysis, multiscale, microstrucutures
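The third strategy (zero macroscopic out-of-plane force) can be illustrated by statically condensing the out-of-plane components out of a 3-D stiffness matrix; the sketch below does this for an isotropic material in Voigt notation, an assumption made purely for illustration, and recovers the textbook plane-stress stiffness.

```python
# Sketch of enforcing plane stress by static condensation: require the out-of-plane
# stresses to vanish and eliminate the corresponding strain components from a 3-D
# stiffness matrix. The isotropic material and the Voigt ordering (11, 22, 33, 23,
# 13, 12) are illustrative assumptions, not the constitutive model of the study.
import numpy as np

E, nu = 10.0e9, 0.3                       # Young's modulus [Pa], Poisson ratio
lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # Lame parameters
mu = E / (2 * (1 + nu))

C = np.zeros((6, 6))
C[:3, :3] = lam
C[np.diag_indices(3)] = lam + 2 * mu
C[3:, 3:] = np.diag([mu, mu, mu])         # engineering shear strains

ip, op = [0, 1, 5], [2, 3, 4]             # in-plane / out-of-plane Voigt indices
C_ps = (C[np.ix_(ip, ip)]
        - C[np.ix_(ip, op)] @ np.linalg.inv(C[np.ix_(op, op)]) @ C[np.ix_(op, ip)])

# Should match the textbook plane-stress matrix E/(1-nu^2)*[[1,nu,0],[nu,1,0],[0,0,(1-nu)/2]].
C_ref = E / (1 - nu**2) * np.array([[1, nu, 0], [nu, 1, 0], [0, 0, (1 - nu) / 2]])
print(np.allclose(C_ps, C_ref))           # True
```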
Procedia PDF Downloads 233
341 Knowledge Management Processes as a Driver of Knowledge-Worker Performance in Public Health Sector of Pakistan
Authors: Shahid Razzaq
Abstract:
Governments around the globe have started taking knowledge management dynamics into consideration while formulating, implementing, and evaluating strategies, with or without conscious realization, for different public sector organizations and public policy developments. The Health Department of Punjab province in Pakistan is striving to deliver quality healthcare services to the community through an efficient and effective service delivery system. Despite this effort, some employee performance issues still exist and pose a challenge to the government. To overcome these issues, the department took several steps, including HR strategies, the use of technologies and a focus on hard issues. Consequently, this study attempts to highlight the importance of a soft issue, namely knowledge management in its true essence, in tackling these performance issues. Knowledge management in the public sector is a rather neglected area within knowledge management, a growing multidisciplinary research discipline. The knowledge-based view of the firm asserts that knowledge is the most strategically important resource, one that can result in competitive advantage for an organization over other competing organizations. In the context of our study, this means that in order to improve employee performance, organizations have to enlarge their heterogeneous knowledge bases. The study uses a cross-sectional, quantitative research design. The data are collected from the knowledge workers of the Health Department of Punjab, the biggest province of Pakistan. A total sample size of 341 is achieved. SmartPLS 3 Version 2.6 is used for analyzing the data. The data examination revealed that knowledge management processes have a strong impact on knowledge worker performance. All hypotheses are accepted according to the results. Therefore, it can be summed up that to increase employee performance, knowledge management activities should be implemented. The Health Department within the province of Punjab introduced the knowledge management infrastructure and systems to make knowledge effectively available to the service staff. This knowledge management infrastructure resulted in an increase in knowledge management processes in different remote hospitals, basic health units and care centers, which resulted in greater service provision to the public. This study has theoretical and practical significance. In terms of theoretical contribution, this study establishes the relationship between knowledge management and performance for the first time. In terms of practical contribution, this study gives an insight to public sector organizations and government about the role of knowledge management in employee performance. Therefore, public policymakers are strongly advised to implement knowledge management activities to enhance the performance of knowledge workers. The current research validated the substantial role of knowledge management in shaping employee attitudes and behavioral objectives. To the best of the authors' knowledge, this study's contribution to the impact of knowledge management on employee performance constitutes its originality.
Keywords: employee performance, knowledge management, public sector, soft issues
Procedia PDF Downloads 141
340 Self-Education, Recognition and Well-Being Insights into Qualitative-Reconstructive Educational Research on the Value of Non-formal Education in the Adolescence
Authors: Sandra Biewers Grimm
Abstract:
International studies such as Pisa have shown an increasing social inequality in the education system, which is determined in particular by social origin and migration status. This is especially the case in the Luxembourg school system, which creates challenges for many young people due to the multilingualism in the country. While the international and also the national debate on education in the immediate aftermath of the publications of the Pisa results mainly focused on the further development of school-based learning venues and formal educational processes, it initially remained largely unclear what role exactly out-of-school learning venues and non-formal and informal learning processes could play in this further development. This has changed in the meantime. Both in the political discourses and in the scientific disciplines, those voices have become louder that draw attention to the important educational function and the enormous educational potential of out-of-school learning places as a response to the crisis of the formal education system and more than this. Youth work as an actor and approach of non-formal education is particularly in demand here. Due to its principles of self-education, participation and openness, it is considered to have a special potential in supporting the acquisition of important key competencies. In this context, the study "Educational experiences in non-formal settings" at CCY takes a differentiated look behind the scenes of education-oriented youth work and describes on the basis of empirical data what and how young people learn in youth centers and which significance they attach to these educational experiences for their subjective life situation. In this sense, the aim of the study is to reconstruct the subjective educational experiences of young people in Open Youth Work as well as to explore the value that these experiences have for young people. In doing so, it enables scientifically founded conclusions about the educational potential of youth work from the user's perspective. Initially, the study focuses on defining the concept of education in the context of non-formal education and thus sets a theoretical framework for the empirical analysis. This socio-educational term of education differs from the relevant conception of education in curricular, formal education as the acquisition of knowledge. It also differs from the operationalization of education as competence, or the differentiation into cultural, social and personal or into factual, social or methodological competence, which is often used in the European context and which has long been interpreted as a "social science reading of the question of education" (XX). Now the aim is to define a "broader" concept of education that goes beyond the normative and educational policy dimensions of a "non-formal education" and includes the classical socio-educational dimensions. Furthermore, the study works with different methods of empirical social research: In addition to ethnographic observation and an online survey, group discussions were conducted with the young people. The presentation gives an insight into the context, the methodology and the results of this study.Keywords: non-formal education, youth research, qualitative research, educational theory
Procedia PDF Downloads 163339 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research is related to the development of a recently proposed technique for the formation of composite materials, such as optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained at the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics that provide given properties of the crystalline component, in particular the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulations that allow these parameters to be determined. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at no fewer than two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubical samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all the above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters that provided a reliable coincidence of the simulated and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine-tuning the glass-ceramics structure, namely the concentration and average size of the crystalline grains.Keywords: diffusion, glass-ceramics, ion exchange, vitrification
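The fitting procedure described above, simulating grain-size profiles for different values of β and γ and selecting the pair that best matches the measured profiles, can be sketched as follows. The forward model below is a deliberately simplified toy (an erfc diffusion profile driving first-order grain shrinkage), not the authors' theoretical model, and all numerical values are illustrative assumptions.

```python
# Minimal sketch of the parameter-fitting logic: choose (beta, gamma) so that
# simulated grain-size profiles best match the measured ones at two processing
# times. The forward model is a simplified toy, NOT the authors' model.
import numpy as np
from scipy.special import erfc

x = np.linspace(0, 300e-6, 61)       # depth below the surface, m (assumed)
D = 1e-14                            # assumed ion-exchange diffusivity, m^2/s
times = [16 * 3600, 48 * 3600]       # two processing times, s (cf. 520 °C runs)
R0 = 50e-9                           # assumed initial mean grain radius, m

def grain_profile(beta, gamma, t):
    """Toy forward model: grains shrink where the exchanged-ion concentration
    exceeds a solubility threshold set by beta; gamma scales how fast
    dissolution proceeds relative to the reference processing time."""
    c = erfc(x / (2.0 * np.sqrt(D * t)))            # normalized concentration
    shrink = np.clip(c - 1.0 / (1.0 + beta), 0.0, None)
    return R0 * np.exp(-gamma * shrink * t / max(times))

# "Measured" profiles: synthesized here from known parameters plus noise,
# standing in for the micro-Raman grain-size data.
rng = np.random.default_rng(1)
beta_true, gamma_true = 4.0, 2.0
measured = [grain_profile(beta_true, gamma_true, t)
            + rng.normal(0, 1e-9, x.size) for t in times]

# Grid search for the best-fitting (beta, gamma).
betas, gammas = np.linspace(1, 8, 36), np.linspace(0.5, 5, 46)
best, best_err = None, np.inf
for b in betas:
    for g in gammas:
        err = sum(np.sum((grain_profile(b, g, t) - m) ** 2)
                  for t, m in zip(times, measured))
        if err < best_err:
            best, best_err = (b, g), err
print("fitted (beta, gamma):", best)
```

Repeating the same fit at each temperature would give the temperature dependences of the parameters mentioned in the abstract.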
Procedia PDF Downloads 269338 The Association of Southeast Asian Nations (ASEAN) and the Dynamics of Resistance to Sovereignty Violation: The Case of East Timor (1975-1999)
Authors: Laura Southgate
Abstract:
The Association of Southeast Asian Nations (ASEAN), as well as much of the scholarship on the organisation, celebrates its ability to uphold the principle of regional autonomy, understood as upholding the norm of non-intervention by external powers in regional affairs. Yet, in practice, this has been repeatedly violated. This dichotomy between rhetoric and practice suggests an interesting avenue for further study. The East Timor crisis (1975-1999) has been selected as a case-study to test the dynamics of ASEAN state resistance to sovereignty violation in two distinct timeframes: Indonesia’s initial invasion of the territory in 1975, and the ensuing humanitarian crisis in 1999 which resulted in a UN-mandated, Australian-led peacekeeping intervention force. These time-periods demonstrate variation on the dependent variable. It is necessary to observe covariation in order to derive observations in support of a causal theory. To establish covariation, my independent variable is therefore a continuous variable characterised by variation in convergence of interest. Change of this variable should change the value of the dependent variable, thus establishing causal direction. This paper investigates the history of ASEAN’s relationship to the norm of non-intervention. It offers an alternative understanding of ASEAN’s history, written in terms of the relationship between a key ASEAN state, which I call a ‘vanguard state’, and selected external powers. This paper will consider when ASEAN resistance to sovereignty violation has succeeded, and when it has failed. It will contend that variation in outcomes associated with vanguard state resistance to sovereignty violation can be best explained by levels of interest convergence between the ASEAN vanguard state and designated external actors. Evidence will be provided to support the hypothesis that in 1999, ASEAN’s failure to resist violations to the sovereignty of Indonesia was a consequence of low interest convergence between Indonesia and the external powers. Conversely, in 1975, ASEAN’s ability to resist violations to the sovereignty of Indonesia was a consequence of high interest convergence between Indonesia and the external powers. As the vanguard state, Indonesia was able to apply pressure on the ASEAN states and obtain unanimous support for Indonesia’s East Timor policy in 1975 and 1999. However, the key factor explaining the variance in outcomes in both time periods resides in the critical role played by external actors. This view represents a serious challenge to much of the existing scholarship that emphasises ASEAN’s ability to defend regional autonomy. As these cases attempt to show, ASEAN autonomy is much more contingent than portrayed in the existing literature.Keywords: ASEAN, east timor, intervention, sovereignty
Procedia PDF Downloads 358337 The Structuring of Economic of Brazilian Innovation and the Institutional Proposal to the Legal Management for Global Conformity to Treat the Technological Risks
Authors: Daniela Pellin, Wilson Engelmann
Abstract:
Brazil has sought to accelerate its development through technology and innovation as a response to global influences, which it has absorbed into internal management practices. To this end, it enacted the Brazilian Law of Innovation 13.243/2016. However, the Law overestimates economic aspects, and its application will not take the stakeholders and the technological risks into consideration, because these receive no legal treatment. Economic exploitation and technological risks must be controlled by the limits of the democratic system in order to achieve better social development and to help economic agents make decisions that conform with global directions. The research regards this as a problem to be faced, given the social particularities of the country, because the North American Triple Helix theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, adjustments are needed so that the management of the Law also serves social democratic interests and increases the country's development. To this end, the Government will have to adopt practices, side by side with universities, civil society, and companies, that promote informational transparency, the forging of partnerships, the creation of a Comfort Letter document to ensure operation, the joint elaboration of a Manual of Good Practices, accountability, and data dissemination. The universities, in turn, must promote informational transparency, draw up partnership contracts, generate revenue, and develop information. In addition, civil society must analyze the proposals received and discuss them in order to give its opinion. Finally, companies have to provide public and transparent information about investments, economic benefits, risks, and the innovations produced. The general objective of the research is to demonstrate that efficient deployment of the helix will be possible if the innovative decision-making process follows this institutional logic. As a specific objective, the American influence must undergo some modifications to better suit the economic-legal incentives and potentiate the development of the social system. The hypothesis is that an institutional model for application to the legal system can be elaborated on the basis of the emerging characteristics of the country, in such a way that technological risks can be foreseen and global conformity achieved, with attention to the full development of society as proposed by the researchers. The method of approach is systemic-constructivist, with a bibliographical review, data collection, and analysis supporting the construction of the institutional and democratic model for the management of the Law.Keywords: development, governance of law, institutionalization, triple helix
Procedia PDF Downloads 140336 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada
Authors: Amy Buckley
Abstract:
The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. Typically, these strategies have included, inter alia, e- consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinion of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions as just some examples. This article undertakes a case study on Canada, in particular, as it is a jurisdiction less commonly cited in academic literature generally concerned with hyper-rigidity and because Canada has to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem, being that the consequence of constitutional hyper-rigidity is in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada’s e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency’s amendment culture thus reducing constitutional hyper-rigidity and the associated tension that arises with the principles of democratic constitutionalism. 
The idea is to run a case study and then assess whether I can generalise the conclusions.Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism
Procedia PDF Downloads 76335 The Re-Emergence of Russia Foreign Policy (Case Study: Middle East)
Authors: Maryam Azish
Abstract:
Russia, as an emerging global player in recent years, has carved out a special place for itself in the Middle East. Despite all the challenges it has faced over the years, it has always approached its presence in various fields with a strategy that has defined its maneuvering power as a level of competition, and even confrontation, with the United States. Its current approach is therefore considered important, as it is an influential actor in the Middle East. After the collapse of the Soviet Union, when the Russians withdrew completely from the Middle East, the scene remained almost unrivaled for the Americans. With the start of the US-led wars in Iraq and Afghanistan and the subsequent developments that led to the US military and political defeat, a new chapter in regional security was created, in which ISIL and Taliban terrorism went along with the Arab Spring to destabilize the Middle East. Because of this, the Americans took every opportunity to strengthen their military presence. Iraq, Syria, and Afghanistan have always been the three areas where terrorism took shape, and the countries of the region have each reacted to this evil phenomenon accordingly. The West dealt with the phenomenon on a case-by-case basis within the general circumstances that created the fluid situation in the Arab countries and the region. Russian President Vladimir Putin accused the US of falling asleep in the face of ISIS and terrorism in Syria. In fact, this was an opportunity for the Russians to revive their presence in Syria. This article suggests that utilizing the politics of recognition along with constructivist theory offers a better understanding of Russia's endeavors to endorse its international position. Accordingly, Russia's distinctiveness and its ambitions for the status of a great power have played a vital role in shaping national interests and, subsequently, foreign policy, particularly in the Putin era. The focal claim of the paper is that Russia's foreign policy cannot be adequately scrutinized with realist methods alone. Consequently, with an aim to fill the prevailing vacuum, this study exploits the politics of recognition in the context of constructivism to examine Russia's foreign policy in the Middle East. The results of this paper show that the key aim of Russian foreign policy discourse, accompanied by increasing power and wealth, is to have its position as a great power recognized and reinstated in the global system. The Syrian crisis has created an opportunity for Russia to consolidate its position in the developing global and regional order, after ages of dynamic and prevalent presence in the Middle East, as well as to counter US unilateralism. In the meantime, the writer argues that the question of the West's recognition of Russia's position in the global system has played a foremost role in serving its national interests.Keywords: constructivism, foreign policy, Middle East, Russia, regionalism
Procedia PDF Downloads 149334 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which dreams and dangers of modernity augmented with critical speculations of the post-war era take shape. The new era which we are now living in, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore, climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neuron networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users with implications in politics, work, education, and production as apparent in the cases of Amazon workers’ strikes, Donald Trump’s 2016 campaign, Facebook and Microsoft data scandals, and more are often non-transparent to the wide public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined in respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but in a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 152333 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) N-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory
Authors: Belcher Fulele, W. A. A. Ddamba
Abstract:
Study of solvent systems contributes to the understanding of intermolecular interactions that occur in binary mixtures. These interactions involve, among others, strong dipole-dipole interactions and weak van der Waals interactions, which are of significant application in pharmaceuticals, solvent extraction, reactor design, and solvent handling and storage processes. Binary mixtures of solvents can thus be used as a model to interpret the thermodynamic behavior that occurs in a real solution mixture. Densities of pure DFM, n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), as well as their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range, have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes, and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of possible intermolecular interactions and structural effects that occur in the binary mixtures. The variation of excess molar volume with DFM composition for the [DFM + (C5-C7) n-alkane] binary mixtures exhibits sigmoidal behavior, while for the [DFM + n-octane] binary system, positive deviation of the excess molar volume function was observed over the entire composition range. For each of the [DFM + (C5-C8) n-alkane] binary mixtures, the excess molar volume fell with increasing temperature. The excess molar volume for each [DFM + (NMF or NEF or DMF or DMA)] binary system was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in excess molar volume values follow the order: DMA > DMF > NEF > NMF. An increase in temperature has a greater effect on component self-association than on complex formation between molecules of the components in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, which shifts the complex formation equilibrium towards the complex and gives a drop in excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional term and the characteristic pressure term are the most important contributions in describing the sign of the experimental excess molar volume. The mixture systems contribute to the understanding of the interactions of polar solvents with proteins (modeled by amides) and with non-polar solvents (alkanes) in biological systems.Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model
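The excess molar volumes discussed above are derived from the measured mixture and pure-component densities via the standard relation V^E = (x1M1 + x2M2)/ρ − x1M1/ρ1 − x2M2/ρ2, and such data are commonly smoothed with a Redlich-Kister polynomial. The sketch below illustrates both steps with placeholder densities (not the values measured in this work); the molar masses and the assumed DFM/n-heptane pairing are likewise illustrative.

```python
# Minimal sketch (illustrative values, not the measured data of the study):
# excess molar volume V^E of a binary mixture from densities, followed by a
# Redlich-Kister fit, as commonly done for data such as [DFM + n-heptane].
import numpy as np

M1, M2 = 148.16, 100.20   # g/mol: DFM (approx.) and n-heptane (assumed pair)

# Hypothetical data set: DFM mole fraction and mixture density (g/cm^3).
x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
rho_mix = np.array([0.712, 0.786, 0.866, 0.952, 1.046])   # placeholders
rho1, rho2 = 1.090, 0.680                                 # pure densities, placeholders

x2 = 1.0 - x1
V_mix = (x1 * M1 + x2 * M2) / rho_mix          # molar volume of the mixture
V_E = V_mix - x1 * M1 / rho1 - x2 * M2 / rho2  # excess molar volume, cm^3/mol

# Redlich-Kister fit: V^E = x1*x2 * sum_k A_k * (x1 - x2)^k   (here k = 0..2).
k = np.arange(3)
design = x1[:, None] * x2[:, None] * (x1 - x2)[:, None] ** k[None, :]
A, *_ = np.linalg.lstsq(design, V_E, rcond=None)

for xi, ve in zip(x1, V_E):
    print(f"x(DFM) = {xi:.1f}   V^E = {ve:+.3f} cm^3/mol")
print("Redlich-Kister coefficients A0..A2:", np.round(A, 3))
```

The same calculation, repeated at each temperature and composition, yields the V^E surfaces from which the partial and limiting molar quantities listed in the abstract are obtained.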
Procedia PDF Downloads 355332 A Standard-Based Competency Evaluation Scale for Preparing Qualified Adapted Physical Education Teachers
Authors: Jiabei Zhang
Abstract:
Although adapted physical education (APE) teacher preparation programs are available in the nation, a consistent standards-based competency evaluation scale for preparing qualified personnel for teaching children with disabilities in APE cannot be identified in the literature. The purpose of this study was to develop a standards-based competency evaluation scale for assessing qualifications for teaching children with disabilities in APE. Standards-based competencies were reviewed and identified based on research evidence documented as effective in teaching children with disabilities in APE. A standards-based competency scale was developed for assessing qualifications for teaching children with disabilities in APE. This scale includes 20 standards-based competencies and a 4-point Likert-type scale for each competency. The first competency is knowledge of the causes of disabilities and their effects. The second is the ability to assess the physical education skills of children with disabilities. The third is the ability to collaborate with other personnel. The fourth is knowledge of measurement and evaluation. The fifth is an understanding of federal and state laws. The sixth is knowledge of the unique characteristics of all learners. The seventh is the ability to write objectives in behavioral terms. The eighth is knowledge of developmental characteristics. The ninth is knowledge of normal and abnormal motor behaviors. The tenth is the ability to analyze and adapt physical education curricula. The eleventh is an understanding of the history and philosophy of physical education. The twelfth is an understanding of curriculum theory and development. The thirteenth is the ability to utilize instructional designs and plans. The fourteenth is the ability to create and implement physical activities. The fifteenth is the ability to utilize technology applications. The sixteenth is an understanding of the value of program evaluation. The seventeenth is an understanding of professional standards. The eighteenth is knowledge of focused instruction and individualized interventions. The nineteenth is the ability to complete a research project independently. The twentieth is the ability to teach children with disabilities in APE independently. The 4-point Likert-type scale ranges from 1 for incompetent to 4 for highly competent. This scale is used for assessing whether a candidate who has completed all course work is eligible to receive an endorsement for teaching children with disabilities in APE, based on the grades earned in the three courses targeted for each standards-based competency. The mean grade received in the three courses primarily addressing a standards-based competency is marked as a competency level on the above scale: level 4 for a mean grade of A over the three courses, level 3 for a mean grade of B, and so on. One should receive a mean score of 3 (competent) or higher (highly competent) across the 19 standards-based competencies after completing all courses specified for receiving an endorsement for teaching children with disabilities in APE. 
The validity, reliability, and objectivity of this standards-based competency evaluation scale are to be documented.Keywords: evaluation scale, teacher preparation, adapted physical education teachers, and children with disabilities
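Read as a scoring rule, the procedure described above (grades in the three courses addressing a competency averaged onto the 4-point scale, with A mapping to 4, B to 3, and so on, and endorsement requiring a mean of 3 or higher across the 19 standards-based competencies) can be sketched as follows; the transcript and course grades are hypothetical, and reading the threshold as a mean across competencies is an assumption.

```python
# Minimal sketch of the scoring rule described in the abstract: the mean of
# the grades earned in the three courses addressing a competency is mapped to
# the 4-point scale (A -> 4, B -> 3, and so on), and endorsement is read here
# as requiring a mean level of 3 (competent) or higher across the assessed
# competencies. The transcript below is a hypothetical example.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}

def competency_level(grades):
    """Mean grade over the three targeted courses, on the 4-point scale."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

def eligible_for_endorsement(competency_grades):
    """True if the mean level across the assessed competencies is >= 3."""
    levels = [competency_level(g) for g in competency_grades.values()]
    return sum(levels) / len(levels) >= 3.0

# Hypothetical transcript: 19 competencies, each backed by three course grades.
transcript = {f"competency_{i:02d}": ["A", "B", "A"] for i in range(1, 20)}
transcript["competency_07"] = ["B", "C", "B"]   # one weaker competency

print(eligible_for_endorsement(transcript))     # -> True in this example
```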
Procedia PDF Downloads 116331 From Theory to Practice: An Iterative Design Process in Implementing English Medium Instruction in Higher Education
Authors: Linda Weinberg, Miriam Symon
Abstract:
While few institutions of higher education in Israel offer international programs taught entirely in English, many Israeli students today can study at least one content course taught in English during their degree program. In particular, with the growth of international partnerships and opportunities for student mobility, English medium instruction is a growing phenomenon. There are however no official guidelines in Israel for how to develop and implement content courses in English and no training to help lecturers prepare for teaching their materials in a foreign language. Furthermore, the implications for the students and the nature of the courses themselves have not been sufficiently considered. In addition, the institution must have lecturers who are able to teach these courses effectively in English. An international project funded by the European Union addresses these issues and a set of guidelines which provide guidance for lecturers in adapting their courses for delivery in English have been developed. A train-the-trainer approach is adopted in order to cascade knowledge and experience in English medium instruction from experts to language teachers and on to content teachers thus maximizing the scope of professional development. To accompany training, a model English medium course has been created which serves the dual purpose of highlighting alternatives to the frontal lecture while integrating language learning objectives with content goals. This course can also be used as a standalone content course. The development of the guidelines and of the course utilized backwards, forwards and central design in an iterative process. The goals for combined language and content outcomes were identified first after which a suitable framework for achieving these goals was constructed. The assessment procedures evolved through collaboration between content and language specialists and subsequently were put into action during a piloting phase. Feedback from the piloting teachers and from the students highlight the need for clear channels of communication to encourage frank and honest discussion of expectations versus reality. While much of what goes on in the English medium classroom requires no better teaching skills than are required in any classroom, the understanding of students' abilities in achieving reasonable learning outcomes in a foreign language must be rationalized and accommodated within the course design. Concomitantly, preparatory language classes for students must be able to adapt to prepare students for specific language and cognitive skills and activities that courses conducted in English require. This paper presents findings from the implementation of a purpose-designed English medium instruction course arrived at through an iterative backwards, forwards and central design process utilizing feedback from students and lecturers alike leading to suggested guidelines for English medium instruction in higher education.Keywords: English medium instruction, higher education, iterative design process, train-the-trainer
Procedia PDF Downloads 300330 Increased Envy and Schadenfreude in Parents of Newborns
Authors: Ana-María Gómez-Carvajal, Hernando Santamaría-García, Mateo Bernal, Mario Valderrama, Daniela Lizarazo, Juliana Restrepo, María Fernanda Barreto, Angélica Parra, Paula Torres, Diana Matallana, Jaime Silva, José Santamaría-García, Sandra Baez
Abstract:
Higher levels of oxytocin are associated with better performance on social cognition tasks. However, higher levels of oxytocin have also been associated with increased levels of envy and schadenfreude. Considering these antecedents, this study aims to explore social emotions (i.e., envy and schadenfreude) and other components of social cognition (i.e. ToM and empathy), in women in the puerperal period and their respective partners, compared to a control group of men and women without children or partners. Control women should be in the luteal phase of the menstrual cycle or taking oral contraceptives as they allow oxytocin levels to remain stable. We selected this population since increased levels of oxytocin are present in both mothers and fathers of newborn babies. Both groups were matched by age, sex, and education level. Twenty-two parents of newborns (11 women, 11 men) and 15 controls (8 women, 7 men) performed an experimental task designed to trigger schadenfreude and envy. In this task, each participant was shown a real-life photograph and a description of two target characters matched in age and gender with the participant. The task comprised two experimental blocks. In the first block, participants read 15 sentences describing fortunate events involving either character. After reading each sentence, participants rated the event in terms of how much envy they felt for the character (1=no envy, 9=extreme envy). In the second block, participants read and reported the intensity of their pleasure (schadenfreude, 1=no pleasure, 9=extreme pleasure) in response to 15 unfortunate events happening to the characters. Five neutral events were included in each block. Moreover, participants were assessed with ToM and empathy tests. Potential confounding variables such as general cognitive functioning, stress levels, hours of sleep and depression symptoms were also measured. Results showed that parents of newborns showed increased levels of envy and schadenfreude. These effects are not explained by any confounding factor. Moreover, no significant differences were found in ToM or empathy tests. Our results offer unprecedented evidence of specific differences in envy and schadenfreude levels in parents of newborns. Our findings support previous studies showing a negative relationship between oxytocin levels and negative social emotions. Further studies should assess the direct relationship between oxytocin levels in parents of newborns and the performance in social emotions tasks.Keywords: envy, empathy, oxytocin, schadenfreude, social emotions, theory of mind
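The group comparison implied by the task above, per-participant mean envy and schadenfreude ratings on the 1-9 scale contrasted between parents of newborns and controls, could be run along the following lines; the simulated ratings and the choice of Welch's t-test are assumptions for illustration, not the authors' analysis.

```python
# Minimal sketch (not the authors' analysis or data): per-participant mean
# envy ratings (1-9 scale, neutral fillers excluded) compared between parents
# of newborns (n = 22) and controls (n = 15); schadenfreude would be handled
# identically. The simulated ratings and Welch's t-test are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_parents, n_controls, n_items = 22, 15, 15

def simulate_group(n, mean):
    """Per-participant mean rating over n_items items on the 1-9 scale."""
    ratings = np.clip(np.round(rng.normal(mean, 1.5, size=(n, n_items))), 1, 9)
    return ratings.mean(axis=1)

envy_parents = simulate_group(n_parents, 4.5)
envy_controls = simulate_group(n_controls, 3.5)

t, p = stats.ttest_ind(envy_parents, envy_controls, equal_var=False)  # Welch
print(f"envy: parents {envy_parents.mean():.2f} vs controls "
      f"{envy_controls.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```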
Procedia PDF Downloads 317329 An Exploration of the Experiences of Women in Polygamous Marriages: A Case Study of Matizha Village, Masvingo, Zimbabwe
Authors: Flora Takayindisa, Tsoaledi Thobejane, Thizwilondi Mudau
Abstract:
This study highlights what people in polygamous marriages face on a daily basis. It argues that there are more disadvantages for women in polygamous marriages than for their counterparts in monogamous relationships. The study further suggests that the patriarchal power structure plays a powerful and effective role in polygamous marriages in our societies, particularly in Zimbabwe, where this study took place. The study explored the intricacies of polygamous marriages and how these forms of dominance can be resolved. The research is therefore presented through the 'lived realities' of the affected women in polygamous marriages in Gutu District, located in Masvingo Province of Zimbabwe. Polygamous marriages are practised in different societies. Some women living a polygamous lifestyle are emotionally and physically abused in their relationships. Evidence also suggests that children from polygamous marriages suffer psychologically when their fathers take other wives. Relationships within the family are very difficult because of the husband's seeming favouritism for one wife. Children are mostly affected by disputes between co-wives, and they often lack quality time with their fathers. There are mixed feelings about polygamous marriages. Some people condemn them as inhumane. However, consideration must be given to what such marriages might mean to people who have no choice of any other form of marriage. It must also be noted that polygamous marriages are not always negative; some positive things result from them. The study was conducted in a village called Matizha. A qualitative research approach was employed to stimulate awareness of the social, cultural, religious, and economic factors at play in polygamous marriages. This approach facilitates a unique understanding of the experiences of women in polygamous marriages, experiences that are both negative and positive. The qualitative research method enabled the respondents to keep an open mind when they were asked questions. The researcher utilised feminist theory in the study and employed guided interviews to acquire information from the participants. The chapter focuses on the participants who took part in the study, how the participants were selected, ethical considerations, data collection, the interview process, the research instruments, and the summary. The data were obtained using guided interviews with respondents of all ages who are in polygamous marriages. The researcher presented the demographic information of the participants and, thereafter, other aspects of the data collection, such as social factors, economic factors, and religious affiliation. The conclusions and recommendations are drawn from the four main themes that emerged from the discussions. Recommendations are made for the women, for the policies and laws affecting women, and, finally, for future research. It is believed that the overall objectives of the study have been met and the research questions have been answered based on the findings discussed.Keywords: co-wives, egalitarianism, experiences, polyandry, polygamy, woman
Procedia PDF Downloads 262