Search results for: hybrid words
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2899

199 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services. Instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem oriented framework has spawned endless variants and approaches, most of which embrace a problem solving rather than a reactive approach to policing. These include the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in some way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions behind the proliferation of the PO approach over the last three decades, and it begins by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems that were ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment.
In deploying staff to pursue and develop solutions to underlying problems, there is clearly an opportunity cost. Those same staff cannot be allocated to alternative duties while engaged in a problem-solution role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 459
198 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost over Hamiltonian cycles. N log₂ K qubits are put into a uniform (equiprobable) superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge-weight adder, a node-index calculator, a uniqueness checker, and a comparator, all built using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of Grover's algorithm is then modified using the newly measured minimum cost value, and this procedure is repeated until the cost cannot be reduced further. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
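Since the paper's Q# circuit is not reproduced here, the amplitude-amplification step at the heart of the procedure can at least be sketched classically. The following Python snippet is an illustrative statevector simulation, not the authors' implementation: the oracle is abstracted into a "marked" set (the tours that are valid Hamiltonian cycles with cost below the current threshold), and the near-optimal iteration count (π/4)√(N/M) is applied.

```python
import numpy as np

def grover_amplify(n, marked, iterations=None):
    """Statevector sketch of Grover amplification over n basis states.

    `marked` stands in for the oracle's accepting set: tours that are
    valid Hamiltonian cycles with cost below the current threshold.
    Returns the measurement probability of each basis state.
    """
    m = len(marked)
    if iterations is None:
        # Near-optimal iteration count: floor((pi/4) * sqrt(n/m))
        iterations = int(np.floor(np.pi / 4 * np.sqrt(n / m)))
    state = np.full(n, 1 / np.sqrt(n))    # Hadamard on every qubit
    for _ in range(iterations):
        state[list(marked)] *= -1         # oracle: phase-flip marked states
        state = 2 * state.mean() - state  # diffusion: inversion about the mean
    return state ** 2

# Toy run: 16 candidate tours, exactly one below the current cost threshold.
probs = grover_amplify(16, marked={5})
```

Re-running this with the threshold lowered to each newly measured cost mirrors the decision-to-optimization loop described in the abstract.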

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 162
197 Fashion Utopias: The Role of Fashion Exhibitions and Fashion Archives in Defining (and Stimulating) Possible Future Fashion Landscapes

Authors: Vittorio Linfante

Abstract:

Utopìa is a term that, since its first appearance in 1516 in Tommaso Moro's work, has taken on different meanings and forms in various fields: social studies, politics, art, creativity, and design. Utopias, although short-lived and apparently impossible, have been able to give shape to the future, laying the foundations for our present and for the future of the next generations. The twentieth century was a historical period crossed by many changes, and it saw the greatest number of utopias: not only social, political, and scientific, but also artistic and architectural, and in design, communication, and, last but not least, fashion. Over the years, fashion has been able to interpret various utopian impulses, giving form to the most futuristic visions: from the Manifesto del Vestito by Giacomo Balla, through the functional experiments that led to the Tuta by Thayaht and the Varst by Aleksandr Rodčenko and Varvara Stepanova, through the Space Age visions of Rudi Gernreich, Paco Rabanne, and Pierre Cardin, to Archizoom's political actions and their fashion project Vestirsi è facile. These experiments have continued to the present day through the (sometimes) excessive visions of Hussein Chalayan, Alexander McQueen, and Gareth Pugh, or those more anchored to the market (but no less innovative and visionary) by Prada, Chanel, and Raf Simons. If, as Bauman states, it is true that we have entered a phase of Retrotopia, characterized by the inability to imagine new forms of the future, it is necessary, more than ever, to redefine the role of history, of its narration, and of its mise en scène within the contemporary creative process. It is a process that increasingly requires an in-depth knowledge of the past to define a renewed discourse on design processes, a discourse in which words like archive, exhibition, curating, revival, vintage, and costume take on new meanings.
The paper aims to investigate, through case studies, research, and professional projects, the renewed role of curating and preserving fashion artefacts. It is a role that, in an era of Retrotopia, museums, exhibitions, and archives can (and must) assume in order to contribute to the definition of new design paradigms capable of overcoming the traditional categories of revival or costume in favour of a more contemporary "mash-up" approach, in which past and present, craftsmanship and new technologies, revival and experimentation merge seamlessly. In this perspective, dresses (as well as fashion accessories) should be considered not only as finished products but also as artefacts capable of talking about the past while producing new, as yet untold stories. Archives, exhibitions (academic and otherwise), and museums thus become powerful sources of inspiration for fashion: places and projects capable of generating innovation and becoming active protagonists of contemporary fashion design processes.

Keywords: heritage, history, costume and fashion interface, performance, language, design research

Procedia PDF Downloads 96
196 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and there are still some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the effect of the geometric similarity of airfoil parameters and of surface mesh quality on CFD calculations, in order to determine how well different parameterization methods apply at different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, parameterized with three methods to compare the calculation differences between airfoil sizes. In this study, the fixed conditions include the NACA 0012 airfoil, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions with the same bias factor in the CFD simulations. The results show that, as the airfoil scale changes, the parameterization method, the number of control points, and the number of mesh divisions should all be adjusted to improve the accuracy of the wing's computed aerodynamic performance.
When the airfoil scale increases, the basic point cloud parameterization method requires more and larger datasets to preserve the accuracy of the airfoil's aerodynamic performance, which severely tests available computing capacity. When using the B-spline curve method, the number of control points and mesh divisions must be set appropriately to obtain higher accuracy; however, the right balance cannot be defined directly and must instead be found iteratively by adding and removing points. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
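To make the comparison concrete, the CST method mentioned above can be sketched in a few lines. The snippet below is a generic illustration of the class/shape function transformation: the exponents N1 = 0.5 and N2 = 1.0 are the standard round-nose/sharp-trailing-edge choice for airfoils, and the weights are hypothetical placeholders, not values fitted to NACA 0012.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/Shape function Transformation (CST) of an airfoil surface.

    x: chordwise stations normalized to [0, 1].
    weights: Bernstein coefficients, i.e. the small set of control
    points the abstract refers to (values here are hypothetical).
    """
    x = np.asarray(x, dtype=float)
    n = len(weights) - 1
    # Class function: n1=0.5 gives a round nose, n2=1.0 a sharp trailing edge
    c = x**n1 * (1 - x)**n2
    # Shape function: Bernstein polynomial expansion of the weights
    s = sum(w * comb(n, i) * x**i * (1 - x)**(n - i)
            for i, w in enumerate(weights))
    return c * s + x * dz_te

# Half-thickness distribution from three (hypothetical) CST weights.
y = cst_surface(np.linspace(0.0, 1.0, 5), [0.2, 0.2, 0.2])
```

Because only the Bernstein weights vary, refining the representation means adding coefficients rather than storing ever-larger point clouds, which is why the method scales well to the larger wing.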

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 197
195 Evaluation of Risk Factors for the Incidence of Adjacent Segment Degeneration After Anterior Neck Discectomy and Fusion

Authors: Sayyed Mostafa Ahmadi, Neda Raeesi

Abstract:

Background and Objectives: Cervical spondylosis is a common problem that affects the adult spine and is the most common cause of radiculopathy and myelopathy in older patients. Anterior discectomy and fusion is a well-known technique for degenerative cervical disc disease. However, one of its late undesirable complications is adjacent disc degeneration, which affects about 91% of patients within ten years. Many factors can contribute to this complication, but some are still debatable. Discovering these risk factors and eliminating them can improve quality of life. Methods: This is a retrospective cohort study. All patients who underwent anterior discectomy and fusion surgery in the neurosurgery ward of Imam Khomeini Hospital between 2013 and 2016 were evaluated. Their demographic information was collected. All patients were visited and examined for radiculopathy, myelopathy, and muscular force. At the same visit, all patients were asked to obtain frontal and profile neck radiographs, as well as a neck MRI (3 Tesla). Preoperative radiographs were used to measure the diameter of the cervical canal (Pavlov ratio) and to evaluate sagittal alignment (Cobb angle). Preoperative MRIs were reviewed for anterior and posterior longitudinal ligament calcification. Results: In this study, 57 patients were studied. The mean age of patients was 50.63 years, and 49.1% were male. Only 3.5% of patients had anterior and posterior longitudinal ligament calcification. Symptomatic ASD was observed in 26.6%. The X-rays and MRIs showed evidence of radiological ASD in 80.7%. Among patients who underwent one-level surgery, 20% had symptomatic ASD, but among patients who underwent two-level surgery, the rate was 50%. In other words, the more levels operated on and fused, the higher the probability of symptomatic ASD (P-value < 0.05).
Among patients who underwent surgery at one level, 78% had radiological ASD, and this figure was 92% among patients who underwent two-level surgery (P-value > 0.05). Demographic variables such as age, sex, height, weight, and BMI did not have a significant effect on the incidence of radiological ASD (P-value > 0.05), but sex and height were two influential factors for symptomatic ASD (P-value < 0.05). Other related variables, such as family history, smoking, and exercise, also had no significant effect (P-value > 0.05). Radiographic variables such as the Pavlov ratio and sagittal alignment likewise had no significant effect on the incidence of radiological or symptomatic ASD (P-value > 0.05). The number of fused levels and the presence of anterior and posterior longitudinal ligament calcification before surgery also had no statistically significant effect on radiological ASD (P-value > 0.05). Regarding the neck's range of motion in different directions, none of these variables differed significantly between the groups with radiological or symptomatic ASD and the non-affected group (P-value > 0.05). Conclusion: According to the findings of this study, this disease is considered multifactorial. The incidence of radiological ASD is much higher than that of symptomatic ASD (80.7% vs. 26.3%), and sex, height, and the number of fused levels are the only factors influencing the incidence of symptomatic ASD, while no variable influences radiological ASD.
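The level-count comparison above (20% vs. 50% symptomatic ASD) is the kind of result a two-proportion test produces. As a generic illustration only, with hypothetical cell counts (the study's actual counts are not given in the abstract), a two-sided two-proportion z-test can be sketched as follows:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error.

    Compares event rates x1/n1 and x2/n2, e.g. symptomatic-ASD rates
    after one- vs. two-level fusion (counts below are hypothetical).
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 10/50 symptomatic after one-level, 10/20 after two-level.
z, p_value = two_proportion_z(10, 50, 10, 20)
```

With small cells, an exact test (e.g. Fisher's) would normally be preferred; the z-test is shown only because it is self-contained.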

Keywords: risk factors, anterior neck discectomy and fusion, adjacent segment degeneration, complication

Procedia PDF Downloads 34
194 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces behind the movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation, whether through prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as usability by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop Design Decision Support Systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration for specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed.
The research shows a movement towards Multi-Level of Development (MOD) models that integrate well with the early stages of integrated design (the schematic design and design development stages), are heuristic, hybrid, and meta-simulation-based, and rely on big real-world data (such as Building Energy Management System data or web data). Obtaining, using, and combining these data with simulation data to create models that better capture uncertainty, are more dynamic, and are more sensitive to context and culture, as well as models that can generate economy- and energy-efficient design scenarios using local data (to better harmonize with circular-economy principles), are important research areas in this field. The results of this study provide a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 142
193 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise

Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim

Abstract:

Previous work on leisure activities categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g. active-passive, individual-group). To overcome these problems, this study proposed a matrix model of Korean leisure activities that considers their multidimensional features, comprising 3 main factors and 6 sub-factors: (a) Active (physical, mental), (b) Relational (quantity, quality), (c) Expert (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated the proposed dimensions for a total of 78 leisure activities that have recently been enjoyed by most Koreans (e.g. watching movies, taking a walk, watching media). The study sample consisted of 1,348 people (726 men, 658 women) ranging in age from teenagers to adults in their seventies. This study gathered 60 responses for each leisure activity, a total of 4,680 data points, which were used for statistical analysis. First, this study compared the fit of a 3-factor model (Activity, Relation, Expertise) with that of a 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving) using confirmatory factor analysis. Based on several goodness-of-fit indicators, the 6-factor model was a better fit for the data. This result indicates that enough dimensions of leisure activities (six in our study) must be taken into account to apprehend each activity's attributes specifically. In addition, the 78 leisure activities were cluster-analyzed with the scores calculated from the 6-factor model, which resulted in 8 leisure activity groups. Cluster 1 (e.g. group sports, group musical activity) and Cluster 5 (e.g.
individual sports) had generally higher scores on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g. SNS, shopping) and Cluster 6 (e.g. playing the lottery, taking a nap) had low scores on the whole, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g. machine operating, handwork/invention) required high expertise and mental activity but low physical activity. Cluster 4 indicated high mental activity and relational quantity despite low expertise. Cluster 7 (e.g. touring, joining festivals) required moderate degrees of physical activity and relation but low expertise. Lastly, Cluster 8 (e.g. meditation, information searching) showed high mental activity. Even though the clusters in our study had a few similarities with the preexisting taxonomy of leisure activities, there was a clear distinction between them. Unlike the preexisting taxonomy, which had been created subjectively, we sorted the 78 leisure activities based on objective scores on the 6 dimensions. We also found that some leisure activities that used to belong to the same leisure group were placed in different clusters (e.g. field ball sports vs. net sports) because of their different features. In other words, the results can provide a different perspective for leisure activities research and help identify the various characteristics of leisure participants.
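The clustering step described above can be illustrated with a minimal k-means implementation. This is a sketch only: the study's actual clustering algorithm and dimension scores are not reproduced, and the toy six-dimension scores below are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means, standing in for the cluster analysis that grouped
    78 activities into 8 clusters from their six dimension scores."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each activity to its nearest center (squared distance)
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers as the mean of each cluster's members
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical 6-dimension scores: two high-scoring activities
# (e.g. group sports) and two low-scoring ones (e.g. napping).
X = np.array([[5, 5, 5, 5, 5, 5], [5, 4, 5, 5, 4, 5],
              [1, 1, 1, 1, 1, 1], [1, 2, 1, 1, 2, 1]], float)
labels, centers = kmeans(X, k=2)
```

In practice one would standardize the six dimension scores first and choose k with a criterion such as the silhouette score; here k is fixed for illustration.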

Keywords: leisure, dimensional model, activity, relationship, expertise

Procedia PDF Downloads 281
192 Oligarchic Transitions within the Tunisian Autocratic Authoritarian System and the Struggle for Democratic Transformation: Before and beyond the 2010 Jasmine Revolution

Authors: M. Moncef Khaddar

Abstract:

This paper focuses mainly on a contextualized understanding of 'autocratic authoritarianism' in Tunisia, approaching its peculiarities not in reference to the ideal type of capitalist-liberal democracy but rather analysing it as a Tunisian 'civilian dictatorship'. This is reminiscent, to some extent, of French 'colonial authoritarianism', in parallel with the legacy of traditional formal monarchic absolutism. The Tunisian autocratic political system is here construed as a state-manufactured nationalist-populist authoritarianism associated with a de facto presidential single party, two successive autocratic presidents, and their subservient autocratic elites, who ruled with an iron fist the decolonized 'liberated nation' that came to be subjected to large-scale oppression and domination under the new Tunisian Republic. The diachronic survey of Tunisia's autocratic authoritarian system covers the early years of autocracy under the first autocratic president, Bourguiba (1957-1987), as well as the different stages of its consolidation into a police-security state under the second autocratic president, Ben Ali (1987-2011). Comparing the policies of authoritarian regimes, within what is identified synchronically as a bi-cephalous autocratic system, entails an in-depth study of the two autocrats, who ruled Tunisia for more than half a century, as modern adaptable autocrats. This is further supported by an exploration of the ruling authoritarian autocratic elites who played a decisive role in shaping the undemocratic state-society relations under the 1st and 2nd Presidents and left an indelible mark, structurally and ideologically, on the Tunisian polity.
Emphasis is also put on the members of the governmental and state-party institutions and apparatuses that kept circulating and being recycled from one authoritarian regime to another, and from the first 'founding' autocrat to his putschist successor, who consolidated authoritarian stability, political continuity, and autocratic governance. The reconfiguration of Tunisian political life in the post-autocratic era, since 2011, is then analysed, particularly in light of the unexpected return of many high-profile figures and old guards of the autocratic authoritarian apparatchiks. How and why were these public figures from an autocratic era able to return in a supposedly post-revolutionary moment? Finally, while some continue to celebrate the putative exceptional success of 'democratic transition' in Tunisia within a context of 'unfinished revolution', others remain perplexed in the face of a creeping 'oligarchic transition' to a 'hybrid regime', characterized by the elites' reformist tradition rather than a bottom-up, genuine democratic 'change'. The latter is far from answering the 2010 uprisings of ordinary people and their aspirations for 'Dignity, Liberty and Social Justice'.

Keywords: authoritarianism, autocracy, democratization, democracy, populism, transition, Tunisia

Procedia PDF Downloads 123
191 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption

Authors: M. François, L. Sigot, C. Vallières

Abstract:

Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a 1 cm diameter glass column packed to a 2 cm MOF bed height. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant as well as the impact of HKUST-1 humidity on the adsorption process. Consequently, different MOF water-content conditions were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material, without any pretreatment and containing 30% water, serves as a reference.
First, conclusions can be drawn from comparing the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. The saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined at C/Co = 10%, appears earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: an abrupt increase of the outlet concentration is observed for the material with the lower humidity, in comparison to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what can explain the shapes of the breakthrough curves associated with the pretreatments of HKUST-1 and which mechanisms take place in the adsorption process between the MOF, the pollutant, and water.
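The 10% breakthrough times quoted above are read off the measured C/Co curves. As a sketch of how that reading can be automated (with made-up sample points, not the study's data), the crossing can be located by linear interpolation between the two samples that bracket the threshold:

```python
import numpy as np

def breakthrough_time(t, c_ratio, threshold=0.10):
    """Breakthrough time: first time the outlet/inlet ratio C/Co crosses
    the threshold (10% here), found by linear interpolation."""
    t, c = np.asarray(t, float), np.asarray(c_ratio, float)
    above = np.nonzero(c >= threshold)[0]
    if len(above) == 0:
        return None          # the bed never broke through
    i = above[0]
    if i == 0:
        return t[0]
    # Interpolate between the bracketing samples
    f = (threshold - c[i - 1]) / (c[i] - c[i - 1])
    return t[i - 1] + f * (t[i] - t[i - 1])

# Illustrative (not measured) C/Co samples over time in hours.
t_b = breakthrough_time([0.0, 0.5, 1.0, 1.5], [0.0, 0.05, 0.15, 0.40])
```

The same routine applied to each pretreatment's curve yields the comparison made in the text (earlier breakthrough but slower saturation for the rough material).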

Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence

Procedia PDF Downloads 216
190 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method to develop a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural networks. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types (Sugeno- or Mamdani-type) and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped reduce the error in some cases but at times resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g. negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
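As an illustration of the zero-order Sugeno-type inference compared above: each rule fires with a strength given by the product of its input memberships, and the crisp output is the firing-strength-weighted average of the rule constants. The membership centers, widths, and rule constants below are hypothetical, not the study's calibrated values.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno(x, rules):
    """Zero-order Sugeno-type fuzzy inference.

    Each rule is (list of (center, sigma) per input, constant output).
    Firing strength = product of memberships; output = weighted average.
    """
    w = np.array([np.prod([gauss(xi, c, s) for xi, (c, s) in zip(x, mfs)])
                  for mfs, _ in rules])
    z = np.array([out for _, out in rules])
    return float((w * z).sum() / w.sum())

# Two hypothetical rules mapping (solar irradiation, module efficiency)
# to a constant LCOE-like output.
rules = [([(1000, 200), (0.15, 0.05)], 0.05),
         ([(2000, 200), (0.25, 0.05)], 0.03)]
out = sugeno((1000, 0.15), rules)
```

ANFIS uses the same Sugeno structure but tunes the membership parameters and rule outputs from calibration data, which is where both its low error and its out-of-range artifacts come from.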

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 140
189 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from ZIP codes that are incongruent and toward those that are more aligned with one’s ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits a common agenda shared by others who think as they do and who communicate with them about public issues. In other words, individuals, through their media use, identify themselves with a group/community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and be influenced by such selective exposure in deciding which issues are more relevant. 
Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study’s findings are useful for media professionals in building and maintaining relationships with the audience considering changes in market share due to the spread of digital and social media.

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 33
188 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students ground their decisions on which courses to enroll in on other students' opinions and ratings of teachers. Unfortunately, universities sometimes do not provide the personal, social, and academic stimulation students demand to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, such a method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions. However, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analyse such data: sentiment analysis (SA). To the best of our knowledge, no prior research has incorporated SA results into an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study included a total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code. They were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated by comparing the tool's output with that of one of the researchers, who coded all answers independently. 
Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68 and the agreement reached 90.8%, both considered satisfactory. To test the hypothesized relations between SA and students' emotional engagement, a structural equation model (SEM) was estimated. Results demonstrated a good fit to the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, the results showed that students' sentiment regarding their teachers' teaching positively predicted their emotional engagement (β = .16 [.02, .30]). In other words, when students' opinion toward their instructors' teaching practices is positive, it is more likely for students to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education. They suggest the usefulness of this tool when evaluating relations between teaching practices and student outcomes.
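The two agreement statistics reported above can be sketched in a few lines of Python. The toy labels below are hypothetical; the study used 225 real answers coded by the NLP tool and a human rater.

```python
# Percent agreement and Cohen's kappa for two coders of the same items.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of items on which the two coders assign the same label."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = len(coder_a)
    po = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal label proportions
    pe = sum((freq_a[label] / n) * (freq_b[label] / n) for label in freq_a)
    return (po - pe) / (1 - pe)

# Two coders label ten answers as positive (1) or negative (0)
tool  = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
human = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
```

For these toy labels the observed agreement is .90 and kappa is .80; kappa discounts the agreement that two coders would reach by chance alone, which is why it is the more conservative of the two statistics.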

Keywords: sentiment analysis, students' evaluation of teaching, structural-equation modelling, emotional engagement

Procedia PDF Downloads 62
187 A Measurement Instrument to Determine Curricula Competency of Licensure Track Graduate Psychotherapy Programs in the United States

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

We developed a novel measurement instrument to assess Knowledge of Educational Programs in Professional Psychotherapy Programs (KEP-PPP or KEP-Triple P) within the United States. The instrument was designed by a Panel of Experts (PoE) that consisted of Licensed Psychotherapists and Medical Care Providers. Licensure track psychotherapy programs are listed in the databases of the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE); American Psychological Association (APA); Council on Social Work Education (CSWE); and the Council for Accreditation of Counseling & Related Educational Programs (CACREP). A complete list of psychotherapy programs can be obtained from these professional databases by selecting search fields of (All Programs) in (All States). Each program has a Web link that electronically and directly connects to the institutional program, which can be researched using the KEP-Triple P. The 29-item KEP-Triple P was designed to consist of six categorical fields: Institutional Type, Degree, Educational Delivery, Accreditation, Coursework Competency, and Special Program Considerations. The KEP-Triple P was designed to determine whether a specific course(s) is offered in licensure track psychotherapy programs. The KEP-Triple P can be modified to assess any part or all of the curriculum of licensure graduate programs. We utilized the KEP-Triple P instrument to study whether a graduate course in Addictions was offered in Marriage and Family Therapy (MFT) programs. Marriage and Family Therapists are likely to encounter patients with Addiction(s) due to their broad treatment scope, providing psychotherapy services to individuals, couples, and families of all age groups. Our study of 124 MFT programs, which concluded at the end of 2016, found that we were able to assess 61% of programs (N = 76), since 27% (N = 34) of programs were inaccessible due to broken Web links. 
Of the total of all MFT programs, 11% (N = 14) did not have a published curriculum on their institutional Web site. From the study sample, we found that 66% (N = 50) of curricula did not offer a course in Addiction Treatment and that 34% (N = 26) of curricula did require a mandatory course in Addiction Treatment. From our study sample, we determined that 15% (N = 11) of MFT doctorate programs did not require an Addiction Treatment course and that 1% (N = 1) did require such a course. We found that 99% of our study sample offered a campus-based program and 1% offered a hybrid program with both online and residential components. From the total sample studied, we determined that 84% of programs would be able to obtain reaccreditation within a five-year period. We recommend that MFT programs initiate procedures to revise curricula to include a required course in Addiction Treatment prior to their next accreditation cycle, to help address the escalating addiction crisis in the United States. This disparity in MFT curricula raises serious ethical and legal considerations for national and Federal stakeholders as well as for patients seeking a competently trained psychotherapist.

Keywords: addiction, competency, curriculum, psychotherapy

Procedia PDF Downloads 129
186 (Re)connecting to the Spirit of the Language: Decolonizing from Eurocentric Indigenous Language Revitalization Methodologies

Authors: Lana Whiskeyjack, Kyle Napier

Abstract:

The spirit of the language embodies the motivation for Indigenous people to connect with the Indigenous language of their lineage. While the concept of the spirit of the language is often woven into discussion by Indigenous language revitalizationists, particularly those who are Indigenous, there are few tangible terms in academic research conceptually actualizing the term. Through collaborative work with Indigenous language speakers, elders, and learners, this research sets out to identify the spirit of the language, the catalysts of disconnection from the spirit of the language, and the sources of reconnection to the spirit of the language. This work fundamentally addresses the terms of engagement around collaboration with Indigenous communities, itself inviting a decolonial approach to community outreach and individual relationships. For us as Indigenous researchers, this means beginning, maintaining, and closing this work in ceremony while being transparent with community members about this work and related publishing throughout the project’s duration. Decolonizing this approach also requires maintaining explicit ongoing consent from the elders, knowledge keepers, and community members when handling their ancestral and Indigenous knowledge. The handling of this knowledge is regarded in this work as stewardship, both of digital materials and of ancestral Indigenous knowledge. This work observes recorded conversations in both nêhiyawêwin and English, resulting from 10 semi-structured interviews with fluent nêhiyawêwin speakers as well as three structured dialogue circles with fluent and emerging speakers. The words were transcribed by a speaker fluent in both nêhiyawêwin and English. The results of those interviews were categorized thematically to conceptually actualize the spirit of the language, the catalysts of disconnection from the spirit of the language, and community-voiced methods of reconnection to the spirit of the language. 
Results of these interviews overwhelmingly indicate that the spirit of the language is drawn from the land. Although nêhiyawêwin is the focus of this work, Indigenous languages are by nature inherently related to the land. This is further reaffirmed by the Indigenous language learners and speakers who expressed having ancestries and lineages from multiple Indigenous communities. Several other key elements embody the spirit of the language, including ceremony and spirituality, as well as the semantic worldviews tied to the polysynthetic, verb-oriented morphophonemics found in many Indigenous languages, including nêhiyawêwin. The catalysts of disconnection from the spirit of the language are those forces whose histories have severed connections between Indigenous Peoples and the spirit of their languages, or that have affected relationships with the land, ceremony, and ways of thinking. Results of this research and its literature review identify the three most ubiquitously damaging, interdependent catalysts of disconnection from the spirit of the language: colonization, capitalism, and Christianity. As voiced by the Indigenous language learners, this work necessitates addressing means to reconnect to the spirit of the language. Interviewees mentioned that the process of reconnection involves a whole relationship with the land, the practice of reciprocal-relational methodologies for language learning, and Indigenous-protected and -governed learning. This work concludes in support of those reconnection methodologies.

Keywords: indigenous language acquisition, indigenous language reclamation, indigenous language revitalization, nêhiyawêwin, spirit of the language

Procedia PDF Downloads 125
185 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism

Authors: Lubos Rojka

Abstract:

The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are not determined by natural events but by a person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person’s morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories on moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, which contradicts the claims of classical incompatibilism. Compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be situated within the broader understanding of the person elaborated by Harry Frankfurt, Robert Kane, and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The capacity for theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among them an effective free will together with first- and second-order desires. 
Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory, and emergentism. It is incompatible with physical causal determinism, because such determinism allows only non-systematic processes that may be hard to predict, not complex (strongly) emergent systems. An agent’s effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the most promising direction in the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.

Keywords: consciousness, free will, determinism, emergence, moral responsibility

Procedia PDF Downloads 146
184 Boredom in the Classroom: Sentiment Analysis on Teaching Practices and Related Outcomes

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Students’ emotional experiences have been a widely discussed theme among researchers and have proven to play a central role in students’ outcomes. Yet, up to now, far too little attention has been paid to teaching practices that relate to students’ negative emotions in higher education. The present work aims to examine the relationship between teachers’ teaching practices (i.e., students’ evaluations of teaching and autonomy support) and students’ feelings of boredom, agentic engagement, and motivation in the higher education context. To do so, the present study incorporates one of the most popular tools in natural language processing to address students’ evaluations of teaching: sentiment analysis (SA). Whereas most research has focused on creating SA models and assessing students’ satisfaction with teachers and courses, to the authors’ best knowledge no prior research has incorporated SA results into an explanatory model. A total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) participated in the study. Students were enrolled in degree and master’s studies at the Faculty of Education of a public university in Spain. Data were collected using an online questionnaire, accessed through a QR code, which students completed during a teaching period when the assessed teacher was not present. To assess students’ sentiment toward their teachers’ teaching, we asked them the following open-ended question: “If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?”. Sentiment analysis was performed with Microsoft's pre-trained model. For this study, we relied on the probability of a student’s answer belonging to the negative category. To assess the reliability of the measure, inter-rater agreement between this NLP tool and one of the researchers, who independently coded all answers, was examined. 
The average pairwise percent agreement and Cohen’s kappa were calculated with ReCal2. The agreement reached was 90.8% and Cohen’s kappa was .68, both considered satisfactory. To test the hypothesized relations, a structural equation model (SEM) was estimated. The model fit indices displayed a good fit to the data: χ² (134) = 351.129, p < .001, RMSEA = .07, SRMR = .09, TLI = .91, CFI = .92. Specifically, results show that boredom was negatively predicted by autonomy support practices (β = -.47 [-.61, -.33]), whereas for the negative sentiment extracted from SET this relation was positive (β = .23 [.16, .30]). In other words, when students’ opinion of their instructors’ teaching practices was negative, they were more likely to feel bored. Regarding the relations between boredom and student outcomes, results showed a negative predictive value of boredom on students’ motivation to study (β = -.46 [-.63, -.29]) and agentic engagement (β = -.24 [-.33, -.15]). Altogether, these results show a promising future for sentiment analysis techniques in the field of education, as they demonstrate the usefulness of this tool when evaluating relations between teaching practices and student outcomes.
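A standardized coefficient such as β = -.47 can be read, in the single-predictor case, on a correlation scale. The SEM in the study estimates all paths jointly with confidence intervals; the sketch below only illustrates what a standardized slope means, with hypothetical data.

```python
import math

def standardized_beta(x, y):
    """Standardized slope of a simple regression of y on x.

    With a single predictor this equals Pearson's r: a one-SD increase
    in x predicts a beta-SD change in y.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Applied to negative-sentiment scores (x) and boredom ratings (y), a value near -.5 would mean that students roughly half a standard deviation less bored are found where sentiment toward teaching is one standard deviation more positive.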

Keywords: sentiment analysis, boredom, motivation, agentic engagement

Procedia PDF Downloads 72
183 Formation of the Water Assisted Supramolecular Assembly in the Transition Structure of Organocatalytic Asymmetric Aldol Reaction: A DFT Study

Authors: Kuheli Chakrabarty, Animesh Ghosh, Atanu Roy, Gourab Kanti Das

Abstract:

The aldol reaction is an important class of carbon-carbon bond-forming reactions. One popular way to impose asymmetry in an aldol reaction is the introduction of a chiral auxiliary that binds the approaching reactants and creates dissymmetry in the reaction environment, which ultimately translates into enantiomeric excess in the aldol products. The last decade has witnessed the use of natural amino acids as chiral auxiliaries to control stereoselectivity in various carbon-carbon bond-forming processes. In this context, L-proline was found to be an effective organocatalyst in asymmetric aldol additions. In the last few decades, the use of water as solvent or co-solvent in asymmetric organocatalytic reactions has increased sharply. Simple amino acids like L-proline do not catalyze asymmetric aldol reactions in aqueous media; moreover, in organic solvent media a high catalyst loading (~30 mol%) is required to achieve moderate to high asymmetric induction. Consequently, considerable effort has been devoted to modifying L-proline and 4-hydroxy-L-proline to prepare organocatalysts for aqueous-medium asymmetric aldol reactions. Here, we report the results of our DFT calculations on the asymmetric aldol reaction of benzaldehyde, p-nitrobenzaldehyde, and t-butyraldehyde with a number of ketones using L-proline hydrazide as the organocatalyst under wet solvent-free conditions. The Gaussian 09 program package and the GaussView program were used for the present work. Geometry optimizations were performed using the B3LYP hybrid functional and the 6-31G(d,p) basis set. Transition structures were confirmed by Hessian and IRC calculations. As the reactions were carried out under solvent-free conditions, no bulk solvent effects were modeled. The present study reveals, for the first time, the direct involvement of two water molecules in the aldol transition structures. 
In the TS, the enamine and the aldehyde are connected through hydrogen bonding with the assistance of two intervening water molecules, forming a supramolecular network. Formation of this type of supramolecular assembly is possible due to the presence of the protonated -NH2 group in the L-proline hydrazide moiety, which is responsible for the favorable entropy contribution to the aldol reaction. The present study also reveals that the water-assisted TS is energetically more favorable than the TS without any water molecules. It can be concluded from this study that inserting a polar group capable of hydrogen bond formation into the L-proline skeleton can lead to a favorable aldol reaction with significantly high enantiomeric excess under wet solvent-free conditions by reducing the activation barrier of the reaction.
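The link between relative transition-structure energies and enantiomeric excess can be made concrete. Under kinetic control, the free-energy gap ΔΔG‡ between competing diastereomeric TSs fixes the rate ratio and hence the ee. The 2 kcal/mol value below is an arbitrary illustration, not a result from this study:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def enantiomeric_excess(ddg_kcal_mol, temp_k=298.15):
    """ee implied by the free-energy gap between competing diastereomeric TSs.

    Under kinetic control the rate ratio of the two pathways is
    r = exp(ddG/RT), giving ee = (r - 1) / (r + 1) = tanh(ddG / (2*R*T)).
    """
    return math.tanh(ddg_kcal_mol / (2 * R * temp_k))
```

An illustrative TS gap of 2 kcal/mol at room temperature corresponds to roughly 93% ee, which is why even a modest differential stabilization of one TS by the water bridge can translate into high stereoselectivity.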

Keywords: aldol reaction, DFT, organocatalysis, transition structure

Procedia PDF Downloads 407
182 Using a Card Game as a Tool for Developing a Design

Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner

Abstract:

Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under social and technological conditions that permeate 21st-century musical practices. This forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both subprojects explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting will be studied from a network perspective that allows us to view boundaries between online and offline as well as formal, informal, or hybrid contexts as permeable, and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. The subproject of the University of Erfurt therefore investigates school music lessons with the help of interviews, videography, and network maps, analyzing new digital pedagogical and didactic possibilities. In the first step, the international literature on songwriting in the music classroom was examined for design development. The analysis focused on the question of which methods and practices are circulating in the current literature. Results from this stage of the project form the basis for the first instructional design, which will help teachers plan regular music classes and subsequently allow us to reconstruct musical learning practices under these conditions. 
In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck of cards, both teachers and students themselves can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning and an instructional design for teaching music in the postdigital age.

Keywords: card game, collective songwriting, community of practice, network, postdigital

Procedia PDF Downloads 43
181 In Response to Worldwide Disaster: Academic Libraries’ Functioning During COVID-19 Pandemic Without a Policy

Authors: Dalal Albudaiwi, Mike Allen, Talal Alhaji, Shahnaz Khadimehzadah

Abstract:

As a pandemic, COVID-19 has impacted the whole world since late 2019. In other words, every organization, industry, and institution has been negatively affected by the Coronavirus. The uncertainty of how long the pandemic would last caused chaos at all levels. As with any other institution, public libraries were affected and transitioned to online services and resources. Internationally, it has been observed that some public libraries were well prepared for disasters such as the pandemic; even so, collections, users, services, technologies, staff, and budgets were all affected. Public libraries’ policies did not mention any plan for such a pandemic. Instead, their guidelines contain several rules about disasters in general, such as natural disasters. In this pandemic situation, libraries have faced a variety of uneasy circumstances. However, public libraries have always been clear about the role they play in serving their communities in both good and critical times. This role dwells on the traditional part public libraries play in providing information services and sources to satisfy their communities’ information needs, remarkably increasing people’s awareness of the importance of informational enrichment and enhancing society’s skills in dealing with information and information sources. Under critical circumstances, libraries play a different role. It goes beyond the traditional role of information provider to the untraditional role of a social institution that serves the community with whatever capabilities it has. This study takes two significant directions. The first focuses on investigating how libraries have responded to COVID-19 and how they manage disasters within their organizations. The second focuses on how libraries help their communities act during disasters and recover from the consequences. 
The current study examines how libraries prepare for disasters and the role of public libraries during disasters. We also propose “measures” as a model that libraries can use to evaluate the effectiveness of their response to disasters. We intend to focus on how libraries responded to this new disaster. Therefore, this study aims to develop a comprehensive policy that includes responding to a crisis such as COVID-19. An analytical lens inside the library as an organization and outside the organization’s walls will be documented based on analyzing disaster-related literature published in LIS publications. The study employs content analysis (CA) methodology, which is widely used in library and information science. The critical contribution of this work lies in the solutions it offers libraries and planners for preparing crisis management plans/policies, specifically to face a new global disaster such as the COVID-19 pandemic. Moreover, the study will help library directors evaluate their strategies and improve them properly. The significance of this study lies in guiding library directors to enhance library goals and address crucial issues such as: saving time, avoiding loss, saving budget, acting quickly during a crisis, maintaining libraries’ role during pandemics, finding the best response to disasters, and creating a plan/policy as a sample for all libraries.

Keywords: Covid-19, policy, preparedness, public libraries

Procedia PDF Downloads 59
180 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta). 
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development in FreqAI.
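The parameter-space idea described above can be illustrated with a minimal sketch (this is not FreqAI's actual implementation; the window size, feature count, and the 3-sigma threshold are assumptions made for the example). Prediction points that fall outside the region spanned by the recent training window are flagged and discarded rather than fed to the model:

```python
import math
import random

def fit_parameter_space(train_rows):
    """Record per-feature mean and standard deviation over the training window."""
    n, dim = len(train_rows), len(train_rows[0])
    means = [sum(row[j] for row in train_rows) / n for j in range(dim)]
    stds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in train_rows) / n) + 1e-12
            for j in range(dim)]
    return means, stds

def is_outlier(point, means, stds, k=3.0):
    """Flag a prediction point lying outside the space seen during training."""
    return any(abs((x - m) / s) > k for x, m, s in zip(point, means, stds))

random.seed(0)
# Recent sliding training window: 500 rows of 8 synthetic features.
train = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(500)]
means, stds = fit_parameter_space(train)

inlier = [0.0] * 8    # well inside the training distribution: keep the prediction
outlier = [10.0] * 8  # far outside: discard rather than trust the model here
print(is_outlier(inlier, means, stds), is_outlier(outlier, means, stds))
```

In a live deployment the window would slide forward with each retraining, so the notion of "outlier" adapts along with the model.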

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 70
179 Personal Data Protection: A Legal Framework for Health Law in Turkey

Authors: Veli Durmus, Mert Uydaci

Abstract:

Every patient who needs medical treatment must share health-related personal data with healthcare providers. Therefore, personal health data plays an important role in making health decisions and identifying health threats during every encounter between a patient and caregivers. In other words, health data can be defined as private and sensitive information protected by various health laws and regulations. In many cases, the data are an outcome of the confidential relationship between patients and their healthcare providers. Globally, almost all nations have their own laws, regulations, or rules to protect personal data. There is a variety of instruments that allow authorities to use health data or to set barriers to data sharing across international borders. For instance, Directive 95/46/EC of the European Union (EU) (also known as the EU Data Protection Directive) establishes harmonized rules within European borders. In addition, the General Data Protection Regulation (GDPR) will set further common principles in 2018. Because of Turkey's close policy relationship with the EU, this study provides information not only on regulations and directives but also on how they play a role in the legislative process in Turkey. Even though the decision is controversial, the Board has recently stated that private and public healthcare institutions are responsible for the patient call systems used by doctors to call people waiting outside a consultation room, in order to prevent unlawful processing of, and unlawful access to, personal data during treatment. In Turkey, the vast majority of private and public health organizations provide a service that uses personal data (i.e., the patient's name and ID number) to call the patient. According to the Board's decision, hospitals and other healthcare institutions are obliged to take all necessary administrative precautions and provide technical support to protect patient privacy.
However, this practice is not performed effectively and efficiently in most health services. For this reason, it is important to draw a legal framework for personal health data by stating the main purpose of this regulation and how to deal with complicated issues concerning personal health data in Turkey. The research is descriptive, covering data protection law for healthcare settings in Turkey. Primary as well as secondary data have been used for the study. The primary data include the information collected under current national and international regulations and laws. Secondary data include publications, books, journals, and empirical legal studies. Consequently, privacy and data protection regimes in health law show that there are obligations, principles, and procedures which shall be binding upon natural or legal persons who process health-related personal data. A comparative approach reveals significant differences among some EU member states due to different legal competencies, policies, and cultural factors. This study provides theoretical and practitioner implications by highlighting the need to illustrate the relationship between privacy and confidentiality in personal data protection in health law. Furthermore, this paper will help to define the legal framework for health law case studies on data protection and privacy.

Keywords: data protection, personal data, privacy, healthcare, health law

Procedia PDF Downloads 193
178 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework

Authors: S. Jimenez, R.Duarte, J.Sanchez-Choliz, I. Villanua

Abstract:

There is a certain consensus in the economic literature about the factors that have influenced historical differences in growth rates observed between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and productivity growth, which is directly influenced by technological developments. Following recent literature, we can say that ‘innovation pushes the technological frontier forward’ as well as encourages future innovation through the creation of externalities. In other words, productivity benefits from innovation are not fully appropriated by innovators; they also spread through the rest of the economy, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers), capturing the structural and technological characteristics of countries; and second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional way of calculating this through a multiregional input-output framework assumes that all countries have the same capabilities to absorb technology, but this is not the case: each one has different structural features and, this implies, different capabilities, as part of the literature claims. In order to capture these differences, we propose to use a weight based on specialization structure indexes: one related to the specialization of countries in high-tech sectors and the other based on a dispersion index.
We propose these two measures because, as far as we understand, country capabilities can be captured in different ways: through countries' specialization in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort across different sectors. Results suggest the increasing importance of country capabilities as trade openness increases. Besides, if we focus on the country rankings, we can observe that with high-tech-weighted embodied R&D, countries such as China, Taiwan, and Germany rise to the top five despite not having the highest intensities of R&D expenditure, showing the importance of country capabilities. Additionally, through a fixed-effects panel data model, we show that embodied R&D is indeed important in explaining labor productivity increases, even more so than direct R&D investments. This reflects that globalization is more important than has been acknowledged until now. However, it is true that almost all analyses done in this area consider the effect of direct R&D intensity at t-1 on economic growth. Nevertheless, from our point of view, R&D evolves as a delayed flow, and some time is needed before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, obtaining a lag of between 4 and 5 years.
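The embodied-R&D calculation sketched above can be made concrete with a toy input-output example (all coefficients, intensities, and capability weights below are invented for illustration, not the paper's estimates). Embodied intensity of sector j is the weighted direct intensity propagated through the Leontief inverse, approximated here by the series I + A + A² + ⋯:

```python
def embodied_rd(A, r_direct, weights, terms=50):
    """Approximate (w*r) @ (I - A)^-1 via the Leontief series I + A + A^2 + ...

    Embodied intensity of sector j = sum_i (w_i * r_i) * L_ij, i.e. the R&D
    carried into j's output through all direct and indirect input chains."""
    wr = [w * r for w, r in zip(weights, r_direct)]
    total, term = list(wr), list(wr)
    for _ in range(terms):
        # next term of the row-vector series: (wr @ A^k) @ A
        term = [sum(term[i] * A[i][j] for i in range(len(A))) for j in range(len(A))]
        total = [t + x for t, x in zip(total, term)]
    return total

A = [[0.10, 0.05, 0.02],   # technical coefficients (inputs per unit of output)
     [0.04, 0.12, 0.06],
     [0.03, 0.02, 0.08]]
r = [0.05, 0.01, 0.02]     # direct R&D intensity by sector
w = [1.5, 0.8, 1.0]        # specialization-based capability weight (assumed)

emb = embodied_rd(A, r, w)
print([round(x, 4) for x in emb])
```

Because the Leontief inverse has a diagonal greater than one and non-negative off-diagonal terms, each sector's embodied intensity exceeds its weighted direct intensity, which is exactly the spillover channel the abstract describes.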

Keywords: economic growth, embodied, input-output, technology

Procedia PDF Downloads 108
177 Electron Bernstein Wave Heating in the Toroidally Magnetized System

Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten

Abstract:

The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves. The energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for EBW heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization.
The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. This code was initially benchmarked against experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles. The modeling is compared with the data from the experiments.
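The resonance and cut-off conditions discussed above follow from two textbook formulas: the electron cyclotron frequency f_ce = eB/(2πmₑ) and the (O-mode) cut-off density n_c = ε₀mₑω²/e². A minimal sketch, assuming an illustrative 2.45 GHz source and the field that places the fundamental ECR at that frequency (these values are assumptions for the example, not TOMAS parameters taken from the abstract):

```python
import math

E = 1.602176634e-19      # elementary charge [C]
ME = 9.1093837015e-31    # electron mass [kg]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def cyclotron_freq(B):
    """Electron cyclotron frequency f_ce = e*B / (2*pi*m_e), in Hz."""
    return E * B / (2 * math.pi * ME)

def cutoff_density(f):
    """O-mode cut-off density n_c = eps0 * m_e * (2*pi*f)^2 / e^2, in m^-3."""
    return EPS0 * ME * (2 * math.pi * f) ** 2 / E ** 2

B = 0.0875   # magnetic field [T] that puts the fundamental ECR near 2.45 GHz
f = 2.45e9   # assumed heating frequency [Hz]
print(f"f_ce = {cyclotron_freq(B) / 1e9:.2f} GHz")
print(f"n_c  = {cutoff_density(f):.2e} m^-3")
```

Densities above n_c are exactly where the additional deposition channels listed in the abstract (UHR coupling, X-wave tunneling, EBW conversion) become the relevant heating routes.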

Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS

Procedia PDF Downloads 70
176 How to Assess the Attractiveness of Business Location According to the Mainstream Concepts of Comparative Advantages

Authors: Philippe Gugler

Abstract:

Goal of the study: The concept of competitiveness has been addressed by economic theorists and policymakers for several hundred years, with both groups trying to understand the drivers of economic prosperity and social welfare. The goal of this contribution is to address the major useful theoretical contributions that permit identifying the main drivers of a territory’s competitiveness. We first present the major contributions found in the classical and neo-classical theories. Then, we concentrate on two major schools providing significant thoughts on the competitiveness of locations: the Economic Geography (EG) School and the International Business (IB) School. Methodology: The study is based on a literature review of the classical and neo-classical theories, the economic geography theories, and the international business theories. This literature review establishes links between these theoretical mainstreams. The work rests on an academic framework: a meaningful literature review aimed at responding to our research question and at developing further research in this field. Results: The classical and neo-classical pioneering theories provide initial insights that territories are different and that these differences explain the discrepancies in their levels of prosperity and standards of living. These theories emphasized different factors impacting the level and the growth of productivity in a given area and therefore the degree of its competitiveness. However, these theories are not sufficient to identify more precisely the drivers and enablers of location competitiveness and to explain, in particular, the factors that drive the creation of economic activities, the expansion of economic activities, the creation of new firms, and the attraction of foreign firms. Prosperity is due to economic activities created by firms.
Therefore, we need more theoretical insights to scrutinize the competitive advantages of territories or, in other words, their ability to offer the best conditions that enable economic agents to achieve higher rates of productivity in open markets. Two major theories provide, to a large extent, the needed insights: economic geography theory and international business theory. The economic geography studies scrutinized here, from Marshall to Porter, aim to explain the drivers of the concentration of specific industries and activities in specific locations. These activity agglomerations may be due to the creation of new enterprises, the expansion of existing firms, and the attraction of firms located elsewhere. Regarding this last possibility, the international business (IB) theories focus on the comparative advantages of locations as far as the strategies of multinational enterprises (MNEs) are concerned. According to international business theory, the comparative advantages of a location serve firms not only in exploiting their ownership advantages (mostly as far as market-seeking, resource-seeking, and efficiency-seeking investments are concerned) but also in augmenting and/or creating new ownership advantages (strategic asset-seeking investments). The impact of a location on the competitiveness of firms is considered from both sides: the MNE’s home country and the MNE’s host country.

Keywords: competitiveness, economic geography, international business, attractiveness of businesses

Procedia PDF Downloads 117
175 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is one of the 3D-printing technologies that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process consists of using a light emitter to solidify photosensitive liquid resins layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, in this work, different combinations of a standard commercial SLA resin (Peopoly UV professional) with a vegetable-based resin were investigated. To reach this goal, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with a commercial acrylate-based resin. 1.0 wt% of diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) was used as photo-initiator, and the samples were printed using a Peopoly Moai 130. The machine was set to the standard configuration used for printing commercial resins. After printing was finished, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin. Finally, the samples were post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin at different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm⁻¹, which correspond to the C=C stretching of the AESO acrylic acids and the Peopoly acrylic groups, decrease significantly after the reaction. This decrease indicates the consumption of the double bonds during the radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm⁻¹ and the decrease of the signals at 809.5 and 983.1 cm⁻¹, which correspond to unsaturated double bonds, are both proofs of the successful polymerization.
Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but the value was still in the same range as other commercial resins. The elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples with a higher concentration of AESO absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual analysis did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resins is essential to provide sustainable alternatives to the commercial petroleum-based ones.

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 107
174 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context in human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: The contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc. 
words), and thus context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labeling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
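The hierarchical idea, encode each utterance, pool the context, then classify the target jointly with it, can be sketched in a toy stand-in (this is not the authors' model: the hashed embeddings are random, the linear head is untrained, and all dimensions are assumptions chosen for illustration):

```python
import math
import random

random.seed(7)
VOCAB, DIM = 1000, 16
# Random stand-in word embeddings (a real model would use a learned encoder).
EMB = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(VOCAB)]

def encode_utterance(tokens):
    """Level 1: mean-pool (hashed) token embeddings into one utterance vector."""
    ids = [sum(ord(c) for c in t) % VOCAB for t in tokens] or [0]
    return [sum(EMB[i][d] for i in ids) / len(ids) for d in range(DIM)]

def encode_conversation(context, target):
    """Level 2: pool the context utterances, then concatenate with the target."""
    ctx = [encode_utterance(u) for u in context] or [[0.0] * DIM]
    ctx_vec = [sum(v[d] for v in ctx) / len(ctx) for d in range(DIM)]
    return ctx_vec + encode_utterance(target)

W = [random.gauss(0, 1) for _ in range(2 * DIM)]  # untrained head, structure only

def toxicity_score(context, target):
    z = sum(f * w for f, w in zip(encode_conversation(context, target), W))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability of "toxic"

thread = [["are", "you", "ok"], ["fine", "thanks"]]
print(round(toxicity_score(thread, ["sure", "whatever"]), 3))
```

The point of the structure is that the same target utterance can receive different scores under different context vectors, which is exactly the contextual sensitivity the abstract argues flat classifiers fail to capture.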

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 143
173 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery

Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu

Abstract:

Recently, anaerobic membrane bioreactors (AnMBRs), which combine anaerobic biological treatment with membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. Conventional AnMBRs have several advantages, such as improved effluent quality, compact space usage, lower sludge yield, operation without aeration, and energy production. However, removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is their biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that utilizes osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane could effectively improve the removal of nitrogen and phosphorus. An anaerobic bioreactor with an FO membrane (AnOMBR) can retain the concentrated organic matter and nutrients. Furthermore, phosphorus is a non-renewable resource; due to the high rejection property of the FO membrane, a high amount of phosphorus could be recovered from the combination of AnMBR and FO. In this study, the development of a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction for simultaneous phosphorus and clean water recovery from wastewater was evaluated. A laboratory-scale AnOMBR utilizing cellulose triacetate (CTA) membranes with an effective membrane area of 130 cm² was fully submerged into a 5.5 L bioreactor at 30-35℃. The active-layer-facing-feed-stream orientation was utilized to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS) at a cross-flow velocity of 0.7 cm/s. Magnesium sulphate (MgSO₄) solution was used as the DS. The microfiltration membrane periodically extracted about 1 L of solution whenever the TDS reached 5 g/L, to recover phosphorus and simultaneously control salt accumulation in the bioreactor.
As the experiment progressed, an average water flux of around 1.6 LMH was achieved. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, whereas ammonia was only partially removed; an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system therefore periodically uses MF extraction for phosphorus recovery with simultaneous pH adjustment. The overall performance demonstrates that the novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery from wastewater, and hence this new concept can replace conventional AnMBRs in the future.
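As a quick consistency check on the reported operating point (a back-of-the-envelope sketch using only the numbers stated in the abstract), the daily permeate volume follows directly from flux times membrane area:

```python
# Daily permeate from a 130 cm^2 FO membrane at the stated average flux.
flux_lmh = 1.6        # average water flux [L m^-2 h^-1], from the abstract
area_m2 = 130e-4      # effective membrane area: 130 cm^2 expressed in m^2
hours_per_day = 24

permeate_l_per_day = flux_lmh * area_m2 * hours_per_day  # ~0.5 L/day
print(f"{permeate_l_per_day:.3f} L/day")
```

Roughly half a liter per day from the 5.5 L reactor, which is consistent with the roughly weekly-scale 1 L MF extractions triggered at 5 g/L TDS.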

Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor

Procedia PDF Downloads 238
172 The Four Pillars of Islamic Design: A Methodology for an Objective Approach to the Design and Appraisal of Islamic Urban Planning and Architecture Based on Traditional Islamic Religious Knowledge

Authors: Azzah Aldeghather, Sara Alkhodair

Abstract:

In the modern urban planning and architecture landscape, with western ideologies and styles becoming the mainstay of experience and definitions globally, the Islamic world requires a methodology that defines its expression and transcends cultural, societal, and national styles. This paper proposes a methodology, as an objective system, to define, evaluate, and apply traditional Islamic knowledge to Islamic urban planning and architecture, providing the Islamic world with a system to manifest its approach to design. The methodology is expressed as Four Pillars, based on the traditional meanings of Arabic words roughly translated as Pillar One: The Principles (Al Mabade’), Pillar Two: The Foundations (Al Asas), Pillar Three: The Purpose (Al Ghaya), and Pillar Four: Presence (Al Hadara). Pillar One: The Principles expresses the unification (Tawheed) pillar of Islam, “There is no God but God”, and comprises seven principles: 1. Human values (Qiyam Al Insan), 2. Universal language as sacred geometry, 3. Fortitude© and Benefitability©, 4. Balance and integration: conjoining the opposites, 5. Man, time, and place, 6. Body, mind, spirit, and essence, 7. Unity of design expression to achieve unity, harmony, and security in design. Pillar Two: The Foundations is based on two foundations: “Muhammad is the Prophet of God” and his relationship to the renaming of Medina City as a prototypical city or place, which defines a center space for collection, conjoined with an analysis of the Medina Charter as a base for humanistic design. Pillar Three: The Purpose (Al Ghaya) comprises four criteria: the naming of the design as a title, the intention of the design as an end goal, the reasoning behind the design, and the priorities of expression. Pillar Four: Presence (Al Hadara) is usually translated as civilization; in Arabic, the root of Hadara is to be present.
This has five primary definitions utilized to express the act of design: Wisdom (Hikma) as a philosophical concept; Identity (Hawiya) of the form; Dialogue (Hiwar), the requirements of the project vis-à-vis what the designer wishes to convey; Expression (Al Ta’abeer) the designer wishes to apply; and the Resources (Mawarid) available. The proposal provides examples, where applicable, of past and present designs that exemplify the manifestation of the Pillars. The proposed methodology endeavors to return Islamic urban planning and architecture design to its a priori position as a leading design expression, adaptable to any place, time, and cultural expression, while providing a base for analysis that transcends the concept of style and external form as a definition and expresses the singularity of the esoteric “Spiritual” aspects in a rational, principled, and logical manner, as clearly addressed in Islam’s essence.

Keywords: Islamic architecture, Islamic design, Islamic urban planning, principles of Islamic design

Procedia PDF Downloads 75
171 Biostratigraphic Significance of Shaanxilithes ningqiangensis from the Tal Group (Cambrian), Nigalidhar Syncline, Lesser Himalaya, India and Its GC-MS Analysis

Authors: C. A. Sharma, Birendra P. Singh

Abstract:

We recovered 40 well-preserved ribbon-shaped, meandering specimens of S. ningqiangensis from the Earthy Dolomite Member (Krol Group) and from calcareous siltstone beds of the Earthy Siltstone Member (Tal Group), showing closely spaced annulations and lacking branching. The beginning and terminal points are indistinguishable. In certain cases, individual specimens are characterized by irregular, low-angle to high-angle sinuosity. The fossil has been variously described as a body fossil, an ichnofossil, and an alga. A detailed study of this enigmatic fossil is needed to resolve the long-standing controversy regarding its phylogenetic and stratigraphic placement, which will be an important contribution to the evolutionary history of metazoans. S. ningqiangensis has been known from the late Neoproterozoic (Ediacaran) of southern and central China (Sichuan, Shaanxi, Qinghai, and Guizhou provinces and the Ningxia Hui Autonomous Region), from the Siberian Platform, and, across the Pc/C boundary, from the latest Neoproterozoic to earliest Cambrian of northern India. Shaanxilithes is considered an Ediacaran organism that spans the Precambrian–Cambrian boundary, an interval marked by significant taphonomic and ecological transformations that include not only innovation but also probable extinction. All past well-constrained finds of S. ningqiangensis are restricted to the Ediacaran. However, with the new recoveries of the fossil from the Nigalidhar Syncline, the stratigraphic status of the S. ningqiangensis-bearing Earthy Siltstone Member of the Shaliyan Formation of the Tal Group (Cambrian) is rendered uncertain, though the overlying Chert Member in the adjoining Korgai Syncline has yielded definite early Cambrian acritarchs. The moot question is whether the Earthy Siltstone Member represents an Ediacaran or an early Cambrian age. It would be interesting to find out whether Shaanxilithes, so far known from Ediacaran sequences, could transgress into the early Cambrian or, in simple words, withstand the Pc/C boundary event.
GC-MS data show that the S. ningqiangensis structure is formed of hydrocarbon organic compounds filled with inorganic fillers such as silica, calcium, and phosphorus. The structure is a mixture of organic compounds of high molecular weight, containing several saturated rings with hydrocarbon chains having an occasional isolated carbon-carbon double bond and containing, in addition, small amounts of nitrogen, sulfur, and oxygen. The data also revealed the presence of nitrogen, either in the form of peptide chains (amides/amines) or in chemical form (e.g., nitrates/nitrites). The formula weight and the C/H weight ratio are what would be expected for algae-derived organics, since algae produce fatty acids as well as other hydrocarbons such as carotenoids.

Keywords: GC-MS Analysis, lesser himalaya, Pc/C Boundary, shaanxilithes

Procedia PDF Downloads 233
170 The Construction of the Women's Self in Law: A Case of Medico-Legal Jurisprudence Textbooks in Rape Cases

Authors: Rahul Ranjan

Abstract:

Using gender as a category to cull out historical analysis, feminist scholars have produced a plethora of literature on the sexual symbolics and carnal practices of modern European empires. At a symbolic level, the penetration and conquest of faraway lands was charged with sexual significance and intrigue. The white male’s domination and possession of dark and fertile lands in Africa, Asia, and the Americas offered, in Anne McClintock’s words, ‘a fantastic magic lantern of the mind onto which Europe projected its forbidden sexual desires and fears’. The politics of rape were also symbolically significant to the politics of empire. To the colonized subject, rape was a fearsome factor, a language that spoke of the violent and voracious nature of imperial exploitation. The colonized often looked at rape as an act which colonizers used as a tool of oppression. Rape as an act of violence was encoded into the legal structure under the helm of Lord Macaulay in the so-called ‘Age of Reform’ in 1860 under the IPC (Indian Penal Code). Earlier, Lord Macaulay had formed the Indian Law Commission in 1837, under which he drafted a bill defining the ‘crime of rape as sexual intercourse by a man with a woman against her will and without her consent, except in cases involving girls under nine years of age where consent was immaterial’. The modern English law of rape formulated under the colonial era brought twofold issues to the forefront. On the one hand, it deployed ‘technical experts’ who wrote textbooks of medical jurisprudence that were cited as credentials to make cases more ‘objective’; on the other hand, presumptions about barbaric subjects and about the colonized woman’s body, seen as docile yet prone to adultery, were reflected in cases. The perceived untrustworthiness of the native witness also remained an imperative for British jurists to place extra emphasis on the ‘objective’ and the ‘presumptuous’.
This sort of formulation put women at a disadvantage in the pursuit of justice because it disadvantaged them doubly: through British legality and through its thinking about rape. The imperial morality that acted as a vanguard of women’s chastity coincided with the language of science propagated in the post-Enlightenment period, which not only annulled non-conformist ideas but also made itself a hegemonic language, often used as a tool and language in the encoding of law. The medico-legal understanding of rape in colonial India has clear imprints on post-colonial legality. The onus on the rape victim was dictated for the longest time, and still is, by the widely cited idea that ‘there should be signs, marks of resistance on the body of the victim’; otherwise, the act is likely to be considered consensual. Having said so, this paper looks at the textual continuity that has prolonged the colonial construct of the woman’s body and self.

Keywords: body, politics, textual construct, phallocentric

Procedia PDF Downloads 352