Search results for: maximum power point
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13760

1010 Politics in Academia: How the Diffusion of Innovation Relates to Professional Capital

Authors: Autumn Rooms Cypres, Barbara Driver

Abstract:

The purpose of this study is to extend discussions about innovations and career politics. Research questions that grounded this effort were: How does an academic learn the unspoken rules of the academy? What happens politically to an academic’s career when their research speaks against the grain of society? Do professors perceive signals that it is time to move on to another institution or even to another career? Epistemology and Methods: This qualitative investigation focused on examining the perceptions of academics. Therefore an open-ended field study, based on Grounded Theory, was used. This naturalistic paradigm (Lincoln & Guba, 1985) was selected because it tends to understand information in terms of wholes, of patterns, and in relation to the context of the environment. The technique for gathering data was semi-structured, in-depth interviewing. Twenty-five academics across the United States were interviewed about their career trajectories and the politics and opportunities they have encountered in relation to their research efforts. Findings: The analysis of interviews revealed four themes: Academics are beholden to two specific networks of power that influence their sense of job security: the local network based on their employing university and the national network of scholars who share the same field of research. The fights over what counts as research can and do drift from the intellectual to the political and the personal. Academics were able to identify specific instances of shunning and/or punishment from their colleagues related directly to the dissemination of research that spoke against the grain of the local or national networks. Academics identified specific signals from both of these networks indicating that their career was flourishing or withering. Implications: This research examined insights from those who persevered when the fights over what and who counts drifted from the intellectual to the political and the personal.
Considerations of why such drifts happen were offered in the form of a socio-political construct called Fit, which included thoughts on hegemony, discourse, and identity. This effort reveals the importance of understanding what professional capital is relative to job security. It also reveals that fear is an enmeshed and often unspoken part of the culture of Academia. Further research to triangulate these findings would be helpful within international contexts.

Keywords: politics, academia, job security, context

Procedia PDF Downloads 313
1009 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and of the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. To calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were computed as the mean over each mesh element. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
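As a rough illustration of the swirl-ratio post-processing described above, the sketch below computes a mass-weighted mean angular velocity of the charge from per-element CFD results and normalizes it by the crankshaft speed. The element data structure, the solid-body assumption (ω = v/r per element), and the normalization are illustrative assumptions, not the authors' actual ANSYS FLUENT workflow.

```python
import math

def swirl_ratio(elements, engine_speed_rpm):
    """elements: dicts with tangential velocity v [m/s], radius r [m]
    from the cylinder axis, and element mass m [kg]."""
    crank_omega = 2 * math.pi * engine_speed_rpm / 60  # crankshaft speed [rad/s]
    total_mass = sum(e["m"] for e in elements)
    # mass-weighted mean angular velocity of the charge (omega = v / r per element)
    omega_charge = sum(e["m"] * e["v"] / e["r"] for e in elements) / total_mass
    return omega_charge / crank_omega  # dimensionless [-]

# toy element data for two cells
cells = [{"v": 12.0, "r": 0.03, "m": 1e-5},
         {"v": 8.0, "r": 0.02, "m": 2e-5}]
print(round(swirl_ratio(cells, 3800), 3))
```

In a real workflow the element velocities, radii, and masses would be exported from the solver for the cross section of interest; the normalizing speed is sometimes taken as engine speed, sometimes as a reference charge speed, so the definition should be checked against the solver's own swirl-number output.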

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 192
1008 Developing Alternatives: Citizens' Perspectives on Causes and Ramifications of Political Conflict in Ivory Coast from 2002 to 2009

Authors: Suaka Yaro

Abstract:

This article provides an alternative examination of the causes and the ramifications of the Ivorian political conflict from 2002 to 2009. The researcher employed a constructivist epistemology and a qualitative methodology based upon fieldwork in different African cities, interviewing Ivorians outside and within Ivory Coast. A purposive sample of fourteen respondents was selected based on their involvement in the Ivorian conflict, and their experiences of the causes and effects of the conflict were tapped for analysis. The data collection instruments were semi-structured interview questions, an open-ended semi-structured questionnaire, and documentary analysis. The perceptions of these participants on the causes, effects and possible solutions to the endemic conflict in their homeland hold key perspectives that have hitherto been ignored in the whole debate about the Ivorian political conflict and its legacies. Finally, from the synthesized findings of the investigation, the researcher concluded that the causes of the conflict were competition for scarce resources, bad governance, media incitement, xenophobia, incessant political power struggle and the proliferation of small firearms entering the country. The effects experienced during the conflict were human rights violations, destruction of property including UN premises, and people displaced both internally and externally. Some recommendations made include: Efforts should be made by the government to strengthen good relationships among different ethnic groups and help them adapt to new challenges that confront democratic developments in the country. The government should organise a South African-style Truth and Reconciliation Commission to revisit the horrors of the past in order to heal wounds and prevent future occurrence of the conflict.
Employment opportunities and other income-generating ventures for Ivorians should be created by the government by attracting local and foreign investors. The numerous rebels should be given special skills training in order for them to be able to live among the communities in Ivory Coast. A government of national unity should be encouraged in situations like this.

Keywords: displaced, federalism, pluralism, identity politics, grievance, eligibility, greed

Procedia PDF Downloads 214
1007 Ambient Factors in the Perception of Crowding in Public Transport

Authors: John Zacharias, Bin Wang

Abstract:

Travel comfort is increasingly seen as crucial to effecting the switch from private motorized modes to public transit. Surveys suggest that travel comfort is closely related to perceived crowding, which may involve lack of available seating, difficulty entering and exiting, jostling and other physical contacts with strangers. As found in studies on environmental stress, other factors may moderate perceptions of crowding; in this case, we hypothesize that the ambient environment may play a significant role. Travel comfort was measured by applying a structured survey to randomly selected passengers (n=369) on 3 lines of the Beijing metro on workdays. Respondents were standing with all seats occupied and with car occupancy at 14 levels. A second research assistant filmed the metro car while passengers were interviewed, to obtain the total number of passengers. Metro lines 4, 6 and 10, which travel through the central city north-south, east-west and circumferentially, were selected. Respondents evaluated the following factors: crowding, noise, smell, air quality, temperature, illumination, vibration and perceived safety as they experienced them at the time of interview, and then were asked to rank these 8 factors according to their importance for their travel comfort. Evaluations were semantic differentials on a 7-point scale from highly unsatisfactory (-3) to highly satisfactory (+3). The control variables included age, sex, annual income and trip purpose. Crowding was assessed most negatively, with 41% of the scores between -3 and -2. Noise and air quality were also assessed negatively, with two-thirds of the evaluations below 0. Illumination was assessed most positively, followed by perceived safety, vibration and temperature, all scoring at indifference (0) or slightly positive. Perception of crowding was linearly and positively related to the number of passengers in the car. Linear regression tested the impact of ambient environmental factors on the perception of crowding.
Noise intensity accounted for more of the perception of crowding than the actual number of individuals in the car, with smell also contributing. Other variables do not interact with the crowding variable, although the evaluations are distinct. In all, only one-third of the perception of crowding is explained by the number of people (R2=.154), with the other ambient environmental variables accounting for two-thirds of the explained variance (R2=.316). However, when ranking the factors by their importance to travel comfort, perceived crowding made up 69% of the first rank, followed by noise at 11%. At rank 2, smell dominates (25%), followed by noise and air quality (17%). Commuting to work induces significantly lower evaluations of travel comfort, with shopping trips the most positive. Clearly, travel comfort is particularly important to commuters. Moreover, their perception of crowding while travelling on the metro is highly conditioned by the ambient environment in the metro car. Focussing attention on the ambient environmental conditions of the metro is an effective way to address travellers' primary concerns with overcrowding. In general, the strongly held opinions on travel comfort require more attention in the effort to induce ridership in public transit.
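The R² figures above come from linear regression of perceived crowding on passenger counts. A minimal sketch of that computation, with invented toy data and a pure-Python least-squares fit (not the authors' analysis code), looks like this:

```python
def r_squared(x, y):
    """R^2 of a simple one-predictor least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

passengers = [20, 45, 60, 80, 95, 120]  # passengers in the car (toy values)
crowding = [2, 1, 0, -1, -2, -3]        # semantic-differential crowding score
print(round(r_squared(passengers, crowding), 3))
```

In the study's multi-predictor model the ambient variables (noise, smell, etc.) would enter as additional regressors, which is where the jump from R²=.154 to R²=.316 comes from.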

Keywords: ambient environment, mass rail transit, public transit, travel comfort

Procedia PDF Downloads 257
1006 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada

Authors: Stefan W. Kienzle

Abstract:

The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks down daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain threshold temperatures (0, ±10, ±20, +25, +30 °C), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the nonparametric Sen's slope test.
A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5°C in the South and 6-7°C in the North, summers show the weakest warming during the same period, ranging from about 0.5-1.5°C. New agricultural opportunities exist in central regions where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both doubled to quadrupled during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
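The two trend tests named above can be sketched compactly. The toy annual index series below is invented for illustration; a full analysis would also compute the variance of S for significance levels, which this sketch omits.

```python
from itertools import combinations

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs over all ordered pairs.
    S < 0 suggests a downward trend, S > 0 an upward trend."""
    return sum((b > a) - (b < a) for a, b in combinations(series, 2))

def sens_slope(series):
    """Sen's slope: median of all pairwise slopes, x = 0, 1, 2, ... years."""
    slopes = sorted((b - a) / (j - i)
                    for (i, a), (j, b) in combinations(enumerate(series), 2))
    n = len(slopes)
    mid = n // 2
    return slopes[mid] if n % 2 else (slopes[mid - 1] + slopes[mid]) / 2

frost_days = [120, 118, 119, 115, 113, 114, 110]  # toy index, one value per year
print(mann_kendall_s(frost_days))   # negative: fewer frost days over time
print(round(sens_slope(frost_days), 2))  # trend magnitude in days per year
```

The same pair of functions would be applied per grid cell and per index, yielding the trend maps the abstract describes.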

Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes

Procedia PDF Downloads 84
1005 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays

Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner

Abstract:

Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that can provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots in early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child’s actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested with a two-year-old boy with Down syndrome over eight sessions. The child presented delays throughout his motor development, the current one concerning the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g. climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model’s parameters, which is a critical step when dealing with small data sets such as in this paradigm.
The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child’s mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g. taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction. Smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
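One common way to realize the 'smoothing' idea described above is additive (Laplace-style) smoothing of the MDP transition estimates, so that transitions never observed in a small data set still keep a small nonzero probability. The sketch below is a hedged illustration only: the state names, action name, and counts are invented, and the authors' actual smoothing technique may differ.

```python
from collections import Counter

STATES = ["resting", "crawling", "stepping"]  # hypothetical child states

def smoothed_transitions(observations, action, alpha=1.0):
    """Estimate P(next_state | state, action) with additive smoothing.
    observations: list of (state, action, next_state) tuples."""
    probs = {}
    for s in STATES:
        counts = Counter(ns for (st, a, ns) in observations
                         if st == s and a == action)
        total = sum(counts.values()) + alpha * len(STATES)
        probs[s] = {ns: (counts[ns] + alpha) / total for ns in STATES}
    return probs

obs = [("resting", "robot_signals", "crawling"),
       ("resting", "robot_signals", "crawling"),
       ("crawling", "robot_signals", "stepping")]
p = smoothed_transitions(obs, "robot_signals")
print(round(p["resting"]["crawling"], 2))  # boosted by the two observations
print(round(p["resting"]["stepping"], 2))  # unseen, but still nonzero
```

With only three observed transitions, the unsmoothed maximum-likelihood estimate would assign probability zero to every unseen transition, which is exactly the small-sample failure mode smoothing guards against.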

Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation

Procedia PDF Downloads 285
1004 Michel Foucault’s Docile Bodies and The Matrix Trilogy: A Close Reading Applied to the Human Pods and Growing Fields in the Films

Authors: Julian Iliev

Abstract:

The recent release of The Matrix Resurrections persuaded many film scholars that The Matrix trilogy had lost its appeal and its concepts were largely outdated. This study examines the human pods and growing fields in the trilogy. Their functionality is compared to Michel Foucault’s concept of docile bodies, linking the fictional and contemporary worlds. This paradigm is scrutinized through the surveillance literature. The analogy brings to light common elements of hidden surveillance practices in technologies. The comparison illustrates the effects of the body manipulation portrayed in the movies and its relevance to contemporary surveillance practices. Many scholars have utilized a close reading methodology in film studies (J. Bizzocchi, J. Tanenbaum, P. Larsen, S. Herbrechter, and Deacon et al.). The use of a particular lens through which a media text is examined is an indispensable factor that needs to be incorporated into the methodology. The study spotlights the scenes from the trilogy depicting the human pods and growing fields. The functionality of the pods and the fields compares directly with Foucault’s concept of docile bodies. By utilizing Foucault’s study as a lens, the research will unearth hidden components of and insights into the films. Foucault recognizes three disciplines that produce docile bodies: 1) manipulation and the interchangeability of individual bodies, 2) elimination of unnecessary movements and management of time, and 3) a command system guaranteeing constant supervision and continuity protection. These disciplines can be found in the pods and growing fields. Each body occupies a single pod, aiding easier manipulation and fast interchangeability. The movement of the bodies in the pods is reduced to the absolute minimum. Thus, the body is transformed into the ultimate object of control: minimum movement correlates to maximum energy generation. Supervision is exercised by wiring the body with numerous types of cables.
This ultimate supervision of body activity reduces the body’s purpose to mere functioning. If a body does not function as an energy source, it is unplugged, ejected, and liquefied. The command system secures the constant supervision and continuity of the process. To Foucault, the disciplines are distinctly different from slavery because they stop short of a total takeover of the bodies. This is a clear difference from the slave system implemented in the films. Even though the films' system might lack sophistication, it makes up for it in elevated functionality. Further, the surveillance literature illustrates the connection between the generation of body energy in The Matrix trilogy and the generation of individual data in contemporary society. This study found that the three disciplines producing docile bodies were present in the portrayal of the pods and fields in The Matrix trilogy. The above comparison, combined with the surveillance literature, yields insights into analogous processes and contemporary surveillance practices. Thus, the constant generation of energy in The Matrix trilogy can be equated to the constant generation of data in contemporary society. This essay shows the relevance of the body manipulation concept in the Matrix films to contemporary surveillance practices.

Keywords: docile bodies, film trilogies, matrix movies, michel foucault, privacy loss, surveillance

Procedia PDF Downloads 84
1003 Newly Designed Ecological Task to Assess Cognitive Map Reading Ability: Behavioral Neuro-Anatomic Correlates of Mental Navigation

Authors: Igor Faulmann, Arnaud Saj, Roland Maurer

Abstract:

Spatial cognition consists of a plethora of high-level cognitive abilities; among them, the ability to learn and to navigate in large-scale environments is probably one of the most complex skills. Navigation is thought to rely on the ability to read a cognitive map, defined as an allocentric representation of one's environment. These representations are intimately related to the two geometrical primitives of the environment: distance and direction. Also, many recent studies point to a predominant hippocampal and parahippocampal role in spatial cognition, as well as in the more specific cluster of navigational skills. In a previous study in humans, we used a newly validated test assessing cognitive map processing by evaluating the ability to judge relative distances and directions: the CMRT (Cognitive Map Recall Test). This study identified in topographically disorientated patients (1) behavioral differences between the evaluation of distances and of directions, and (2) distinct causality patterns assessed via VLSM (i.e., distinct cerebral lesions cause distinct response patterns depending on the modality: distance vs. direction questions). Thus, we hypothesized that: (1) if the CMRT really taps into the same resources as real navigation, there would be hippocampal, parahippocampal, and parietal activation, and (2) there exist underlying neuroanatomical and functional differences between the processing of these two modalities. Aiming toward a better understanding of the neuroanatomical correlates of the CMRT in humans, and more generally toward a better understanding of how the brain processes the cognitive map, we adapted the CMRT as an fMRI procedure. 23 healthy subjects (11 women, 12 men), all living in Geneva for at least 2 years, underwent the CMRT in fMRI. Results show, for distance and direction taken together, that the most active brain regions are the parietal, frontal and cerebellar areas.
Additionally, and as expected, patterns of brain activation differ when comparing the two modalities. Distance processing seems to rely more on parietal regions (compared to other brain regions in the same modality and also to direction). It is interesting to note that no significant activity was observed in the hippocampal or parahippocampal areas for this modality. Direction processing seems to tap more into frontal and cerebellar brain regions (compared to other brain regions in the same modality and also to distance). Significant hippocampal and parahippocampal activity was shown only in this modality. These results demonstrate a complex interaction of structures that is compatible with the response patterns observed in other navigational tasks, thus showing that the CMRT taps at least partially into the same brain resources as real navigation. Additionally, the differences between the processing of distances and directions lead to the conclusion that the human brain processes each modality distinctly. Further research should focus on the dynamics of this processing, allowing a clearer understanding of the relation between the two sub-processes.

Keywords: cognitive map, navigation, fMRI, spatial cognition

Procedia PDF Downloads 287
1002 Curcumin and Its Analogues: Potent Natural Antibacterial Compounds against Staphylococcus aureus

Authors: Prince Kumar, Shamseer Kulangara Kandi, Diwan S. Rawat, Kasturi Mukhopadhyay

Abstract:

Staphylococcus aureus is the most pathogenic of all staphylococci, a major cause of nosocomial infections, and known for acquiring resistance to various commonly used antibiotics. Due to the widespread use of synthetic drugs, clinicians are now facing a serious threat in healthcare. The increasing resistance in staphylococci has created a need for alternatives to these synthetic drugs. One of the alternatives is natural plant-based medicine, for both disease prevention and the treatment of chronic diseases. Among such natural compounds, curcumin is one of the most studied molecules and has been an integral part of traditional medicines and Ayurveda since ancient times. It is a natural polyphenolic compound with diverse pharmacological effects, including anti-inflammatory, antioxidant, anticancer and antibacterial activities. In spite of its efficacy and potential, curcumin has not yet been approved as a therapeutic agent because of its low solubility, low bioavailability, and rapid metabolism in vivo. The presence of the central β-diketone moiety in curcumin is responsible for its rapid metabolism. To overcome this, in the present study, curcuminoids were designed by modifying the central β-diketone moiety of curcumin into a monocarbonyl moiety, and their antibacterial potency against S. aureus ATCC 29213 was determined. Further, the mode of action and hemolytic activity of the most potent curcuminoids were studied. Minimum inhibitory concentration (MIC) determination and in vitro killing kinetics were used to study the antibacterial activity of the designed curcuminoids. For the hemolytic assay, mouse red blood cells were incubated with the curcuminoids and hemoglobin release was measured spectrophotometrically.
The mode of action of the curcuminoids was analysed by a membrane depolarization assay using the membrane-potential-sensitive dye 3,3’-dipropylthiadicarbocyanine iodide (DiSC3(5)) through spectrofluorimetry and a membrane permeabilization assay using calcein-AM through flow cytometry. Antibacterial screening of the designed library (61 curcuminoids) revealed excellent in vitro potency of six compounds against S. aureus (MIC 8 to 32 µg/ml). Moreover, these six compounds were found to be non-hemolytic up to 225 µg/ml, which is much higher than their corresponding MIC values. The in vitro killing kinetics data showed five of these lead compounds to be bactericidal, causing a >3 log reduction in the viable cell count within 4 hrs at 5 × MIC, while the sixth compound was found to be bacteriostatic. The depolarization assay revealed that all six curcuminoids caused depolarization in their corresponding MIC range. Further, the membrane permeabilization assay showed that all six curcuminoids caused permeabilization at 5 × MIC in 2 hrs. The membrane depolarization and permeabilization caused by the curcuminoids were found to correlate with their corresponding killing efficacy. Both assays point to membrane perturbation as a likely primary mode of action for these curcuminoids. Overall, the present study yielded six water-soluble, non-hemolytic, membrane-active curcuminoids and provides an impetus for further research on the therapeutic use of these lead curcuminoids against S. aureus.
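The ">3 log reduction" criterion used above to distinguish bactericidal from bacteriostatic activity is just a log10 drop in viable counts. A minimal sketch with invented CFU values (not the authors' data):

```python
import math

def log_reduction(cfu_initial, cfu_final):
    """log10 drop in viable cell count; >= 3 is conventionally bactericidal."""
    return math.log10(cfu_initial / cfu_final)

initial = 1e6   # CFU/ml at time zero (toy value)
after_4h = 5e2  # CFU/ml after 4 h at 5x MIC (toy value)
drop = log_reduction(initial, after_4h)
print(round(drop, 2))
print("bactericidal" if drop >= 3 else "bacteriostatic")
```

A 3-log reduction corresponds to killing 99.9% of the inoculum, which is why the five compounds reaching it within 4 h are classed as bactericidal.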

Keywords: antibacterial, curcumin, minimum inhibitory concentration, Staphylococcus aureus

Procedia PDF Downloads 163
1001 A Metric to Evaluate Conventional and Electrified Vehicles in Terms of Customer-Oriented Driving Dynamics

Authors: Stephan Schiffer, Andreas Kain, Philipp Wilde, Maximilian Helbing, Bernard Bäker

Abstract:

Automobile manufacturers progressively focus on a downsizing strategy to meet the EU's CO2 requirements concerning type-approval consumption cycles. The reduction in naturally aspirated engine power is compensated by increased levels of turbocharging. By downsizing conventional engines, CO2 emissions are reduced. However, downsizing also implies major challenges regarding longitudinal dynamic characteristics. An example of this circumstance is the delayed turbocharger-induced torque reaction, which leads to a partially poor response behavior of the vehicle during acceleration operations. That is why it is important to focus conventional drive train design on real customer driving again. The dynamic maneuvers currently discussed by journals and car manufacturers, such as the 0-100 km/h acceleration time, describe the longitudinal dynamics experienced by a driver inadequately. For that reason, we present the realization and evaluation of a comprehensive proband study. Subjects are provided with different vehicle concepts (electrified vehicles, vehicles with naturally aspirated engines, vehicles with different turbocharger concepts, etc.) in order to find out which dynamic criteria are decisive for a subjectively strong acceleration and response behavior of a vehicle. Subsequently, realistic acceleration criteria are derived. By weighting the criteria, an evaluation metric is developed to objectify customer-oriented transient dynamics. Fully electrified vehicles are the benchmark in terms of customer-oriented longitudinal dynamics. The electric machine provides the desired torque almost without delay. This advantage compared to combustion engines is especially noticeable at low engine speeds. In conclusion, we will show to which extent the customer-relevant longitudinal dynamics of conventional vehicles can be approximated to those of electrified vehicle concepts. Therefore, various technical measures (turbocharger concepts, 48V electric chargers, etc.)
and drive train designs (e.g. varying the final drive) are presented and evaluated in order to strengthen the vehicle’s customer-relevant transient dynamics. The newly developed evaluation metric will be used as the rating measure.
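A metric of the weighted-criteria kind described above can be sketched as a weighted sum of normalized acceleration criteria. Everything concrete here is an illustrative assumption: the criterion names, the weights, and the best/worst bounds are invented, since the abstract does not publish them.

```python
CRITERIA = {
    # name: (weight, best value, worst value) -- lower times are better
    "time_80_120_kmh_s": (0.5, 4.0, 12.0),
    "torque_response_delay_s": (0.3, 0.0, 1.5),
    "time_0_100_kmh_s": (0.2, 5.0, 15.0),
}

def dynamics_score(measured):
    """Weighted sum of criteria, each normalized to [0, 1]."""
    score = 0.0
    for name, (weight, best, worst) in CRITERIA.items():
        # 1.0 at the best value, 0.0 at the worst, clamped in between
        normalized = max(0.0, min(1.0, (worst - measured[name]) / (worst - best)))
        score += weight * normalized
    return score

electric = {"time_80_120_kmh_s": 4.5, "torque_response_delay_s": 0.05,
            "time_0_100_kmh_s": 6.0}
downsized = {"time_80_120_kmh_s": 7.0, "torque_response_delay_s": 0.8,
             "time_0_100_kmh_s": 9.0}
print(dynamics_score(electric) > dynamics_score(downsized))
```

With weights derived from the proband study, such a score would rank the near-instant torque response of the electric machine above a turbocharged concept even when their 0-100 km/h times are similar, which is the point the abstract makes.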

Keywords: 48V, customer-oriented driving dynamics, electric charger, electrified vehicles, vehicle concepts

Procedia PDF Downloads 397
1000 Piled Critical Size Bone-Biomimetic and Biominerizable Nanocomposites: Formation of Bioreactor-Induced Stem Cell Gradients under Perfusion and Compression

Authors: W. Baumgartner, M. Welti, N. Hild, S. C. Hess, W. J. Stark, G. Meier Bürgisser, P. Giovanoli, J. Buschmann

Abstract:

Perfusion bioreactors are used to solve problems in tissue engineering related to sufficient nutrient and oxygen supply. Such problems especially occur in critical-size grafts because vascularization after implantation is often too slow, ending up in necrotic cores. Biominerizable and biocompatible nanocomposite materials are attractive and suitable scaffold materials for bone tissue engineering because they offer mineral components in organic carriers, mimicking natural bone tissue. In addition, human adipose-derived stem cells (ASCs) can potentially be used to improve bone healing as they are capable of differentiating towards osteoblasts or endothelial cells, among others. In the present study, electrospun nanocomposite disks of poly-lactic-co-glycolic acid and amorphous calcium phosphate nanoparticles (PLGA/a-CaP) were seeded with human ASCs, and eight disks were stacked in a bioreactor running with normal culture medium (no differentiation supplements). Under continuous perfusion and uniaxial cyclic compression, load-displacement curves as a function of time were assessed. Stiffness and energy dissipation were recorded. Moreover, the stem cell densities in the layers of the piled scaffold were determined, as well as the cells' morphologies and differentiation status (endothelial cell differentiation, chondrogenesis and osteogenesis). While the stiffness of the cell-free constructs increased over time, caused by the transformation of the a-CaP nanoparticles into flake-like apatite, the ASC-seeded constructs showed a constant stiffness. Stem cell density gradients were histologically determined, with a linear increase in the flow direction from the bottom to the top of the 3.5 mm high pile (r2 > 0.95). Cell morphology was influenced by the flow rate, with stem cells becoming more rounded at higher flow rates.
Less than 1% osteogenesis was found upon osteopontin immunostaining at the end of the experiment (9 days), while no endothelial cell differentiation and no chondrogenesis were triggered under these conditions. The ASCs had largely remained in their original pluripotent status within this time frame. In summary, we have fabricated a critical size bone graft based on a biominerizable, bone-biomimetic nanocomposite with preserved stiffness when seeded with human ASCs. The special feature of this bone graft is that ASC densities inside the piled construct vary along a linear gradient, which is a good starting point for tissue-engineering interfaces such as bone-cartilage, where the bone tissue is cell-rich while the cartilage exhibits low cell densities. As such, this tissue-engineered graft may act as a bone-cartilage interface after the corresponding differentiation of the ASCs.
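The reported linear density gradient (r2 > 0.95 over the 3.5 mm pile) amounts to an ordinary least-squares fit of cell density against height. The sketch below shows such a fit on hypothetical illustrative counts, not the study's data:

```python
# Least-squares fit of cell density vs. height in a stacked scaffold.
# The densities are hypothetical illustrative values, not measured data.
heights = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]    # mm, bottom to top
densities = [100, 160, 210, 270, 330, 380, 440, 500]  # cells per mm^2

n = len(heights)
mean_h = sum(heights) / n
mean_d = sum(densities) / n
sxy = sum((h - mean_h) * (d - mean_d) for h, d in zip(heights, densities))
sxx = sum((h - mean_h) ** 2 for h in heights)
slope = sxy / sxx
intercept = mean_d - slope * mean_h

# Coefficient of determination r^2 for the linear model
ss_res = sum((d - (intercept + slope * h)) ** 2
             for h, d in zip(heights, densities))
ss_tot = sum((d - mean_d) ** 2 for d in densities)
r2 = 1 - ss_res / ss_tot
print(f"slope = {slope:.1f} cells/mm^2 per mm, r^2 = {r2:.3f}")
```

A histological count series that yields r2 above 0.95 in such a fit, as reported, supports the claim of a linear gradient along the flow direction.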

Keywords: bioreactor, bone, cartilage, nanocomposite, stem cell gradient

Procedia PDF Downloads 300
999 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implications for Forensic Laboratory Backlog

Authors: Ana Flavia Belchior De Andrade

Abstract:

Forensic laboratories all over the world face a great challenge to overcome waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as an increase in drug complexity, growth in the number of exams requested, and funding cuts that limit laboratories' hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories to keep both quality and response time within an acceptable range. In this paper we analyze how the backlog affects test results and, ultimately, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between 2013 and 2017 were tabulated and the results analyzed and discussed. The number of requested exams increased from 822 in February 2013 to 1,358 in March 2018, an increase of about 65% over five years, or roughly 11% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output has plateaued: with the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most frequently requested exam, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most exams were performed no later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When marijuana analysis is delayed, we observe an increase in inconclusive results using thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, the inconclusive rate increases to 13%.
This is probably because Cannabis plants and preparations undergo oxidation during storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours from our analysts (e.g., GC/MS analysis), and the report is delayed by at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be observed regarding backlog cases; there are also social issues, as legal procedures can be delayed and prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: if an exam takes too long to be performed, an inconclusive result can turn into a negative one, and a criminal can be absolved on the basis of flawed expert evidence.
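The storage-time dependence described above can be summarized as a step function. The thresholds and rates below are the ones reported in this abstract; the helper function wrapping them is a hypothetical illustration, not part of the study:

```python
# Step-wise mapping of storage time to the inconclusive-TLC rate reported
# in the abstract: 2% baseline, 7% beyond 18 months, 13% beyond 30 months.
def inconclusive_rate(storage_months: float) -> float:
    """Return the expected fraction of inconclusive TLC results
    for a marijuana sample stored for the given number of months."""
    if storage_months > 30:
        return 0.13
    if storage_months > 18:
        return 0.07
    return 0.02

# Illustrative lookups at three storage ages
for months in (6, 24, 36):
    print(f"{months:2d} months stored -> {inconclusive_rate(months):.0%} inconclusive")
```

Read this way, the data suggest that keeping turnaround under 18 months would hold the inconclusive rate near its 2% baseline and avoid the extra GC/MS workload.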

Keywords: backlog, forensic laboratory, quality management, accreditation

Procedia PDF Downloads 112
998 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of radioactive-bearing minerals at the thin-section scale remains a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentration to mineralogy. The present study aims to provide a spectroscopic autoradiography method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed with the aid of Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm.
Although the efficiency of energy spectrum reconstruction is low (4.4%) compared to that of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. The measurement will allow the study of the spatial distribution of uranium and its descendants in geomaterials when coupled with scanning electron microscope characterization. The direct application of this dual (energy-position) modality of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
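For reference, the quoted relative resolution can be converted into an absolute energy width. This is a quick arithmetic check, not a figure stated in the abstract:

```python
# Convert the reported relative energy resolution (17.2% FWHM at 4647 keV)
# into an absolute full width at half maximum.
peak_energy_kev = 4647.0   # alpha line quoted in the abstract
relative_fwhm = 0.172      # 17.2% FWHM

absolute_fwhm_kev = relative_fwhm * peak_energy_kev
print(f"FWHM ~ {absolute_fwhm_kev:.0f} keV at {peak_energy_kev:.0f} keV")
```

An absolute width of roughly 800 keV explains why a selection on the linear energy distribution of the tracks, rather than a simple peak fit, is needed to separate neighbouring alpha lines of the 238U chain.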

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 141
997 Strategies for Good Governance during Crisis in Higher Education

Authors: Naziema B. Jappie

Abstract:

Over the last 23 years, leaders in government, political parties and universities have spent much time identifying and discussing gaps in the system that systematically affect students, especially those from historically Black communities. Equity and access to higher education were two critical aspects of achieving the transformation goals, together with a funding model for the previously disadvantaged; free education was not a feasible option for the government. Institutional leaders in higher education face many demands on their time and resources, and crisis management planning, or consideration of being proactive and preventative, is often not a standing agenda item. With many competing priorities in academia, people become complacent, assuming that a crisis may not affect them or that they will cross that bridge when they get to it. Historically, South Africa has proven to be a country of militancy, strikes and protests in most industries, some with disastrous outcomes. Higher education was no different between October 2015 and late 2016, when the #RhodesMustFall protest, which morphed into #FeesMustFall, challenged the establishment, changed the social fabric of universities, and brought the sector to a standstill. Some institutional leaders and administrators were better at handling unexpected, high-consequence situations than others. Crisis leadership is usually viewed as a situation rather than a style of leadership, and is typically characterized by crisis management. The objective of this paper is to show how institutions managed crises of disastrous proportions through the unexpected incidents of 2015/2016. The content draws on the presenter's extensive crisis management experience and includes a timeline of the recent protests.
Responses from interviews with institutional leaders, administrators and students provide first-hand information on their experiences and the outcomes. Students have tasted the power of organized action, and they demand immediate change; if it does not come, the revolt will continue. This paper examines the approaches that guided institutional leaders, their crisis teams and the sector's crisis response. It further considers whether the solutions effectively changed governance in higher education or merely reduced the need for further protests. The conclusion offers an insight into the future of higher education in South Africa from a leadership perspective.

Keywords: crisis, governance, intervention, leadership, strategies, protests

Procedia PDF Downloads 139
996 Assessing Environmental Psychology and Health Awareness in Delhi: A Fundamental Query for Sustainable Urban Living

Authors: Swati Rajput

Abstract:

Environmental psychology explains that the person is a social agent who seeks to extract meaning from the built and natural environment in order to behave in a particular manner. It also reveals people's attachment to, or detachment from, their environment. Assessing the environmental psychology of people is imperative for planners and policy makers in urban planning. The paper investigates the environmental psychology of people living in nine districts of Delhi by calculating and assessing their Environmental Emotional Quotient (EEQ). Emotional quotient deals with the ability to sense, understand, attach and respond according to the power of emotions. An Environmental Emotional Quotient was formulated based upon an inventory administered to respondents. The respondents were asked questions about their views and emotions concerning green spaces, water resource conservation, air quality and overall environmental quality. An effort was made to assess the feeling of belongingness among residents. Their views were assessed on green spaces, reuse and recycling of resources, and their level of participation. They were also assessed on health awareness, considering both the preventive and curative segments of health care. It was found that only 12 percent of the people are emotionally attached to their surroundings in the city. Emotional attachment diminishes as one moves from the house to the housing complex, to neighbouring areas and to the rest of the city; the emotional quotient falls steadily outward from the house, dropping abruptly beyond a radius of 1 km from the residence. The results also show that nearly 54% of respondents accept that there is environmental pollution in their area, and 47.8% consider that diseases occur because of green cover depletion there. The major ailments are airborne diseases such as asthma and bronchitis.
Seasonal diseases prevalent over the last 3-4 years include malaria, dengue and chikungunya. The survey also shows that only 31% of respondents visit government hospitals, while 69% visit private hospitals or small clinics for healthcare services. The paper suggests the need for environmentally sensitive policies and for green insurance in megacities like Delhi.

Keywords: environmental psychology, environmental emotional quotient, preventive health care and curative health care, sustainable living

Procedia PDF Downloads 269
995 Exploring Nature and Pattern of Mentoring Practices: A Study on Mentees' Perspectives

Authors: Nahid Parween Anwar, Sadia Muzaffar Bhutta, Takbir Ali

Abstract:

Mentoring is a structured activity designed to facilitate engagement between mentor and mentee to enhance the mentee's professional capability as an effective teacher. Both mentor and mentee are important elements of the 'mentoring equation' and play important roles in nourishing this dynamic, collaborative and reciprocal relationship. The Cluster-Based Mentoring Programme (CBMP) provides an indigenous example of a project focused on the development of primary school teachers in selected clusters, with a particular emphasis on their classroom practice. A study was designed to examine the efficacy of the CBMP as part of the Strengthening Teacher Education in Pakistan (STEP) project. This paper presents results from one component of that study. As part of the larger study, a cross-sectional survey was employed to explore the nature and patterns of the mentoring process from mentees' perspectives in selected districts of Sindh and Balochistan. This paper focuses on the results related to the question: What are mentees' perceptions of their mentors' support for enhancing their classroom practice during the mentoring process? Data were collected from mentees (n=1148) using a 5-point scale, 'Mentoring for Effective Primary Teaching' (MEPT). The MEPT covers seven factors of mentoring: personal attributes, pedagogical knowledge, modelling, feedback, system requirements, development and use of material, and gender equality. Data were analysed using SPSS 20. Mentees' perceptions of their mentors' practice were summarized using means and standard deviations. Results showed that mean scale scores fell between 3.58 (system requirements) and 4.55 (personal attributes). Mentees perceived the personal attributes of the mentor as the most significant factor (M=4.55) in streamlining the mentoring process by building a good relationship between mentor and mentees.
Furthermore, mentees shared positive views about their mentors' efforts to promote gender impartiality (M=4.54) during workshops and follow-up visits. In contrast, mentees felt that more could have been done by their mentors in sharing knowledge about system requirements (e.g., school policies, the national curriculum). Some aspects of high-scoring factors were also highlighted by mentees as areas for further improvement (e.g., assistance in timetabling, written feedback, encouragement to develop learning corners). Mentees' perceptions of their mentors' practice may assist in determining mentoring needs. The results may prove useful for professional development programmes for mentors and mentees in specific mentoring programmes aimed at enhancing practice in primary classrooms in Pakistan, and they contribute to the body of much-needed knowledge from a developing context.

Keywords: cluster-based mentoring programme, mentoring for effective primary teaching (MEPT), professional development, survey

Procedia PDF Downloads 226
994 Italian Emigration to Germany as Represented in the Films of Francesco Rosi and Toni Trupia

Authors: Patrizia Palumbo

Abstract:

There are only two Italian films dealing with Italian emigration to Germany: I magliari, directed by Francesco Rosi, and Itaker. Vietato agli italiani, directed by Toni Trupia. Consequently, the analysis of these two films is essential to any study of the representation of Italians' experience in Germany, their host country. Rosi's I magliari and Trupia's Itaker. Vietato agli italiani, released in 1959 and 2012 respectively, are both set in the second half of the twentieth century and deal with door-to-door Italian cloth sellers in German cities: con artists marketing rags as fine fabric to exclusively German customers. However, the perspectives of the directors and screenwriters are, if not antithetical, profoundly different. Indeed, between 1959 and 2012, the years in which the two films were released, Italy went from being a country of emigration to a country of both immigration (albeit now temporary) and emigration. The paper, entitled 'Representation of the Italian Emigration to Germany in the Films of Francesco Rosi and Toni Trupia', will therefore analyze the two substantially different historical contingencies in which the two movies were produced and cast light on how the same historical reality, that of Italian cloth sellers in German cities, is portrayed in Rosi's and Trupia's films. In particular, it will show how in both films the female character is the site on which power (or the lack of it) is contested. More precisely, it will highlight how the blonde German woman in Rosi's film and the dark-haired Albanian woman in Trupia's film reflect the changes Italy has undergone in the last fifty years. Finally, this paper will comment on why Italian emigration to Germany has been overlooked by Italian scholars. Although these scholars are all familiar with many of the films directed by Francesco Rosi, one of the auteurs of Italian cinema, no real critical study of I magliari exists.
Rosi's film, it can be argued, may have aroused the uneasiness engendered by all works dealing with facts that evoke shameful and humiliating times. The same is true of Trupia's film: even though Itaker. Vietato agli italiani is set in the sixties, it cannot be separated from the reality of contemporary Italian emigration to Germany and Italy's economic and political crisis. Bringing attention to Rosi's and Trupia's films seems a valid way to rekindle interest in Italian emigration to Germany, a phenomenon that has contributed to the economic, social and cultural history of both Italy and Germany.

Keywords: film, Germany, history, Italian emigration

Procedia PDF Downloads 329
993 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals

Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen

Abstract:

Background: Computer science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres house a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have included related competencies, and courses on new technologies are offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning with a group of graduate nurses through official training within a Master's Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document in which the subject chosen by the student had to be a problem in the area of care. Results: The sample comprised 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one a degree in computer engineering. Two students held a nursing specialty obtained through residency and two through equivalent recognition by an exceptional route. Except for the engineer, no participant had previously received training in this area. The entire sample enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area, and attended at least one tutorial; the maximum was three tutorials, totaling one hour. Among the material available for consultation was an example document drawn up to the IEEE Standard on a topic unrelated to care.
The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the document's characteristics and the degree of comprehension shown in elaborating the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical care practice. The average grade was 8.1 out of 10, with a range of 6 to 10. The selected topics rarely coincided among students. Examples of the care areas selected are care plans, family and community health, delivery care, administration, and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of building the technological tools that will impact the care of people, families and communities.

Keywords: care, learning, nursing, technology

Procedia PDF Downloads 131
992 Improving Recovery, Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip, part of Palestine (365 km2, 1.8 million inhabitants), is a semi-arid zone that relies solely on the Coastal Aquifer. The aquifer is the only source of water, and only 5-10% of it is suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Beit Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning; nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C have suffered many delays, forcing a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the SAT system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the RRS. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model that simulates the plume, in order to propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, their proposed locations, and their operating schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW v4.2. The results of the monitored wells were assessed according to their locations relative to the proposed recovery wells (200 m, 500 m and 750 m away from the IBs).
Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) alongside a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating that the plume had expanded to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS under its original design would fail to capture the plume. Many simulations were therefore conducted, leading to three main scenarios that manipulated the starting dates, pumping rates and locations of the recovery wells. Plume expansion and path-lines were extracted from the model to determine how to prevent expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two recovery wells beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 307
991 Money Laundering Risk Assessment in the Banking Institutions: An Experimental Approach

Authors: Yusarina Mat-Isa, Zuraidah Mohd-Sanusi, Mohd-Nizal Haniff, Paul A. Barnes

Abstract:

Given that money laundering has become a prominent concern for banking institutions, it is an obligation for them to adopt a risk-based approach as an integral component of accepted anti-money laundering policies. In doing so, those involved in banking operations are the most critical group of personnel, as they handle the day-to-day operations of the institution and are obligated to form a judgement on the level of impending risk. This requirement extends to all relevant banking staff, such as tellers and customer account representatives, who must identify suspicious customers and escalate cases to the relevant authorities. Banking staff, however, face enormous challenges in distinguishing money launderers from legitimate customers seeking genuine banking transactions. They are mostly educated and trained with the business objective of serving customers in mind, not to be 'detectives with a detective's power of observation'. Despite increasing awareness and training, their competency in assessing money laundering risk is still insufficient. Several gaps prompted this study, including the lack of behavioural perspectives in the assessment of money laundering risk in banking institutions. Using an experimental approach, respondents were randomly assigned, within a controlled setting, to manipulated situations, and their judgement was solicited based on various observations related to those situations. The study suggests that it is imperative that informed judgement be exercised in deciding whether to proceed with the banking services requested by customers. Judgement forms the basis of the banking staff member's opinion on whether a customer poses a money laundering risk.
Failure to exercise good judgement could result in losses and the absorption of unnecessary risk into the banking institution. Although banking institutions have access to automated solutions for assessing money laundering risk, the human factor remains indispensable. Individual staff are the first line of defence, responsible for screening the impending risk of any customer soliciting banking services. Ultimately, the individual's role in money laundering risk assessment cannot be replaced by automated solutions, as human judgement is inimitable.

Keywords: banking institutions, experimental approach, money laundering, risk assessment

Procedia PDF Downloads 252
990 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, so that past data points have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data-generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only dependence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness.
Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching across a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and the modeling of limited understanding and learning behavior in economics.
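The tradeoff described above can be illustrated with a toy Monte Carlo: estimating the current mean of a drifting process from a trailing window of observations, where a longer window reduces noise but increases staleness. The drift model, parameters and window sizes below are illustrative assumptions, not the paper's formal setup:

```python
import random

random.seed(0)

DRIFT = 0.05   # per-step change in the true mean (the non-stationarity)
SIGMA = 1.0    # standard deviation of observation noise
T = 400        # length of the observed history

def mse_of_window(n: int, trials: int = 2000) -> float:
    """Monte Carlo mean squared error of the trailing-window average
    as an estimate of the *current* mean of a linearly drifting process."""
    err2 = 0.0
    for _ in range(trials):
        # Observations at times T-n .. T-1; the true mean at time t is DRIFT*t.
        window = [DRIFT * t + random.gauss(0.0, SIGMA) for t in range(T - n, T)]
        estimate = sum(window) / n
        err2 += (estimate - DRIFT * T) ** 2  # error against the mean at time T
    return err2 / trials

# Small windows are noisy but representative; large windows are precise but stale.
for n in (5, 20, 80, 320):
    print(f"window={n:3d}  MSE={mse_of_window(n):.3f}")
```

Under these assumed parameters, drift-induced bias dominates quickly and short windows win; shrinking DRIFT shifts the optimum toward longer windows, which is exactly the balance the paper formalizes.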

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 137
989 Consolidating a Regime of State Terror: A Historical Analysis of Necropolitics and the Evolution of Policing Practices in California as a Former Colony, Frontier, and Late-Modern Settler Society

Authors: Peyton M. Provenzano

Abstract:

This paper draws primarily upon the framework of necropolitics and presents California as itself a former frontier, colony, and late-modern settler society. The convergence of these successive and overlapping regimes of state terror is actualized in, and traceable through, an analysis of historical and contemporary police practices. At the behest of the Spanish Crown and with the assistance of the Spanish military, the Catholic Church led the original expedition to colonize California. The indigenous populations of California were subjected to brutal practices of confinement and enslavement at the missions. After the annexation of California by the United States, the westernmost territory became an infamous frontier where new settlers established vigilante militias to enact violence against indigenous populations and protect their newly stolen land. Early mining settlements sought to legitimize and fund vigilante violence by wielding the authority of rudimentary democratic structures: white settlers circulated petitions for funding to establish volunteer companies under California's Militia Law for 'protection' against local indigenous populations. The expansive carceral practices of Angelenos at the turn of the 19th century exemplify the way in which California solidified its regime of exclusion as a white settler society. Drawing on recent scholarship that queers the notion of biopower and names police as street-level sovereigns, the police murder of Kayla Moore is understood as the latest manifestation of a carceral regime of exclusion and genocide. Kayla Moore was an African American transgender woman living with a mental health disability who was murdered by Berkeley police responding to a mental health crisis call in 2013. The intersectionality of Kayla's identity made her hyper-vulnerable to state-sanctioned violence.
Kayla was a victim not only of the explicitly racial biopower of the police and the regulatory state power of necropolitics but also of the ‘asphyxia’ that was intended to invisibilize both her life and her murder.

Keywords: asphyxia, biopower, california, carceral state, genocide, necropolitics, police, police violence

Procedia PDF Downloads 127
988 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for design optimization and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model accounts for all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve, and connecting pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. 
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
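As a rough illustration of the kind of performance variable such a model predicts, the COP of a vapor-compression chiller can be sketched from its evaporating and condensing temperatures. The Carnot scaling and the overall efficiency factor `eta` below are textbook assumptions for a quick estimate, not values from the paper's detailed component-by-component model:

```python
def chiller_cop(t_evap_c: float, t_cond_c: float, eta: float = 0.6) -> float:
    """Rough COP estimate for a vapor-compression chiller.

    Scales the ideal (Carnot) refrigeration COP between the evaporating
    and condensing temperatures by an overall cycle efficiency `eta`
    (an illustrative assumption lumping compressor and cycle losses).
    """
    t_evap = t_evap_c + 273.15  # evaporating temperature, K
    t_cond = t_cond_c + 273.15  # condensing temperature, K
    cop_carnot = t_evap / (t_cond - t_evap)
    return eta * cop_carnot

# Example: 5 degC evaporating, 40 degC condensing
print(round(chiller_cop(5.0, 40.0), 2))  # roughly 4.77
```

A detailed model like the one in the paper replaces `eta` with explicit compressor, heat exchanger, and expansion valve submodels, but this first-order estimate shows why reducing the condensing-evaporating temperature lift is the main lever on power consumption.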

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 266
987 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability

Authors: Kim Anema, Matthias Max, Chris Zevenbergen

Abstract:

For any flood risk manager the 'safety paradox' has to be a familiar concept: low probability leads to a sense of safety, which leads to more investments in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. Therefore, it is important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging item on the agenda. Preparedness programs fail for lack of interest and urgency, policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected things, of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of Community Resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire, and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane Floods (AUS) in 2011 and the Dresden Floods (GE) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events was not fuelled by preparedness or proper planning. 
Instead, more important success factors in counteracting social impacts such as demographic changes in neighborhoods and (non-)economic losses were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections, and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a large group of ‘secondary victims’, and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better suited than 'social adaptation' to dealing with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life, which can be invested in before an event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.

Keywords: community resilience, disaster response, social consequences, preparedness

Procedia PDF Downloads 342
986 Failing Regeneration, Displacement, and Continued Consequences on Future Urban Planning Processes in Distressed Neighborhoods in Tehran

Authors: Razieh Rezabeigi Sani, Alireza Farahani, Mahdi Haghi

Abstract:

Displacement, local discontent, and forced exclusion have become prominent parts of urban regeneration activities in the Global North and South. This paper discusses the processes of massive displacement and neighborhood alteration as the consequences of a large-scale political/ideological placemaking project in central Tehran that transformed people's daily lives in surrounding neighborhoods. The conversion of Imam Hussein Square and the connecting 17-Shahrivar Street into a pedestrian plaza in 2016 resulted in adjacent neighborhoods' physical, social, and economic degradation. The project has downgraded the economic and social characteristics of urban life in surrounding neighborhoods, commercialized residential land uses, displaced local people and businesses, and created unprecedented housing modes. This research was conducted in two stages: first, after the project's implementation, between 2017 and 2018, and second, when the street was reopened after local protests in 2021. In the first phase, more than 50 on-site interviews were conducted with planners, managers, and residents about the decision-making processes, design, and project implementation. We find that the project was based on the immediate political objectives and top-down power exertion of the local government in creating exclusive spaces (for religious ceremonies) without considering locals' knowledge, preferences, lifestyles, and everyday interactions. In the follow-up research in 2021, we utilized data gathered in facilitation activities and several meetings and interviews with local inhabitants and businesses to explore, design, and implement initiatives for bottom-up planning in these neighborhoods. The top-down and product-oriented (rather than process-oriented) planning, dependency on municipal financing rather than local partnerships, and lack of public participation proved to have continued effects on local participation. 
The paper concludes that urban regeneration projects must be based on the participation of different private/public actors, sustainable financial resources, and overall social and spatial analysis of the peripheral area before interventions.

Keywords: displacement, urban regeneration, distressed neighborhoods, ideological placemaking, Tehran

Procedia PDF Downloads 93
985 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield, and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time taken for the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water. Error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange, and watermelon. 
Thermal conductivity values of 0.593, 0.598, and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10^-7, 1.086 × 10^-7, and 0.959 × 10^-7 m²/s were obtained for banana, orange, and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well: it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument.
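The line-heat-source reduction described above can be sketched in a few lines: after the early transient, the probe temperature rises linearly with ln(t), and the slope of that line gives the conductivity as k = q′ / (4π · slope). This is a generic sketch of the standard transient line-source method, not the authors' C++ firmware; the 16.33 W/m default is the power input per unit length reported in the abstract.

```python
import math

def line_source_conductivity(times_s, temps_c, q_per_len=16.33):
    """Thermal conductivity (W/(m.K)) by the transient line-heat-source
    method: fit temperature against ln(time) with ordinary least squares,
    then k = q' / (4 * pi * slope).

    times_s, temps_c: readings taken after the early transient.
    q_per_len: heater power per unit length in W/m.
    """
    x = [math.log(t) for t in times_s]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(temps_c) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, temps_c))
             / sum((xi - mean_x) ** 2 for xi in x))
    return q_per_len / (4 * math.pi * slope)
```

For a juice with k ≈ 0.59 W/(m·K) heated at 16.33 W/m, the expected slope is about 2.2 °C per decade of ln(t), which is comfortably resolvable by thermocouples over a 240 s run.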

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 345
984 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we believe that there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce a methodology, namely simulation, to identify the effectiveness of adaptive rowing. For our study, we simulate the arms-only single scull of adaptive rowing. The sculling angle that produces the fastest time over 1000 meters was investigated through simulation modeling. A simulation model of a rowing system was developed using the Matlab software package, based on equations of motion that account for the many variables moving the boat, such as oar length, blade velocity, and sculling style. The boat speed, power, and energy consumption of the system were computed. This simulation model can predict the force acting on the boat. The optimum sculling angle was determined by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle; the output of the model is the boat velocity over 1000 meters. The present study suggests that an optimum sculling angle exists and depends on sculling style. The optimum angles for blade entry and release with respect to the perpendicular through the pin are -57.00 and 22.0 degrees for the first style, -57.00 and 22.0 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation of rowing has been developed and presented. The results suggest that it may be advantageous for rowers to select sculling angles appropriate to their sculling styles. 
The optimum sculling angles of the rower depend on the sculling style of each rower. The findings of this paper can be summarized in three points: 1. There is an optimum sculling angle in the arms-only single scull of adaptive rowing. 2. The optimum sculling angles depend on the sculling styles. 3. Computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust, and timing with the computer simulation will provide the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements during rowing will help both the rower and the coach to conceptualize the components of movement that may previously have been unclear or even undefined.
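A minimal sketch of the kind of search such a simulation performs is shown below. The objective here is deliberately a toy one: propulsive impulse per stroke cycle, taking the along-boat component of a constant blade force as cos θ and charging a fixed recovery-time penalty per stroke. None of these modeling choices or numbers come from the paper's Matlab model, which integrates full equations of motion; the sketch only illustrates why an interior optimum entry/release pair can exist.

```python
import math

def stroke_score(entry_deg: float, release_deg: float,
                 recovery_penalty: float = 1.2) -> float:
    """Toy objective for one stroke: propulsive impulse divided by cycle
    time. With a constant blade force, the along-boat component at oar
    angle theta (measured from the perpendicular through the pin) is
    cos(theta), so the impulse over the drive is sin(release) - sin(entry).
    Cycle time is taken as the swept angle plus a fixed recovery penalty
    (both in radian-equivalent units) -- purely illustrative assumptions.
    """
    entry = math.radians(entry_deg)
    release = math.radians(release_deg)
    impulse = math.sin(release) - math.sin(entry)
    cycle = (release - entry) + recovery_penalty
    return impulse / cycle

# Grid search over entry/release angles, analogous to sweeping sculling
# angles per style in the simulation.
best = max(((e, r, stroke_score(e, r))
            for e in range(-70, -39, 5)
            for r in range(40, 71, 5)),
           key=lambda c: c[2])
print(best[:2])  # an interior optimum, not a grid-boundary one
```

Sweeping a longer arc gains impulse but costs cycle time, so the score peaks at intermediate angles; a faithful model with style-dependent force profiles would shift that peak per style, as the paper's asymmetric optima show.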

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 457
983 Experimental Study of Nucleate Pool Boiling Heat Transfer Characteristics on Laser-Processed Copper Surfaces of Different Patterns

Authors: Luvindran Sugumaran, Mohd Nashrul Mohd Zubir, Kazi Md Salim Newaz, Tuan Zaharinie Tuan Zahari, Suazlan Mt Aznam, Aiman Mohd Halil

Abstract:

With the fast growth of integrated circuits and the trend towards making electronic devices smaller, the heat dissipation load of electronic devices has continued to rise beyond acceptable limits. High heat flux would not only harm the operation and lifetime of the equipment but would also impede the performance gains brought by successive technological updates, with a direct negative impact on the economics and production costs of emerging industries. Hence, in high-tech industries like radar, information and communication, electromagnetic power, and aerospace, the development and implementation of effective heat dissipation technologies are urgently required. Pool boiling is favored over other cooling methods because of its capacity to dissipate a high heat flux at a low wall superheat without the use of mechanical components. Enhancing pool boiling performance by increasing the heat transfer coefficient via surface modification techniques has received a lot of attention. Several surface modification methods are feasible today, but the stability and durability of the modified surface are the highest priority. Laser machining is thus an interesting choice for surface modification due to its low production cost, high scalability, and repeatability. In this study, different patterns of laser-processed copper surfaces are fabricated to investigate the nucleate pool boiling heat transfer performance of distilled water. The investigation showed a significant enhancement in the pool boiling heat transfer performance of the laser-processed surface compared to the reference surface, due to a notable increase in nucleation frequency and nucleation site density. It was discovered that the heat transfer coefficients increased when both the surface area ratio and the peak-to-valley height ratio of the microstructure were raised. 
It is believed that the development of microstructures on the surface as a result of laser processing is the primary factor in the enhancement of heat transfer performance.
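For reference, the heat transfer coefficient compared across such surfaces is conventionally defined in pool boiling work as the imposed heat flux divided by the wall superheat. A minimal sketch follows; the numbers in the example are illustrative, not measurements from this study:

```python
def boiling_htc(q_flux: float, t_surface_c: float, t_sat_c: float = 100.0) -> float:
    """Nucleate pool boiling heat transfer coefficient,
    h = q'' / (T_s - T_sat), in W/(m^2 K).

    q_flux: imposed heat flux in W/m^2. t_sat_c defaults to saturated
    water at atmospheric pressure, as in distilled-water pool boiling tests.
    """
    superheat = t_surface_c - t_sat_c
    if superheat <= 0:
        raise ValueError("surface must be superheated above saturation")
    return q_flux / superheat

# Illustrative: 500 kW/m^2 dissipated at a wall temperature of 110 degC
print(boiling_htc(500e3, 110.0))  # 50000.0 W/(m^2 K)
```

An enhanced surface that carries the same flux at a lower wall superheat therefore reports a higher h, which is exactly the improvement the laser-processed microstructures provide.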

Keywords: heat transfer coefficient, laser processing, micro structured surface, pool boiling

Procedia PDF Downloads 76
982 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, when it was found that food heated instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material’s ability to absorb electromagnetic energy and dissipate it in the form of heat. Many energy-intense industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material used intensively in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study the continuous processing of limestone under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell’s equations, as well as the coupling between heat transfer and the chemical interfaces. In addition, a controller was implemented to optimize the overall heating efficiency and control the numerical model's stability. This was done by continuously matching the cavity impedance and predicting the required energy for the system, avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone. 
The numerical model considered this effect and took advantage of this interaction. The model was demonstrated to be highly unstable when solving non-linear temperature distributions. Limestone has a dielectric loss response that increases with temperature and has low thermal conductivity. For this reason, limestone is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five different scenarios were tested, with material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation for mixing enhancement proved beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, suggesting that the tube has an optimal fill of material. Electric field peak detachment was observed for the case with a 100% fill ratio, explaining its lower efficiency compared to the 80% case. Microwave technology has been demonstrated to be an important ally in the decarbonization of the cement industry.
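The thermal decomposition being modeled, CaCO₃ → CaO + CO₂, is commonly described with first-order Arrhenius kinetics; a minimal sketch is below. The pre-exponential factor and activation energy are illustrative literature-scale values, not the kinetic parameters verified in the paper's chemical model.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def calcination_conversion(t_s: float, temp_k: float,
                           a: float = 2.5e7,    # 1/s, illustrative
                           ea: float = 1.87e5   # J/mol, illustrative
                           ) -> float:
    """Fraction of CaCO3 decomposed after t_s seconds at temp_k,
    assuming first-order kinetics with an Arrhenius rate constant:
    alpha(t) = 1 - exp(-k t),  k = A exp(-Ea / (R T)).
    """
    k = a * math.exp(-ea / (R_GAS * temp_k))
    return 1.0 - math.exp(-k * t_s)
```

The exponential temperature dependence is what makes uniform heating so important here: hot spots decompose much faster, and since limestone's dielectric loss also rises with temperature, the same non-linearity drives the thermal runaway and solver instabilities discussed above.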

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 167
981 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot Orthoses (AFOs) have a high level of dissatisfaction regarding their current AFOs. Studies point to the focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age 64.29 ± 8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable materials, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. 
The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreetness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 148