Search results for: performance marketing
530 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium
Authors: Joanna Cydejko, Paulina Mika
Abstract:
Hypersensitivity reactions are side effects of medications that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); in about 1% of people, the cause of the anaphylactic reaction cannot be identified despite detailed diagnostics. Contrast media are anaphylactic agents of unknown mechanism; hypersensitivity reactions to them can occur through both immunological and non-immunological mechanisms. Symptoms of anaphylaxis occur within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize, or improve the visibility of, anatomical structures. In computed tomography diagnostics, the preparations currently used are derivatives of the tri-iodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., osmolality, viscosity, low chemotoxicity, and high hydrophilicity, influence how well the patient's body tolerates the substance. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present the number and severity of anaphylactic reactions that occurred in patients of all age groups undergoing diagnostic imaging with intravenous administration of contrast agents: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether the agents' different physicochemical properties had an impact on the incidence of acute complications. Adverse reactions are classified according to the severity of the patient's condition and the diagnostic method used. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with a contrast agent. In the analysis period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with contrast medium were performed. The total number of acute complications was 21, of which 17 were complications of iodine-based contrast agents and 5 of gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Currently, contrast agents are considered among the best-tolerated preparations used in medicine. However, like any drug, they can be responsible for adverse reactions resulting from their toxic effects. The increase in the number of imaging tests performed with contrast agents has a direct impact on the number of adverse events associated with their administration. Although the risk of anaphylaxis is low, it should not be marginalized. The growing exposure associated with the mass performance of radiological procedures using contrast agents demands familiarity with the rules of conduct in the event of symptoms of hypersensitivity to these preparations.
Keywords: anaphylactic, contrast medium, diagnostic, medical imaging
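As a rough cross-check of the reported frequencies, the incidence figures above can be reproduced with a few lines of arithmetic; the sketch below (Python, using only the counts quoted in the abstract) is illustrative and not part of the original analysis.

```python
# Acute hypersensitivity reactions per modality, from the counts
# reported in the abstract (Jan-Dec 2022, UCC Gdansk).
ct_exams, mri_exams = 34053, 15279
ct_reactions, mri_reactions = 17, 5
total_reactions = 21  # total acute complications as reported

def incidence_pct(events, procedures):
    """Incidence as a percentage of procedures performed."""
    return 100.0 * events / procedures

print(f"CT  (iodinated):  {incidence_pct(ct_reactions, ct_exams):.3f} %")
print(f"MRI (gadolinium): {incidence_pct(mri_reactions, mri_exams):.3f} %")
print(f"Overall:          {incidence_pct(total_reactions, ct_exams + mri_exams):.3f} %")
# All values fall well below the 0.5 % bound stated in the abstract.
```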
Procedia PDF Downloads 625
529 Implementation of Active Recovery at Immediate, 12 and 24 Hours Post-Training in Young Soccer Players
Authors: C. Villamizar, M. Serrato
Abstract:
In the pursuit of athletic performance, physical training is fundamental; it is determined by the loads imposed on the physiological and musculoskeletal systems of the human body, which are generated by the intensity and duration of exercise. Given the physical demands of both training and competition, an optimal balance between strain and post-effort recovery must be maintained, favoring the process of overcompensation, which aims to restore and raise the energy potential and protein synthesis of the different tissues, allowing muscle function to return to its baseline, or pre-exercise, state. If this recovery process is not performed or not allowed to proceed properly, the result is an increased state of fatigue. Active recovery is one of the strategies implemented in sport for returning to pre-exercise physiological states. However, there are some assumptions regarding adverse effects, such as the possibility of increasing the degradation of muscle glycogen and thus delaying its synthesis. It is therefore necessary to investigate the effects of active recovery applied at different times after the effort. The aim of this study was to determine the effects of active recovery performed at three different times, immediately, at 12 hours, and at 24 hours after the effort, on the biochemical marker creatine kinase (CK) in youth soccer players. A randomized controlled trial with allocation to three groups was performed: A, active recovery immediately after the effort; B, active recovery performed 12 hours after the effort; C, active recovery performed 24 hours after the effort. This study included 27 subjects belonging to a Colombian soccer team of the second division. Vital signs, weight, height, BMI, percentage of muscle mass, percentage of fat mass, and personal and family medical history were recorded. Velocity, explosive force, and blood CK were tested before and after the interventions. The SAFT90 protocol (Soccer-specific Aerobic Field Test) was applied to participants to generate fatigue. CK samples were taken one hour before the fatigue test, one hour after the fatigue protocol, and 48 hours after the initial CK sample. Mean age was 18.5 ± 1.1 years. Improvements in jumping and speed recovery were observed in all three groups (p < 0.05), but no statistically significant differences between groups were observed after recovery. In all participants, there was a significant increase in CK after SAFT90 in all groups (median 103.1-111.1). The CK measurement after 48 hours reflected recovery in all groups; however, group C showed a decline below baseline levels of -55.5 (-96.3/-20.4), which is a significant finding. Other research has shown that CK does not return quickly to baseline, but our study shows that active recovery favors the clearance of CK and that performing recovery 24 hours after the effort generates greater clearance of this biomarker.
Keywords: active recuperation, creatine phosphokinase, post training, young soccer players
Procedia PDF Downloads 160
528 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees
Authors: Alexandru-Ion Marinescu
Abstract:
There exist a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e., “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution
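The paper builds its candidate formulas as C# LINQ expression trees; as a language-neutral illustration of the same idea, the sketch below (Python) represents an expression as a tree of operators and leaves (constants or applicant features), with a pre-order "flattened" view over which crossover swaps subtrees. The node set, depth limit, and decision threshold are illustrative assumptions, not the paper's configuration.

```python
import random

# Operators and terminals for the expression trees (illustrative set).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0}  # protected division
FEATURES = ['age', 'loan_duration', 'income']  # stand-ins for applicant properties

def random_tree(depth=3):
    """Grow a random tree: inner nodes are operators, leaves are features or constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES) if random.random() < 0.5 else round(random.uniform(-5, 5), 2)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, applicant):
    """Evaluate a tree against one applicant's feature dict."""
    if isinstance(tree, list):
        op, left, right = tree
        return OPS[op](evaluate(left, applicant), evaluate(right, applicant))
    return applicant[tree] if isinstance(tree, str) else tree

def subtrees(tree, path=()):
    """Pre-order traversal yielding (path, subtree) pairs: the 'flattened' view."""
    yield path, tree
    if isinstance(tree, list):
        yield from subtrees(tree[1], path + (1,))
        yield from subtrees(tree[2], path + (2,))

def replace(tree, path, new):
    if not path:
        return new
    out = list(tree)
    out[path[0]] = replace(tree[path[0]], path[1:], new)
    return out

def crossover(a, b):
    """Swap a random subtree of b into a random position of a."""
    pa, _ = random.choice(list(subtrees(a)))
    _, sb = random.choice(list(subtrees(b)))
    return replace(a, pa, sb)

applicant = {'age': 41, 'loan_duration': 24, 'income': 3.2}
child = crossover(random_tree(), random_tree())
# Threshold the raw score into the binary GOOD/BAD response variable.
print('GOOD' if evaluate(child, applicant) >= 0.0 else 'BAD')
```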
Procedia PDF Downloads 118
527 Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may result in discomfort for the occupants of buildings, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces, outside of which their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of having the device's efficiency vary with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary in time, which means the damper's efficiency is not optimal at all times. Semiactive friction devices adapt their sliding force, trying to keep the device in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost Semiactive Variable Friction Damper (SAVFD) in reduced scale to reduce vibrations of structures subject to earthquakes. The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor, which is controlled with (3) an Arduino board that acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally in shaking table tests using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure.
Keywords: earthquake response, friction damper, semiactive control, shaking table
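A minimal sketch of the decentralized logic described above, assuming a simple two-state slip/stick law (the abstract does not detail the control algorithms actually tested): each damper decides its own clamping-force command from the measurements of the floors immediately above and below it. The force bounds and the slip criterion are assumed values, not the SAVFD's parameters.

```python
# Decentralized semiactive friction control sketch (illustrative only).
N_MIN, N_MAX = 5.0, 50.0   # servo-adjustable clamping force bounds [N] (assumed)

def clamping_force(drift, drift_velocity):
    """Return the clamping force command for one damper.

    Keep the damper slipping (dissipating energy) while the interstory
    motion is loading it; ease off when drift and velocity oppose each
    other so the device does not lock up.
    """
    if drift * drift_velocity > 0.0:   # story being pushed further: clamp hard
        return N_MAX
    return N_MIN                        # motion reversing: release, stay in slip

# One control step for one damper, fed by the sensors on the adjacent floors:
x_upper, x_lower = 0.012, 0.004        # displacements [m] from the two sensors
v_upper, v_lower = 0.15, 0.05          # velocities [m/s]
cmd = clamping_force(x_upper - x_lower, v_upper - v_lower)
print(f"servo clamping force command: {cmd:.1f} N")
```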
Procedia PDF Downloads 378
526 Managing Climate Change: Vulnerability Reduction or Resilience Building
Authors: Md Kamrul Hassan
Abstract:
Adaptation interventions are the common response to managing the vulnerabilities of climate change. The nature of an adaptation intervention depends on the degree of vulnerability and the capacity of a society. Coping interventions can take the form of hard adaptation, utilising technologies and capital goods like dykes, embankments, and seawalls, and/or soft adaptation, engaging knowledge and information sharing, capacity building, policy and strategy development, and innovation. Hard adaptation is quite capital intensive but provides immediate relief from climate change vulnerabilities. This type of adaptation is not real development, as the investment cannot improve performance; it merely maintains the status quo of a social or ecological system, and it often leads to maladaptation in the long term. Maladaptation creates a two-way loss for a society: the interventions bring further vulnerability on top of the existing vulnerability, and additional investment is needed to deal with the consequences of the interventions. Hard adaptation is popular with vulnerable groups, but it focuses so much on the immediate solution that it often ignores environmental issues and the future risks of climate change. On the other hand, soft adaptation is education-oriented: vulnerable groups learn how to live with climate change impacts. Soft adaptation interventions build the capacity of vulnerable groups through training, innovation, and support, which can enhance the resilience of a system. In consideration of long-term sustainability, soft adaptation can contribute more to resilience than hard adaptation. Taking a developing society as the study context, this study aims to investigate and understand the effectiveness of the adaptation interventions of the coastal community of the Sundarbans mangrove forest in Bangladesh. Applying semi-structured interviews with a range of Sundarbans stakeholders, including community residents, tourism demand- and supply-side stakeholders, and conservation and management agencies (e.g., government, NGOs, and international agencies), together with document analysis, this paper reports several key insights regarding climate change adaptation. Firstly, while adaptation interventions may offer a short- to medium-term solution to climate change vulnerabilities, interventions need to be revised for long-term sustainability. Secondly, soft adaptation offers advantages in terms of resilience in a rapidly changing environment, as it is flexible and dynamic. Thirdly, there is a challenge in communicating with and educating vulnerable groups so that they understand more about the future effects of hard adaptation interventions (and the potential for maladaptation). Fourthly, hard adaptation can be used if the interventions do not degrade the environmental balance and if the investment does not exceed the economic benefit of the interventions. Overall, the goal of an adaptation intervention should be to enhance the resilience of a social or ecological system so that the system can withstand present vulnerabilities and future risks. In order to be sustainable, adaptation interventions should be designed in such a way that they can address the vulnerabilities and risks of climate change over a long-term timeframe.
Keywords: adaptation, climate change, maladaptation, resilience, Sundarbans, sustainability, vulnerability
Procedia PDF Downloads 194
525 Design of Nano-Reinforced Carbon Fiber Reinforced Plastic Wheel for Lightweight Vehicles with Integrated Electrical Hub Motor
Authors: Davide Cocchi, Andrea Zucchelli, Luca Raimondi, Maria Brugo Tommaso
Abstract:
The increasing attention given to the issues of environmental pollution and climate change is strongly stimulating the development of electrically propelled vehicles powered by renewable energy, in particular solar energy. Given the small amount of solar energy that can be stored and subsequently transformed into propulsive energy, it is necessary to develop vehicles with high mechanical, electrical, and aerodynamic efficiencies along with reduced masses. The reduction of mass is of fundamental relevance especially for the unsprung masses, that is, the assembly of those elements that do not undergo a variation of their distance from the ground (wheel, suspension system, hub, upright, braking system). Reducing the unsprung masses is fundamental in decreasing the rolling inertia and improving the drivability, comfort, and performance of the vehicle. This principle applies even more to solar-propelled vehicles equipped with an electric motor connected directly to the wheel hub. In this solution, the electric motor is integrated inside the wheel. Since the electric motor is part of the unsprung masses, the development of compact and lightweight solutions is of fundamental importance. The purpose of this research is the design, development, and optimization of a CFRP 16-inch wheel hub motor for solar propulsion vehicles that can carry up to four people. In addition to maximizing aspects of primary importance such as mass, strength, and stiffness, other innovative constructive aspects were explored. One of the main objectives was to achieve high geometric packing in order to ensure a reduced lateral dimension without reducing the power exerted by the electric motor. In the final solution, it was possible to realize a wheel hub motor assembly contained entirely within the rim width, for a total lateral overall dimension of less than 100 mm. This result was achieved by developing an innovative connection system between the wheel and the rotor with a double purpose: centering and transmission of the driving torque. This solution, with appropriate interlocking noses, allows the transfer of high torques and at the same time guarantees both the centering and the necessary stiffness of the transmission system. Moreover, to avoid delamination in critical areas, evaluated by means of FEM analysis using 3D Hashin damage criteria, electrospun nanofibrous mats have been interleaved between critical CFRP layers. In order to reduce rolling resistance, the rim has been designed to withstand high inflation pressure. Laboratory tests have been performed on the rim using the Digital Image Correlation (DIC) technique. The wheel has been tested for fatigue bending according to E/ECE/324 R124e.
Keywords: composite laminate, delamination, DIC, lightweight vehicle, motor hub wheel, nanofiber
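For reference, a sketch of the 3D Hashin (1980) initiation checks of the kind used in the delamination-critical FEM assessment; the ply strength values are placeholders, and the exact criterion variant implemented in the study may differ.

```python
# Placeholder ply strengths [MPa] -- illustrative, not the laminate's data.
XT, XC = 2000.0, 1200.0   # fiber tension / compression
YT, YC = 60.0, 200.0      # transverse tension / compression
S12, S23 = 90.0, 50.0     # axial / transverse shear

def hashin_3d(s11, s22, s33, s12, s13, s23):
    """Return the Hashin 3D failure indices; a value >= 1 flags initiation."""
    f = {}
    if s11 >= 0:   # fiber tension
        f['fiber_t'] = (s11 / XT)**2 + (s12**2 + s13**2) / S12**2
    else:          # fiber compression
        f['fiber_c'] = (s11 / XC)**2
    st = s22 + s33
    shear = (s23**2 - s22 * s33) / S23**2 + (s12**2 + s13**2) / S12**2
    if st >= 0:    # matrix tension
        f['matrix_t'] = (st / YT)**2 + shear
    else:          # matrix compression
        f['matrix_c'] = ((YC / (2 * S23))**2 - 1) * st / YC + (st / (2 * S23))**2 + shear
    return f

print(hashin_3d(s11=800.0, s22=25.0, s33=5.0, s12=30.0, s13=10.0, s23=8.0))
```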
Procedia PDF Downloads 214
524 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (PFETs and NFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrate a p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, a second annealing step is performed at 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr. The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC original NLD-570). The radio-frequency power of the dry-etching system is set to 100 W and the pressure to 7.5 mTorr. The final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs are presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms and modulation of the generated photocurrent by the back-gate voltage. This work suggests the potential application of mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
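The back-gated field-effect mobility quoted above follows from the standard transfer-curve extraction, μ_FE = (L / (W · C_ox · V_DS)) · g_m with g_m = dI_D/dV_GS. A sketch is given below; the channel dimensions, bias, and transconductance are assumed inputs (chosen here so the result lands near the reported 65.2 cm²V⁻¹s⁻¹), and only the 300 nm SiO₂ gate dielectric is taken from the text.

```python
# Field-effect mobility extraction for a back-gated FET (sketch).
EPS0 = 8.854e-12                       # vacuum permittivity [F/m]
k_SiO2, t_ox = 3.9, 300e-9             # 300 nm SiO2 gate oxide (from the abstract)
C_ox = EPS0 * k_SiO2 / t_ox            # gate capacitance per area [F/m^2]

L, W = 10e-6, 20e-6                    # channel length / width [m] (assumed)
V_DS = 1.0                             # drain-source bias [V] (assumed)
g_m = 1.5e-6                           # peak transconductance dI_D/dV_GS [S] (assumed)

mu = (L / (W * C_ox * V_DS)) * g_m     # [m^2 V^-1 s^-1]
print(f"C_ox = {C_ox*1e-4:.2e} F/cm^2, mu_FE = {mu*1e4:.1f} cm^2 V^-1 s^-1")
```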
Procedia PDF Downloads 104
523 Bank Failures: A Question of Leadership
Authors: Alison L. Miles
Abstract:
Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation, and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during times of crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO, and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, and also to the debate on risk management and regulatory reform in the banking sector. In order to test the first proposition (P1), that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the 65 largest global banks and financial institutions according to the Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, was collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO, and CFO. The results were then compared with leadership models to form a picture of leadership in the sector during the research period. As this produced separate results that needed combining, SPSS Data Editor was used to aggregate the results across the studies using the variables 'leadership style' and 'company financial performance', together with the size of the company. In order to test the proposition (P2) that there was a prevalent leadership style in the banks that failed, and the proposition (P3) that this style differed from that of the banks that did not, further quantitative analysis was carried out on the leadership styles of the chair, CEO, and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included Lehman Bros, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis, and Allied Irish. The findings show that although regulatory reform has been a key mechanism for controlling behavior in the banking sector, the leadership characteristics of those running the board are a key factor. They add weight to the argument that if each crisis is met with the same pattern of popular fury at the financier, increased regulation, and then a return to business as usual, the cycle of failure will always be repeated; viewed through a different lens, new paradigms can be formed and future clashes avoided.
Keywords: banking, financial crisis, leadership, risk
Procedia PDF Downloads 318
522 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation
Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen
Abstract:
Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (sweet spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the sweet spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data was collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that an optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate allowed the bit penetration at each single impact (of the piston on the drill bit) to be identified, as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire and drilling occurs without percussion, the bit propagating the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody. Woody allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston. After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface.
Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling
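A sketch of the impact-identification step described above: with kHz-class sampling, each piston impact shows up as a sharp peak in the bit acceleration record, so impact frequency and per-impact penetration can be read directly from the data. The signal below is synthetic and the detection thresholds are assumptions, not the study's values.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 10_000                          # sampling rate [Hz] (kHz-class, as in the field setup)
t = np.arange(0, 2.0, 1 / fs)        # 2 s record

# Synthetic stand-in for the measured bit acceleration: ~20 Hz impacts + noise.
impact_rate = 20.0
accel = np.random.normal(0, 0.5, t.size)
accel[(np.arange(t.size) % int(fs / impact_rate)) == 0] += 50.0

# Identify individual impacts: tall, well-separated peaks (thresholds assumed).
peaks, _ = find_peaks(accel, height=10.0, distance=int(0.6 * fs / impact_rate))
freq = (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
print(f"detected impact frequency: {freq:.1f} Hz")

# Per-impact penetration: step in bit displacement between consecutive impacts.
rop = 0.02                           # synthetic rate of penetration [m/s]
displacement = rop * t               # stand-in for the measured bit position
per_impact = np.diff(displacement[peaks])
print(f"mean penetration per impact: {per_impact.mean()*1e3:.2f} mm")
```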
Procedia PDF Downloads 89
521 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint
Authors: Juliane Spaak
Abstract:
A life-cycle assessment was performed on seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building's life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and at resilience, using FEMA P-58 and the PACT software. With the increasing interest in zero-carbon targets, significant changes in the building and construction sector are required. Emissions from buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building's embodied carbon. New Zealand's building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity of moving away from constructing code-minimum structures that are designed for life safety but are frequently 'disposable' after a moderate earthquake event, especially in relation to a structure's ability to minimize damage. This means weaker buildings sit as 'carbon liabilities', with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the building sector, as breathing new life into a building's structure is vastly more sustainable than the highest-quality 'green' new builds, which are inherently more carbon-intensive. The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society's desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset's revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and 'sustainable design' has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset's expected lifespan, be they earthquakes, storms, bushfires, fires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.
Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient
Procedia PDF Downloads 73
520 Numerical Simulation on Two Components Particles Flow in Fluidized Bed
Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi
Abstract:
The flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze experimentally. Some bed materials with poor fluidization performance are always fluidized together with a fluidizing medium, and the material and the fluidizing medium differ in many properties, such as density, size, and shape. These factors make the dynamic process more complex and limit experimental research. Numerical simulation is an efficient way to describe the process of gas-solid flow in a fluidized bed. One of the most popular numerical simulation methods is CFD-DEM, i.e., the coupled computational fluid dynamics-discrete element method. In most studies, particle shapes are simplified as spheres. Although sphere-shaped particles keep the particle calculations uncomplicated, the effects of different shapes are disregarded. In practical applications, however, two-component systems in fluidized beds contain both sphere-shaped and non-sphere particles. It is therefore necessary to study the two-component flow of sphere-shaped and non-sphere particles. In this paper, the mixed flow of molding biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach, which was improved to suit the non-sphere particles. The cylinder-shaped particles were constructed differently in the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, meaning the small fictitious particles gather but do not combine with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic (called the fictitious volume fraction), are introduced to modify the drag coefficient β, together with the volume fractions of the cylinder-shaped particles, α_cld, and the sphere-shaped particles, α_sph. In a computational cell, the void fraction ε can then be expressed as ε = 1 − α_cld·α_fic − α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements merge with each other. A soft-sphere model was used to obtain the contact forces between particles, and the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere particles. The model (size = 1 × 0.15 × 0.032 m³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied across the models as 1.0 m/s, 1.5 m/s, and 2.0 m/s. The simulation results were compared with experimental results. The particle movements regularly formed a fountain pattern, and the effect of the superficial gas velocity on the cylinder-shaped particles was stronger than on the sphere-shaped particles. The results show that the present work provides an effective approach to simulating the flow of two-component particle mixtures.
Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow
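A sketch of the modified void-fraction and drag computation described above: the switch between the Ergun equation (dense regime) and the Wen and Yu correlation (dilute regime) is the standard Gidaspow-style formulation, while the paper's own modification enters through the fictitious volume fraction α_fic of the cylinder-shaped particles. All numerical values below are placeholders.

```python
# Interphase drag coefficient sketch for a two-component cell
# (Ergun / Wen & Yu switching, Gidaspow style). Values are placeholders.
rho_g, mu_g = 1.2, 1.8e-5          # gas density [kg/m^3], viscosity [Pa s]
d_p = 0.8e-3                       # sphere (fictitious) particle diameter [m]

def void_fraction(alpha_cld, alpha_fic, alpha_sph):
    """Cell void fraction when cylinders are built from fictitious spheres:
    eps = 1 - alpha_cld * alpha_fic - alpha_sph (as in the abstract)."""
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

def beta(eps, slip):
    """Gas-solid momentum exchange coefficient for slip speed |u_g - u_s|."""
    if eps <= 0.8:                 # dense regime: Ergun equation
        return (150.0 * (1 - eps)**2 * mu_g / (eps * d_p**2)
                + 1.75 * (1 - eps) * rho_g * slip / d_p)
    re = rho_g * d_p * slip * eps / mu_g          # dilute regime: Wen & Yu
    cd = 24.0 / re * (1 + 0.15 * re**0.687) if re < 1000 else 0.44
    return 0.75 * cd * eps * (1 - eps) * rho_g * slip / d_p * eps**-2.65

eps = void_fraction(alpha_cld=0.05, alpha_fic=0.6, alpha_sph=0.25)
print(f"eps = {eps:.3f}, beta = {beta(eps, slip=1.5):.1f} kg/(m^3 s)")
```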
Procedia PDF Downloads 326
519 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks
Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos
Abstract:
Optical communications technology has made enormous and steady progress for several decades, providing a key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article, we explore basic issues of security and reliability for information transfer through the fiber infrastructure. Conspicuously, one potentially enormous source of improvement has been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today's commercial systems (single-mode or multimode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since its initial development, fiber optics has held the advantage in most of these requirements over copper-based and wireless telecommunications solutions. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With the recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has been reduced dramatically. With so many advantages, including cost efficiency, fiber optic systems will continue to replace copper-based communications. This will also lead to an increase in the expertise and the technology needed by intruders to tap into fiber optic networks. As with all technologies before it, fiber optics has been subject to hacking and criminal manipulation. Research into fiber optic security vulnerabilities suggests that not everyone responsible for network security is aware of the different methods that intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cable stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information such as medical records, banking information, driving records, and credit card information, awareness of fiber optic security vulnerabilities is essential and critical. Many articles and studies still suggest that fiber optics is expensive, impractical, and hard to tap; others argue that tapping it is not only easily done but also inexpensive. This paper briefly discusses the history of fiber optics, explains the basics of fiber optic technologies, and then discusses the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a lot of embarrassment, time, and, most importantly, money.
Keywords: in-house networks, fiber optics, security risk, money
Procedia PDF Downloads 420
518 Let’s Work It Out: Effects of a Cooperative Learning Approach on EFL Students’ Motivation and Reading Comprehension
Authors: Shiao-Wei Chu
Abstract:
In order to enhance the ability of their graduates to compete in an increasingly globalized economy, the majority of universities in Taiwan require students to pass Freshman English in order to earn a bachelor's degree. However, many college students show low motivation in English class for several important reasons, including exam-oriented lessons, unengaging classroom activities, a lack of opportunities to use English in authentic contexts, and low levels of confidence in using English. Students' lack of motivation in English classes is evidenced when students doze off, work on assignments from other classes, or use their phones to chat with others, play video games, or watch online shows. Cooperative learning aims to address these problems by encouraging language learners to use the target language to share individual experiences, complete tasks cooperatively, and build a supportive classroom learning community whereby students take responsibility for one another's learning. This study includes approximately 50 student participants in a low-proficiency Freshman English class. Each week, participants will work together in groups of 3 to 4 students to complete various in-class interactive tasks. The instructor will employ a reward system that incentivizes students to be responsible for their own as well as their group mates' learning. The rewards will be based on points that team members earn through formal assessment scores as well as assessment of their participation in weekly in-class discussions. The instructor will record each team's week-by-week improvement; once a team meets or exceeds its own earlier performance, the team's members will each receive a reward from the instructor. This cooperative learning approach aims to stimulate EFL freshmen's learning motivation by creating a supportive, low-pressure learning environment that is meant to build learners' self-confidence. Students will practice all four language skills; however, the present study focuses primarily on the learners' reading comprehension. Data sources include in-class discussion notes, instructor field notes, one-on-one interviews, students' midterm and final written reflections, and reading scores. Triangulation is used to determine themes and concerns, and an instructor-colleague analyzes the qualitative data to build interrater reliability. Findings are presented through the researcher's detailed description. The instructor-researcher has developed this approach in the classroom over several terms, and its apparent success at motivating students inspired this research. The aims of this study are twofold: first, to examine the possible benefits of this cooperative approach in terms of students' learning outcomes; and second, to help other educators adapt a more cooperative approach to their own classrooms.
Keywords: freshman English, cooperative language learning, EFL learners, learning motivation, zone of proximal development
Procedia PDF Downloads 145
517 Coastal Modelling Studies for Jumeirah First Beach Stabilization
Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal
Abstract:
Jumeirah First beach, a segment of coastline 1.5 km long, is one of the most popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2, and La Mer. A comprehensive stabilization scheme, comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour, and beach re-nourishment, was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish a design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations, and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, with wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at 4 offshore locations, with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated against wave records at monitoring stations operated by Dubai Municipality, and the wave transformation matrix approach was validated against nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach. A typical one-year wave time series was transformed to 7 locations in front of the beach to account for the variation in wave conditions, which are affected by adjacent and offshore developments. Equilibrium beach orientations were estimated with LitDrift by finding the beach orientations with null annual littoral transport at the 7 selected locations. The littoral transport calculations were compared with the beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographic surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms.
Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix
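The interpolation step works roughly as follows: each nearshore point stores a precomputed matrix of transformation coefficients (here, the ratio of nearshore to offshore wave height, indexed by offshore height and direction bins), and a long offshore time series is then mapped through that matrix by interpolation instead of re-running the spectral model for every record. The sketch below illustrates the idea with invented bins and coefficients; the actual matrices come from the MIKE21 SW runs.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Offshore bins used to pre-run the spectral wave model (illustrative grids).
hs_bins = np.array([0.5, 1.0, 2.0, 3.0, 4.0])          # offshore Hs [m]
dir_bins = np.array([270.0, 300.0, 330.0, 360.0])      # offshore direction [deg]

# Transformation matrix for ONE nearshore point: nearshore Hs / offshore Hs,
# one value per (Hs, direction) bin. Coefficients here are invented.
coeff = np.array([[0.85, 0.80, 0.70, 0.55],
                  [0.83, 0.78, 0.67, 0.52],
                  [0.80, 0.74, 0.62, 0.48],
                  [0.77, 0.70, 0.58, 0.45],
                  [0.74, 0.66, 0.54, 0.42]])

transform = RegularGridInterpolator((hs_bins, dir_bins), coeff)

# Map a (toy) offshore time series to the nearshore point.
offshore = np.array([[1.4, 315.0],     # [Hs, direction] per record
                     [2.6, 290.0],
                     [0.8, 345.0]])
nearshore_hs = offshore[:, 0] * transform(offshore)
print(nearshore_hs)
```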
Procedia PDF Downloads 263
516 An Analysis of Gender Discrimination and Horizontal Hostility among Working Women in Pakistan
Authors: Nadia Noor, Farida Faisal
Abstract:
Horizontal hostility has been identified as a special type of workplace violence and refers to aggressive behavior inflicted by women on other women due to gender issues, or on minority group members due to minority issues. Many women, while eagerly wanting to succeed and investing vigorous effort to achieve success, harbor negative feelings about other women succeeding in their careers. This phenomenon has been known as horizontal violence, horizontal hostility, lateral violence, indirect aggression, or, in Australian culture, the Tall Poppy Syndrome. A 'tall poppy' refers to a visibly successful individual who attracts envy or hostility due to distinctive characteristics. Horizontal hostility therefore provides a theoretical foundation for examining the fiercer competition among women than among men for their limited access to top-level management positions. In Pakistan, gender discrimination persists due to male dominance in society, and women do not enjoy basic equality rights in all aspects of life; they are oppressed at the social and organizational levels. As the government has been trying to enhance women's participation by providing more employment opportunities, the provision of a peaceful workplace that will enable aspiring women to achieve career success is mandatory. This research study will help in understanding the antecedents, dimensions, and outcomes of the horizontal hostility that hinders the career success of competitive women. The present paper is a review paper, and various forms of horizontal hostility are discussed in detail. Different psychological and organizational-level drivers of horizontal hostility are explored through the literature. Psychological drivers include oppression, lack of empowerment, learned helplessness, and low self-esteem. Organizational-level drivers include the sticky floor, the glass ceiling, toxic work environments, and leadership roles. Horizontal hostility among working women results in psychological and physical outcomes, including stress, low motivation, poor job performance, and intention to leave. The study recommends the provision of a healthy and peaceful work environment that will enable competent women to achieve career success. In this regard, concrete actions and effective steps are required to promote gender equality at the social and organizational levels. The need is to ensure the enforcement of legal frameworks by government agencies in order to provide a healthy working environment for women by reducing harassment and violence against them. Organizations must eradicate the drivers of horizontal hostility and provide women a peaceful work environment, and in order to develop coping skills, training and mentoring must be provided to them.
Keywords: gender discrimination, glass ceiling, horizontal hostility, oppression
Procedia PDF Downloads 134
515 Entrepreneurial Dynamism and Socio-Cultural Context
Authors: Shailaja Thakur
Abstract:
Managerial literature abounds with discussions of business strategies, success stories, and cases of failure, which indicate the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision making to mechanical calculations; his 'dynamism' is gauged simply by the amount of profit he earns, marginalizing any discussion of the means he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED), giving weights to the different moves that the entrepreneur makes during his business journey. Strategies such as changes in product lines, markets, and technology are gauged as very important (weight of 4), while adaptations in terms of technology or raw materials used and upgrades in skill set are given a slightly lesser weight of 3. Use of formal market analysis and diversification into related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers, and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit responses from the entrepreneurs on the various strategies they have employed during the course of their business. Binary as well as graded responses are obtained, weighted, and summed up to give the IED, as sketched below. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. The index has universal applicability but is devoid of the socio-cultural context, which is central to the success and performance of entrepreneurs. We hypothesize that a society that respects risk taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one that has a conducive socio-cultural environment for entrepreneurship. For obtaining an idea of this social acceptability, we put questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated. This index is used to discount the IED of the respondent entrepreneurs from that region/society. The methodology is being tested on samples of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that the entrepreneurs in the tribal milieu may show a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that holds entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even outright hostile environment.
Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs
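A small sketch of how the IED and its socio-cultural discounting could be computed. The weight tiers follow the abstract (4 = very important down to 1 = slightly important), but the item list is abbreviated and the discounting rule (dividing by the milieu's social-acceptability score) is an assumed formulation, since the abstract does not spell out the exact formula.

```python
# Index of Entrepreneurial Dynamism (IED) sketch; items abbreviated.
WEIGHTS = {
    'changed_product_lines': 4, 'changed_markets': 4, 'changed_technology': 4,
    'adapted_technology': 3, 'adapted_raw_materials': 3, 'upgraded_skills': 3,
    'formal_market_analysis': 2, 'related_diversification': 2,
    'first_generation': 1, 'employs_managers': 1, 'plans_to_diversify': 1,
}
MAX_SCORE = 53  # maximum attainable IED, as stated in the abstract

def ied(responses):
    """Sum weighted binary/graded responses (each value in [0, 1])."""
    return sum(WEIGHTS[item] * float(val) for item, val in responses.items())

def discounted_ied(raw_ied, social_acceptability):
    """Assumed discounting rule: divide by the milieu's social-acceptability
    index (in (0, 1]) so identical raw scores count for more where
    entrepreneurship enjoys less social esteem."""
    return raw_ied / social_acceptability

entrepreneur = {'changed_product_lines': 1, 'adapted_technology': 1,
                'formal_market_analysis': 0.5, 'employs_managers': 1}
raw = ied(entrepreneur)
print(f"raw IED = {raw:.1f} / {MAX_SCORE}")
print(f"tribal milieu (acceptability 0.6):     {discounted_ied(raw, 0.6):.1f}")
print(f"mainstream milieu (acceptability 0.9): {discounted_ied(raw, 0.9):.1f}")
```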
Procedia PDF Downloads 258
514 Content Monetization as a Mark of Media Economy Quality
Authors: Bela Lebedeva
Abstract:
The characteristics of the Web as a channel of information dissemination (accessibility and openness, interactivity, and multimedia news) are widening and reach the audience quickly, positively affecting the perception of content but blurring the understanding of journalistic work. As a result, audiences and advertisers continue migrating to the Internet. Moreover, online targeting makes it possible to monetize not only the audience (as is customary for traditional media) but also the content and traffic, and to do so more accurately. As users identify themselves with the qualitative characteristics of the new market, its actors take shape. A conflict of interests lies at the base of the economy of their relations, the problem of a traffic tax being one example. Meanwhile, content monetization also actualizes the fiscal interest of the state. The balance of supply and demand is often violated due to political risks, particularly under state capitalism, populism, and authoritarian methods of governing such social institutions as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has a better possibility for discussion, although the channel could have been much more successful under conditions of unlimited free speech. To avoid state pressure and censorship, its management decided to save at least its online performance by monetizing all of its content for the core audience. The study methodology was primarily based on analysis of journalistic content and on qualitative and quantitative analysis of the audience. Reconstructing the main events and relationships of market actors over the last six years, the researcher has reached some conclusions. First, under content monetization, the capitalization of content quality will always tend toward the quality characteristics of the user, thereby identifying him; vice versa, the user's demand generates high-quality journalism. The second conclusion follows from the first: the growth of technology, information noise, new political challenges, economic volatility, and the change of cultural paradigm all form the paid-content model for an individual user. This model defines him as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created; this feature is an indicator of the market as a self-regulated system. Monetized quality information is less popular than that of public broadcasting services, but this audience is able to make decisions. These very users sustain the niche sectors that have greater potential for technological development, including ways of monetizing content. The third point of the study allows it to be developed within the discourse of the liberalization of the media space. This cultural phenomenon may open opportunities for developing the architecture of social and economic relations both locally and regionally.
Keywords: content monetization, state capitalism, media liberalization, media economy, information quality
Procedia PDF Downloads 248
513 A High Amylose-Content and High-Yielding Elite Line Is Favorable to Cook 'Nanhan' (Semi-Soft Rice) for Nursing Care Food Particularly for Serving Aged Persons
Authors: M. Kamimukai, M. Bhattarai, B. B. Rana, K. Maeda, H. B. Kc, T. Kawano, M. Murai
Abstract:
Most aged people older than 70 have some degree of difficulty in chewing and swallowing. According to the magnitude of this difficulty, gruel, 'nanhan' (semi-soft rice), or ordinary cooked rice is generally served, particularly in sanatoriums and homes for the aged in Japan. Nanhan is a cooked rice used in Japan with a softness intermediate between gruel and ordinary cooked rice; it is boiled with an amount of water intermediate between those used for the other two kinds of cooked rice. In the present study, nanhan was made at the rate of 240 g of water to 100 g of milled rice with an electric rice cooker. Murai developed a high-amylose-content and high-yielding elite line, 'Murai 79'. A sensory eating-quality test was performed on the nanhan and ordinary cooked rice of Murai 79 and the standard variety 'Hinohikari', a representative high eating-quality variety in southern Japan. Panelists (6 to 14 persons) scored each cooked rice on six items, viz. taste, stickiness, hardness, flavor, external appearance, and overall evaluation. Grading (-3 to +3) was performed for each trait, taking the value of the standard variety Hinohikari as 0. Paddy rice produced in a farmer's field in 2013 and 2014 and in an experimental field of Kochi University in 2015 and 2016 was used for the sensory test. According to the results of the sensory eating-quality test for nanhan, Murai 79 is higher in overall evaluation than Hinohikari in all four years. The former was less sticky than the latter in the four years, but it was statistically significantly harder throughout. In external appearance, the former was significantly higher than the latter in the four years. In taste, the former was significantly higher than the latter in 2014, but no significant difference was noticed between them in the other three years. There were no significant differences in flavor throughout the four years. Regarding amylose content, Murai 79 is higher than Hinohikari by 3.7% and 5.7% in 2015 and 2016, respectively. As for protein content, Murai 79 was higher than Hinohikari in 2015 but lower in 2016. Consequently, the nanhan of Murai 79 was harder and less sticky, keeping the shape of its grains, as compared with that of Hinohikari, which may be due to its higher amylose content. Hence, the grains of Murai 79 nanhan may be recognized more easily in the mouth, which could ease the continuous performance of mastication and deglutition, particularly in aged persons. Regarding ordinary cooked rice, Murai 79 was similar to or higher than Hinohikari in both overall evaluation and external appearance, despite its higher hardness and lower stickiness. Additionally, Murai 79 had a brown-rice yield 1.55 times that of Hinohikari, suggesting that it could enable the supply of inexpensive rice for making high-quality nanhan, particularly for aged people in Japan.
Keywords: high-amylose content, high-yielding rice line, nanhan, nursing care food, sensory eating quality test
512 Guests’ Satisfaction and Intention to Revisit Smart Hotels: Qualitative Interviews Approach
Authors: Raymond Chi Fai Si Tou, Jacey Ja Young Choe, Amy Siu Ian So
Abstract:
Smart hotels can be defined as hotels that use an intelligent system, achieved through digitalization and networking, to manage hotel operations and service information. In addition, smart hotels feature high-end designs that integrate information and communication technology with hotel management, fulfilling guests' needs and improving the quality, efficiency, and satisfaction of hotel management. The purpose of this study is to identify factors that may influence guests' satisfaction and intention to revisit smart hotels, based on the service quality measurement of the Lodging Quality Index and the extended UTAUT theory. The Unified Theory of Acceptance and Use of Technology (UTAUT) is adopted as a framework to explain technology acceptance and use. Since smart hotels are technology-based infrastructure hotels, UTAUT theory can serve as the theoretical background for examining guests' acceptance and use after staying in smart hotels. The UTAUT identifies four key drivers of the adoption of information systems: performance expectancy, effort expectancy, social influence, and facilitating conditions. The extended UTAUT modifies the definitions of seven constructs for consideration: the four constructs of the original UTAUT model together with three additional constructs, namely hedonic motivation, price value, and habit. Thus, the seven constructs from the extended UTAUT theory are adopted to understand guests' intention to revisit smart hotels. The service quality model is also adopted and integrated into the framework to understand guests' intention toward smart hotels. Few studies have examined the effect of service quality on guests' satisfaction and intention to revisit smart hotels. In this study, the Lodging Quality Index (LQI) is adopted to measure service quality in smart hotels. The integrated UTAUT theory and service quality model are used because technological applications and services require more than one model to understand the complicated situation of customers' acceptance of new technology. Moreover, an integrated model can provide further insights to explain the relationships among constructs that could not be obtained from a single model. For this research, ten in-depth interviews are planned. In order to confirm the applicability of the proposed framework and gain an overview of the guest experience of smart hotels in the hospitality industry, in-depth interviews with hotel guests and industry practitioners will be conducted. In terms of theoretical contribution, it is expected that the models integrated from the UTAUT theory and service quality will provide new insights into the factors that influence guests' satisfaction and intention to revisit smart hotels. After this study identifies influential factors, smart hotel practitioners can understand which factors significantly influence guests' satisfaction and intention to revisit. In addition, smart hotel practitioners can provide an outstanding guest experience by improving their service quality along the dimensions identified by the service quality measurement. Thus, the study will benefit the sustainability of the smart hotel business.
Keywords: intention to revisit, guest satisfaction, qualitative interviews, smart hotels
511 Fashion Utopias: The Role of Fashion Exhibitions and Fashion Archives to Defining (and Stimulating) Possible Future Fashion Landscapes
Authors: Vittorio Linfante
Abstract:
Utopìa is a term that, since its first appearance in 1516 in Tommaso Moro's work, has taken on different meanings and forms in various fields: social studies, politics, art, creativity, and design. Utopias, however short-lived and apparently impossible, have been able to give shape to the future, laying the foundations for our present and for the future of the next generations. The twentieth century was a historical period crossed by many changes, and it saw the most significant number of utopias, not only social, political, and scientific but also artistic and architectural, in design, communication, and, last but not least, in fashion. Over the years, fashion has been able to interpret various utopian impulses, giving form to the most futuristic visions: from the Manifesto del Vestito by Giacomo Balla, through the functional experiments that led to the Tuta by Thayath and the Varst by Aleksandr Rodčenko and Varvara Stepanova, through the Space Age visions of Rudi Gernreich, Paco Rabanne, and Pierre Cardin, to Archizoom's political actions and their fashion project Vestirsi è facile. These experiments have continued to the present day through the (sometimes) excessive visions of Hussein Chalayan, Alexander McQueen, and Gareth Pugh, or those more anchored to the market (but no less innovative and visionary) by Prada, Chanel, and Raf Simons. If, as Bauman states, we have entered a phase of Retrotopia characterized by the inability to think about new forms of the future, it is necessary, more than ever, to redefine the role of history, its narration, and its mise en scène within the contemporary creative process: a process that increasingly requires in-depth knowledge of the past for the definition of a renewed discourse about design processes, a discourse in which words like archive, exhibition, curating, revival, vintage, and costume take on new meanings. The paper aims to investigate, through case studies, research, and professional projects, the renewed role of curating and preserving fashion artefacts: a renewed role that, in an era of Retrotopia, museums, exhibitions, and archives can (and must) assume to contribute to the definition of new design paradigms, capable of overcoming the traditional categories of revival or costume in favour of a more contemporary "mash-up" approach, in which past and present, craftsmanship and new technologies, revival and experimentation merge seamlessly. In this perspective, dresses (as well as fashion accessories) should be considered not only as finished products but as artefacts capable of speaking about the past while producing entirely new stories. Archives, exhibitions (academic and not), and museums thus become powerful sources of inspiration for fashion: places and projects capable of generating innovation, becoming active protagonists of contemporary fashion design processes.
Keywords: heritage, history, costume and fashion interface, performance, language, design research
510 The Importance of SEEQ in Teaching Evaluation of Undergraduate Engineering Education in India
Authors: Aabha Chaubey, Bani Bhattacharya
Abstract:
Evaluation of the quality of teaching in engineering education in India needs to be conducted on a continuous basis to achieve the best teaching quality in technical education. Quality teaching is an influential factor in technical education, with a large impact on students' learning outcomes. The present study is not exclusively theory-driven; it draws on various specific concepts and constructs in the domain of technical education, including teaching and learning in higher education, teacher effectiveness, and teacher evaluation and performance management in higher education. The Student Evaluation of Education Quality (SEEQ) questionnaire was proposed as one of the instruments for evaluating teaching quality in engineering education. SEEQ is a popular, standardized instrument widely used around the world, with established validity and reliability in educational research. The present study was designed to evaluate teaching quality through SEEQ in the context of technical education in India, including its validity and reliability based on the collected data. The multidimensionality of SEEQ, reflecting every teaching and learning process, makes it well suited to collecting student feedback on the quality of instruction and the instructor. SEEQ comprises nine original constructs, i.e., learning value, teacher enthusiasm, organization, group interaction, individual rapport, breadth of coverage, assessment, assignments, and an overall rating of the particular course and instructor, with a total of 33 items. In the present study, a total of 350 first-year undergraduate students from the Indian Institute of Technology, Kharagpur (IIT Kharagpur, India) participated in the evaluation; they belonged to four different courses in different streams of engineering studies. The validity and reliability of SEEQ were assessed on the basis of the collected data. This required Confirmatory Factor Analysis (CFA), conducted with Analysis of Moment Structures (AMOS), while Cronbach's alpha, computed in SPSS, was used to examine internal consistency. The effectiveness of SEEQ in the CFA was evaluated on the basis of fit indices such as CMIN/df, CFI, GFI, AGFI, and RMSEA. The major findings of this study showed the following fit indices: ChiSq = 993.664, df = 390, ChiSq/df = 2.548, GFI = 0.782, AGFI = 0.736, CFI = 0.848, RMSEA = 0.062, TLI = 0.945, RMR = 0.029, PCLOSE = 0.006. The final analysis of the fit indices indicated positive construct validity and stability, along with high reliability, pointing to internal consistency. Thus, the study suggests the effectiveness of SEEQ as a quality evaluation instrument for the teaching-learning process in engineering education in India. It is therefore expected that continuing this research in engineering education may contribute to the betterment of the quality of technical education in India. It is also expected that this study will provide an empirical and theoretical basis for locating the construct or factor related to teaching that has the greatest impact on the teaching and learning process in a particular course or stream in engineering education.
Keywords: confirmatory factor analysis, engineering education, SEEQ, teaching and learning process
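To make the reliability check concrete, below is a minimal Python sketch of the Cronbach's alpha computation mentioned above; it is an illustration only (the study used SPSS and AMOS), and the simulated 350 x 33 response matrix on a 1-5 scale is an assumption standing in for the real SEEQ data.

```python
# Minimal sketch of Cronbach's alpha for a multi-item instrument such as SEEQ.
# The response matrix below is simulated for illustration; the study's actual
# computation was carried out in SPSS (alpha) and AMOS (CFA fit indices).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]                         # number of items (33 for SEEQ)
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(350, 33))   # 350 students, 33 items, 1-5 scale
print(f"alpha = {cronbach_alpha(responses):.3f}")
# For the CFA side, the rule of thumb applied above is CMIN/df < 3
# (here 993.664 / 390 = 2.548) and RMSEA < 0.08 (here 0.062).
```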
509 Effects of Environmental and Genetic Factors on Growth Performance, Fertility Traits and Milk Yield/Composition in Saanen Goats
Authors: Deniz Dincel, Sena Ardicli, Hale Samli, Mustafa Ogan, Faruk Balci
Abstract:
The aim of the study was to determine the effects of some environmental and genetic factors on growth, fertility traits, and milk yield and composition in Saanen goats. For this purpose, a total of 173 Saanen goats and kids in the Marmara Region of Turkey were investigated for growth, fertility, and milk traits. Fertility parameters (n=70) were evaluated over two years. Milk samples were collected during lactation, and the milk yield and components (n=59) of each goat were calculated. For the CSN3 and AGPAT6 genes, genotypes were determined by PCR-RFLP. Saanen kids (n=86-112) were measured from birth to six months of life, and average live weights at birth, weaning, and the 60th, 90th, 120th, and 180th days were calculated. The effects of maternal age on pregnancy rate (p < 0.05), birth rate (p < 0.05), infertility rate (p < 0.05), single-born kidding (p < 0.001), twinning rate (p < 0.05), triplet rate (p < 0.05), survival rate of kids until weaning (p < 0.05), number of kids per parturition (p < 0.01), and number of kids per mating (p < 0.01) were found significant. The effects of year on birth rate (p < 0.05), abortion rate (p < 0.001), single-born kidding (p < 0.01), survival rate of kids until weaning (p < 0.01), and number of kids per mating (p < 0.01) were found significant for fertility traits. The effect of lactation length on all milk yield parameters (lactation milk, protein, fat, total solid, solid-not-fat, casein, and lactose yield) (p < 0.001) was found significant. The effects of age on all milk yield parameters (p < 0.001), protein rate (p < 0.05), fat rate (p < 0.05), total solid rate (p < 0.01), solid-not-fat rate (p < 0.05), casein rate (p < 0.05), and lactation length (p < 0.01) were also found significant. However, the effect of the AGPAT6 gene on milk yield and composition was not significant in Saanen goats, and the herd was found monomorphic (FF) for the CSN3 gene. The effects of sex on live weights until the 90th day of life (birth, weaning, and 60th-day average weights) were statistically significant (p < 0.001). Maternal age affected only birth weight (p < 0.001). The month of birth significantly affected live weight on all investigated days [birth, 120th, and 180th days (p < 0.05); weaning, 60th, and 90th days (p < 0.001)]. Birth type was significant for the birth (p < 0.001), weaning (p < 0.01), 60th-day (p < 0.01), and 90th-day (p < 0.01) average live weights. As a result, screening other regions of the CSN3 and AGPAT6 genes and investigating their phenotypic associations should be useful to clarify the effects of these target genes. Environmental factors such as maternal age, year, sex, and birth type were significant for some growth, fertility, and milk traits in Saanen goats, so these factors could be considered as selection criteria in dairy goat breeding.
Keywords: fertility, growth, milk yield, Saanen goats
508 Assessment Environmental and Economic of Yerba Mate as a Feed Additive on Feedlot Lamb
Authors: Danny Alexander R. Moreno, Gustavo L. Sartorello, Yuli Andrea P. Bermudez, Richard R. Lobo, Ives Claudio S. Bueno, Augusto H. Gameiro
Abstract:
Meat production is a significant sector of Brazil's economy; however, the agricultural segment has drawn criticism for its negative environmental impacts, which contribute to climate change. It is therefore essential to implement nutritional strategies that can improve the environmental performance of livestock. This research aimed to estimate the environmental impact and profitability of using yerba mate extract (Ilex paraguariensis) as an additive in feedlot lamb diets. Thirty-six castrated male lambs (average weight of 23.90 ± 3.67 kg and average age of 75 days) were randomly assigned to four experimental diets with different inclusion levels of yerba mate extract (0, 1, 2, and 4%) on a dry matter basis. The animals were confined for fifty-three days and fed a 60:40 corn silage-to-concentrate ratio. As an indicator of environmental impact, the carbon footprint (CF) was measured as kg of CO₂ equivalent (CO₂-eq) per kg of body weight produced (BWP). Greenhouse gas (GHG) emissions such as methane (CH₄) generated by enteric fermentation were measured using the sulfur hexafluoride (SF₆) gas tracer technique, while the CH₄ and nitrous oxide (N₂O) emissions generated by feces and urine, and the carbon dioxide (CO₂) emissions generated by concentrate and silage processing, were estimated using the Intergovernmental Panel on Climate Change (IPCC) methodology. To estimate profitability, the gross margin was used, which is the total revenue minus the total cost, the latter composed of the purchase of animals and feed. The boundaries of this study considered only the lamb fattening system. Enteric CH₄ emission from the lambs was the largest source of on-farm GHG emissions (47%-50%), followed by CH₄ and N₂O emissions from manure (10%-20%) and CO₂ emissions from concentrate, silage, and fossil energy (17%-5%). The treatment that generated the least environmental impact was the group with 4% yerba mate extract (YME), which showed a 3% reduction in total GHG emissions relative to the control (1462.5 and 1505.5 kg CO₂-eq, respectively). However, the scenario with 1% YME showed a 7% increase in emissions compared to the control group. Regarding CF, the treatment with 4% YME had the lowest value (4.1 kg CO₂-eq/kg LW) compared with the other groups. Nevertheless, although the 4% YME inclusion scenario showed the lowest CF, its gross margin decreased by 36% compared to the control group (0% YME) due to the cost of YME as a feed additive. The results showed that the extract has potential for use in reducing GHG emissions; however, the cost of implementing this input as a mitigation strategy increased the production cost. It is therefore important to develop policy strategies that help reduce the acquisition costs of inputs that contribute to both the environmental and the economic benefit of the livestock sector.
Keywords: meat production, natural additives, profitability, sheep
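The two summary metrics in the abstract reduce to simple arithmetic, sketched below in Python; the emission totals and the 4.1 kg CO₂-eq/kg figure come from the abstract, while the revenue and cost figures are placeholders for illustration only.

```python
# Minimal sketch (not the authors' model) of carbon footprint and gross margin.
TOTAL_GHG_CONTROL = 1505.5  # kg CO2-eq, 0% YME group (from the abstract)
TOTAL_GHG_YME4 = 1462.5     # kg CO2-eq, 4% YME group (from the abstract)

def carbon_footprint(total_ghg_kg_co2eq: float, body_weight_produced_kg: float) -> float:
    """Carbon footprint: kg CO2-eq per kg of body weight produced."""
    return total_ghg_kg_co2eq / body_weight_produced_kg

def gross_margin(total_revenue: float, animal_cost: float, feed_cost: float) -> float:
    """Gross margin: total revenue minus animal purchase and feed costs."""
    return total_revenue - (animal_cost + feed_cost)

# Reduction in total emissions of the 4% YME group vs. control: about 3%.
print(1 - TOTAL_GHG_YME4 / TOTAL_GHG_CONTROL)   # ~0.029
# Implied body weight produced by the 4% YME group at CF = 4.1 kg CO2-eq/kg LW:
print(TOTAL_GHG_YME4 / 4.1)                     # ~356.7 kg
# Placeholder revenue/cost values, purely illustrative:
print(gross_margin(total_revenue=5200.0, animal_cost=2900.0, feed_cost=1100.0))
```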
507 Effects of a School-based Mindfulness Intervention on Stress Levels and Emotion Regulation of Adolescent Students Enrolled in an Independent School
Authors: Tracie Catlett
Abstract:
Students enrolled in high-achieving schools are under tremendous pressure to perform at high levels inside and outside the classroom. Achievement pressure is a prevalent source of stress for students enrolled in high-achieving schools, and female students, in particular, experience a higher frequency and higher levels of stress compared to their male peers. The practice of mindfulness in a school setting is one tool that has been linked to improved self-regulation of emotions, increased positive emotions, and stress reduction. A mixed-methods randomized pretest-posttest no-treatment control trial evaluated the effects of a six-session mindfulness intervention taught during a regularly scheduled life skills period in an independent day school, one type of high-achieving school. Twenty-nine students in Grades 10 and 11 were randomized by class, where Grade 11 students were in the intervention group (n = 14) and Grade 10 students were in the control group (n = 15). Findings from the study produced mixed results. There was no evidence that the mindfulness program reduced participants' stress levels and negative emotions. In fact, contrary to what was expected, students in the intervention group experienced higher levels of stress and increased negative emotions at posttreatment compared to pretreatment. Neither the within-group nor the between-groups changes in stress level were statistically significant, p > .05, and the between-groups effect size was small, d = .2. The study found evidence that the mindfulness program may have had a positive impact on students' ability to regulate their emotions. The within-group comparison and the between-groups comparison at posttreatment found that students in the mindfulness course experienced statistically significant improvement in their ability to regulate their emotions, p = .009 < .05 and p = .034 < .05, respectively. The between-groups effect size was medium, d = .7, suggesting that the positive differences in emotion regulation difficulties were substantial and have practical implications. The analysis of gender differences, as they relate to stress and emotions, revealed that female students perceive higher levels of stress and report experiencing stress more often than males. There were no gender differences in the sources of stress experienced by the student participants: both females and males experience regular achievement pressures related to their school performance and worry about their future, college acceptance, grades, and parental expectations. Females reported an increased awareness of their stress and actively engaged in practicing mindfulness to manage their stress. Students in the treatment group expressed that the practice of mindfulness resulted in feelings of relaxation and calmness.
Keywords: achievement pressure, adolescents, emotion regulation, emotions, high-achieving schools, independent schools, mindfulness, negative affect, positive affect, stress
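Since the abstract interprets its findings through effect sizes, the following is a minimal Python sketch of Cohen's d with a pooled standard deviation; the group sizes match the study (n = 14 and n = 15), but the means and standard deviations are invented placeholders, not the study's data.

```python
# Minimal sketch of Cohen's d (standardized mean difference) between two
# independent groups, using a pooled standard deviation. Values are placeholders.
import math

def cohens_d(mean1: float, mean2: float, sd1: float, sd2: float,
             n1: int, n2: int) -> float:
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative inputs with the study's group sizes (intervention n=14, control n=15):
print(round(cohens_d(3.1, 2.6, 0.70, 0.75, 14, 15), 2))  # ~0.69, a medium effect
```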
506 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly: they are collected without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping. For some years, those errors delayed the widespread use of non-invasive genetic information. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparisons of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets were obtained from the Dryad digital repository. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms were used for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of those methods' pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods. ETLM's different clustering system and error model seem to lead to a more rigorous selection, although its processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently across the datasets, with consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity; heterogeneity here means different capture rates between individuals. In these examples, data homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface, and robustness. The heterogeneity of recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
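To illustrate the kind of error-tolerant matching the compared algorithms perform, here is a minimal Python sketch; it is a toy illustration of the general idea (tolerating a small number of allele mismatches, such as those caused by allelic dropout), not the actual Cervus, Colony, or ETLM implementations.

```python
# Toy sketch of error-tolerant genotype matching: two multilocus genotypes are
# treated as the same individual if they differ at no more than max_mismatch loci.
from itertools import combinations

def mismatching_loci(g1: dict, g2: dict) -> int:
    """Number of shared loci whose (unordered) allele pairs differ."""
    return sum(sorted(g1[locus]) != sorted(g2[locus]) for locus in g1.keys() & g2.keys())

def same_individual(g1: dict, g2: dict, max_mismatch: int = 1) -> bool:
    return mismatching_loci(g1, g2) <= max_mismatch

# Hypothetical scat samples typed at three microsatellite loci:
samples = {
    "scat_01": {"L1": (120, 124), "L2": (88, 90), "L3": (200, 204)},
    "scat_02": {"L1": (120, 124), "L2": (88, 88), "L3": (200, 204)},  # dropout at L2
    "scat_03": {"L1": (118, 126), "L2": (90, 94), "L3": (198, 202)},
}
for a, b in combinations(samples, 2):
    print(a, b, "match:", same_individual(samples[a], samples[b]))
# scat_01/scat_02 match despite the dropout; scat_03 is a distinct individual.
```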
505 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as the gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter always operates with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress was applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rising/falling time of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, and the stress was briefly interrupted to measure the linear and saturation IDS-VGS characteristics. When VGS switches from -5 V to 0 V and VDS = 0 V, devices are under the negative-bias-instability (NBI) condition. Holes are trapped at the interface between the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly with further stress time. However, a different phenomenon is observed when VDS is reduced to -5 V: VTH shifts negatively during stress, and the variation in VTH increases with time, unlike the behavior at VDS = 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier-stress (HCS) degradation over time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region is a major contributor to the high density of interface traps, which greatly influences device reliability. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
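The stress waveform described above is fully specified by a handful of parameters; the short Python sketch below (an illustration, not the authors' test-bench code) reproduces the idealized VGS trapezoid and shows that the 1000 s stress corresponds to 10⁸ switching cycles.

```python
# Idealized sketch of the dynamic gate-stress waveform: 100 kHz square wave on
# VGS from -5 V to 0 V, 50% duty ratio, 100 ns rise/fall, applied for 1000 s.
FREQ_HZ = 100e3
PERIOD_S = 1.0 / FREQ_HZ          # 10 us per cycle
DUTY = 0.5
EDGE_S = 100e-9                   # rise/fall time
STRESS_TIME_S = 1000.0

def vgs_at(t: float) -> float:
    """VGS (volts) of the idealized trapezoidal waveform at time t (seconds)."""
    high, low = 0.0, -5.0
    phase = t % PERIOD_S
    t_high = DUTY * PERIOD_S
    if phase < EDGE_S:                       # rising edge: low -> high
        return low + (high - low) * phase / EDGE_S
    if phase < t_high:
        return high
    if phase < t_high + EDGE_S:              # falling edge: high -> low
        return high + (low - high) * (phase - t_high) / EDGE_S
    return low

print(f"{STRESS_TIME_S / PERIOD_S:.0e} switching cycles over the stress")  # 1e+08
```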
504 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit, which serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are distinguished by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. The physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years, and several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
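As a concrete picture of the preprocessing-plus-calibration workflow described above, here is a minimal Python sketch combining a Savitzky-Golay derivative with PLS regression; the synthetic spectra, window settings, and number of latent variables are assumptions for illustration and do not reproduce the study's EMSC step, GILS model, or ensemble.

```python
# Minimal sketch: Savitzky-Golay preprocessing followed by PLS calibration.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3000))            # 400 spectra x 3000 wavenumber points
y = X[:, 500:510].sum(axis=1) + rng.normal(scale=0.1, size=400)  # synthetic property

# Savitzky-Golay: 15-point window, 2nd-order polynomial, first derivative,
# applied along the wavenumber axis to suppress baseline shifts.
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X_sg, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print("held-out R^2:", round(pls.score(X_te, y_te), 3))
```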
503 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and the international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of no more than 5%; spatial and contrast resolution tests must comply with the results obtained at commissioning, otherwise the machine requires service; the result of the image noise test must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and the loaded patient table must not deviate vertically by more than 2 mm during longitudinal travel. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented so as to improve the overall quality of the radiation treatment planning procedure, since the CT image quality used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
Keywords: CT simulator, radiotherapy, quality control, QA programme
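The tolerance limits listed above translate directly into simple pass/fail checks; the Python sketch below is an illustration of that logic (the institution's actual QC software and baselines are not shown here), with measured values invented for the example.

```python
# Illustrative pass/fail checks for three of the QC limits described above.
BASELINE_HU_WATER = 0.0   # assumed commissioning CT number of water (HU)

def check_ct_number(measured_hu: float, tol_hu: float = 5.0) -> bool:
    """CT number accuracy: within +/- 5 HU of the commissioning value."""
    return abs(measured_hu - BASELINE_HU_WATER) <= tol_hu

def check_uniformity(roi_values_hu: list, center_hu: float, tol_hu: float = 10.0) -> bool:
    """Field uniformity: each selected ROI within +/- 10 HU of the central ROI."""
    return all(abs(v - center_hu) <= tol_hu for v in roi_values_hu)

def check_noise(measured_sd: float, baseline_sd: float, tol_frac: float = 0.20) -> bool:
    """Image noise: within 20% of the baseline value."""
    return abs(measured_sd - baseline_sd) <= tol_frac * baseline_sd

print(check_ct_number(3.2))                      # True: within +/- 5 HU
print(check_uniformity([4.0, -6.0, 8.0], 1.0))   # True: all ROIs within +/- 10 HU
print(check_noise(6.3, 5.5))                     # True: |6.3 - 5.5| <= 20% of 5.5
```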
502 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field
Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy
Abstract:
Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used in cooking (for flavor and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), and health (e.g., thyme essential oil is used to treat arthritis, normalize blood pressure, reduce stress on the heart, and relieve chest infections and cough), and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered among the most eminent herbal oils based on their history and medicinal properties, possessing a wide range of activity against different types of bacteria and fungi compared with other oils in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic in high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under a rotating magnetic field (RMF) at f = 25 Hz. A self-constructed reactor (MAP) was used for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) of the essential oils against E. coli were determined. The tested oils were prepared at very low concentrations (1 to 3 drops of essential oil per 3 mL of working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF, compared with unexposed controls, increased the antibacterial efficacy of the tested oils. Extended exposure to the RMF at f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes by thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL of working suspension). A rapid decrease (>3 log) in bacterial numbers was observed with rosemary oil within 100 minutes (at 3 drops of essential oil per 3 mL of working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, it remains to be investigated how bacteria are killed by EOs treated with an electromagnetic field; a possible mechanism, based on altered permeability of the ionic channels in the bacterial cell walls that mediate transport into the cells, was proposed. For further studies, it is proposed to examine other types of essential oils and antibiotic-resistant bacteria (ARB), which are a serious concern throughout the world.
Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli
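Because the key quantitative result above is expressed as a log reduction, a one-line computation makes the ">3 log" claim concrete; the sketch below uses hypothetical colony counts, not the study's measured data.

```python
# Log10 reduction in viable count: ">3 log" means the population fell to less
# than one-thousandth of its initial size. Counts below are hypothetical.
import math

def log_reduction(n_initial: float, n_final: float) -> float:
    """Log10 reduction in viable cell count (e.g., CFU/mL)."""
    return math.log10(n_initial / n_final)

print(round(log_reduction(1e7, 5e3), 2))  # 3.3 -> a ">3 log" reduction
```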
501 Outcomes-Based Qualification Design and Vocational Subject Literacies: How Compositional Fallacy Short-Changes School-Leavers’ Literacy Development
Authors: Rose Veitch
Abstract:
Learning outcomes-based qualifications have been heralded as the means to raise vocational education and training (VET) standards, meet the needs of the changing workforce, and establish equivalence with existing academic qualifications. Characterized by explicit, measurable performance statements and atomistically specified assessment criteria, the outcomes model has been adopted by many VET systems worldwide since its inception in the United Kingdom in the 1980s. Debate to date centers on how the outcomes model treats knowledge. Flaws have been identified in terms of the overemphasis on end-points, the neglect of process, and a failure to treat curricula coherently. However, much of this censure has evaluated the outcomes model from a theoretical perspective; to date, there has been scant empirical research to support these criticisms. Various issues therefore remain unaddressed. This study investigates how the outcomes model impacts the teaching of subject literacies. This is of particular concern for subjects on the academic-vocational boundary such as Business Studies, since many of these students progress to higher education in the United Kingdom. This study also explores the extent to which the outcomes model is compatible with borderline vocational subjects. To fully understand if this qualification model is fit for purpose in the 16-18 year-old phase, it is necessary to investigate how teachers interpret their qualification specifications in terms of curriculum, pedagogy, and assessment. Of particular concern is the nature of the interaction between the outcomes model and teachers’ understandings of their subject-procedural knowledge, and how this affects their capacity to embed literacy into their teaching. This present study is part of a broader doctoral research project which seeks to understand if and how content-area, disciplinary literacy and genre approaches can be adapted to outcomes-based VET qualifications. This qualitative research investigates the ‘what’ and ‘how’ of literacy embedding from the perspective of in-service teacher development in the 16-18 phase of education. Using ethnographic approaches, it is based on fieldwork carried out in one Further Education college in the United Kingdom. Emergent findings suggest that the outcomes model is not fit for purpose in the context of borderline vocational subjects. It is argued that the outcomes model produces inferior qualifications due to compositional fallacy: the sum of a subject’s components does not add up to the whole. Findings indicate that procedural knowledge, largely unspecified by some outcomes-based qualifications, is where subject literacies are situated, and that this often gets lost in ‘delivery’. It seems that the outcomes model provokes an atomistic treatment of knowledge amongst teachers, along with the privileging of propositional knowledge over procedural knowledge. In other words, outcomes-based VET is a hostile environment for subject-literacy embedding. It is hoped that this research will produce useful suggestions for how this problem can be ameliorated and will provide an empirical basis for the potential reforms required to address these issues in vocational education.
Keywords: literacy, outcomes-based, qualification design, vocational education