Search results for: experiential versus material jobs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8193


303 Chemical Modifications of Three Underutilized Vegetable Fibres for Improved Composite Value Addition and Dye Absorption Performance

Authors: Abayomi O. Adetuyi, Jamiu M. Jabar, Samuel O. Afolabi

Abstract:

Vegetable fibres are low-density, biodegradable, non-abrasive fibres with specific properties that are abundantly available in plants. They are classified into three categories depending on the part of the plant from which they are obtained: fruit, bast, and leaf fibres. Since the fourth/fifth millennium B.C., attention has focused on the commonest and most highly utilized of these, cotton fibre, obtained from the fruit of cotton plants (Gossypium spp.) and used to produce the cotton fabric found in every home today. The present study therefore focused on the chemical modification of three underutilized vegetable (fruit) fibres, namely coir fibre (Cocos nucifera), palm kernel fibre, and empty fruit bunch fibre (Elaeis guineensis), for better composite value addition to polyurethane foam and for dye adsorption. The fibres were sourced from their parent plants, identified, cleansed with 2% hot detergent solution (1:100), rinsed in distilled water, and oven-dried to constant weight before being chemically modified by alkali bleaching, mercerization, or acetylation. Alkali bleaching involved refluxing 0.5 g of each fibre material in 100 mL of 2% H2O2 in 25% NaOH solution for 2 h. Mercerization used 5% sodium hydroxide (NaOH) solution for 2 h, and acetylation used 10% acetic acid-acetic anhydride 1:1 (v/v) (CH3COOH/(CH3CO)2O) solution with concentrated H2SO4 as catalyst for 1 h. All fibres were subsequently washed thoroughly with distilled water and oven-dried at 105 °C for 1 h. The modified fibres were incorporated as composites into polyurethane foam and used in an adsorption study of indigo dye. The first two treatments reduced fibre weight, whereas the acidified acetic anhydride treatment increased it.
All the treated fibres were less hydrophilic and showed better mechanical properties, higher thermal stability, and better adsorption surfaces/capacities than the untreated ones, as confirmed by gravimetric analysis, an Instron universal testing machine, a thermogravimetric analyser, and scanning electron microscopy (SEM), respectively. The morphology of the modified fibres showed smoother surfaces than that of the unmodified fibres. The empty fruit bunch fibre and the coconut coir fibre performed better than the palm kernel fibre as reinforcers for composites and as adsorbents for wastewater treatment. Acetylation and alkali bleaching improved the potential of the fibres more than mercerization. In conclusion, vegetable fibres, especially empty fruit bunch fibre and coconut coir fibre, which are cheap, abundant, and underutilized, can replace the much more costly powdered activated carbon in wastewater treatment and serve as reinforcers in foam.
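The gravimetric confirmation of weight loss and gain mentioned above comes down to a simple percentage calculation. A minimal sketch, with purely illustrative masses (not the study's measurements):

```python
# Percent weight change of a fibre after chemical treatment
# (negative = weight loss, positive = weight gain).
def weight_change_percent(initial_g: float, treated_g: float) -> float:
    return 100.0 * (treated_g - initial_g) / initial_g

# Illustrative masses only -- alkali bleaching and mercerization
# typically remove material, while acetylation adds acetyl groups:
print(weight_change_percent(0.500, 0.462))   # bleached fibre: weight loss
print(weight_change_percent(0.500, 0.547))   # acetylated fibre: weight gain
```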

Keywords: chemical modification, industrial application, value addition, vegetable fibre

Procedia PDF Downloads 331
302 Influence of Temperature and Immersion on the Behavior of a Polymer Composite

Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli

Abstract:

This study presents experimental and theoretical work on a polyphenylene sulfide reinforced with 40 wt% short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to reduce the weight of vehicle parts. The replacement of metallic parts by thermoplastics now extends to under-the-hood parts, near the engine. In this area, parts are subjected to high temperatures and are immersed in cooling liquid. This liquid, composed of water and glycol, can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid-aging effects, in order to enable reliable part design. An experimental campaign of tensile tests was carried out at different temperatures and for various glycol proportions in the cooling liquid, under monotonic and cyclic loadings, on a neat and a reinforced PPS. The results highlighted some of the main physical phenomena occurring under these demanding hydrothermal conditions. The tests showed that temperature and cooling-liquid aging can affect the mechanical behavior of the material in several ways: the more water the cooling liquid contains, the more the mechanical behavior is affected. PPS proved more sensitive to absorption than to the chemical aggressiveness of the cooling liquid, which explains this dominant water sensitivity. Two kinds of behavior were observed: elasto-plastic below the glass transition temperature and visco-pseudo-plastic above it. Viscosity was shown to be the leading phenomenon above the glass transition temperature for the PPS and can also be significant below it, mostly under cyclic conditions and at low stress rates.
Finally, it was observed that loading this composite at high temperatures diminishes the benefit provided by the fibers. A new phenomenological model was then built to account for these experimental observations. This model predicts the evolution of mechanical properties as a function of the loading environment with fewer parameters than previous studies. The presented approach was also shown to describe and predict the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was identified for the PPS, allowing aging effects to be considered within the proposed model. Finally, a limit on the accuracy achievable by any model using this set of data was determined by applying an artificial-intelligence-based model, enabling a comparison between AI-based and phenomenological models.
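As a rough illustration of how such a phenomenological fit is scored, the sketch below fits a simple linear modulus-temperature law and computes an average relative error of the kind quoted in the text; the linear law and all numbers are assumptions for illustration, not the authors' model or data:

```python
import numpy as np

# Illustrative tensile-modulus data vs. temperature; NOT the study's data.
T = np.array([23.0, 60.0, 90.0, 120.0])        # temperature, deg C
E_meas = np.array([12.1, 10.4, 8.9, 7.2])      # modulus, GPa

# Fit a simple linear phenomenological law E(T) = a + b*T
coef = np.polyfit(T, E_meas, 1)                # returns [b, a]
E_pred = np.polyval(coef, T)

# Average relative error of the fit, the accuracy metric quoted in the text
avg_err = float(np.mean(np.abs(E_pred - E_meas) / E_meas)) * 100.0
print(f"average error: {avg_err:.2f}%")
```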

Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical

Procedia PDF Downloads 116
301 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs during oxide growth, where high temperatures cause the surface scale to swell, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry, producing not only wear on machinery but also loss of customer satisfaction, poor surface finish, loss of material, and lost profit. Even though blistering is a highly prevalent issue, much about it remains unknown or poorly understood. The classic iron oxidation system is a complex multiphase system of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and different mobilities in the oxide phases, so oxide morphologies are specific to alloy chemistry. Blister regimes can therefore be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermogravimetric analyser (TGA) with an air velocity of 10 litres min-1 at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, or 4 s. Oxide morphology and blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process.
During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate is then exposed inside the blister and further oxidised in its reducing atmosphere; however, the atmosphere within the blister is highly dependent upon the porosity of the blister crown. The blister crown was found to be consistently between 35 and 40 µm thick for all heating regimes, which supports the theory that the blister inflates first and the oxide then grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, the stresses produced by oxide growth increase with increasing oxide thickness; the incubation time for blister formation is therefore shortened by increasing temperature. In Mode 2, an increase in temperature results in an oxide with high ductility and high porosity, which accommodate the intrinsic stresses from oxide growth. Mode 2 is thus the inverse of Mode 1, and its incubation time increases with temperature. A new phenomenon was also reported whereby, at elevated temperatures above the Mode 2 range, blisters formed exclusively during cooling.
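TGA mass-gain curves of the kind described above are commonly interpreted with the classic parabolic rate law for scale growth, (Δm)² = k_p·t. A minimal sketch of extracting k_p by a zero-intercept least-squares fit, with illustrative data (not the study's measurements):

```python
import numpy as np

# Illustrative TGA mass-gain data (mg/cm^2) vs. oxidation time (s);
# NOT measurements from the study.
t = np.array([60.0, 120.0, 300.0, 600.0])
dm = np.array([0.55, 0.78, 1.22, 1.74])

# Parabolic rate law: (dm)^2 = k_p * t
# Zero-intercept least squares: k_p = sum(y_i * t_i) / sum(t_i^2), y = dm^2
kp = float(np.sum(dm**2 * t) / np.sum(t**2))
print(f"k_p = {kp:.3e} mg^2 cm^-4 s^-1")
```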

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 144
300 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold that there is a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over scalar implicatures that depend on context. In contrast, Relevance Theory regards context as indispensable, because all scalar implicatures have to meet the requirement of relevance in discourse. However, Katsos' experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded violations of utterances with lexical scales as much more severe than those with ad hoc scales. Neither default accounts nor Relevance Theory can fully explain this result, which raises two questions: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? Do participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among target objects in the under-informative utterance? Our Experiment 1 addresses question (1) by replicating Katsos' experiment. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture materials will be converted into written words in DMDX, and the target sentences will be shown word by word to participants in the soundproof room in our lab. Reading times of the target parts, i.e., the words carrying scalar implicatures, will be recorded.
We presume that in the lexical-scale group, a standardized pragmatic mental context helps generate the scalar implicature as soon as the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. In the ad hoc-scale group, however, the scalar implicature can hardly be generated without the support of a fixed mental context for the scale. Whether the new input is informative or not therefore does not matter, and the reading times of the target parts should be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and contextual factors in scalar implicature processing. Based on our experiments, we might assume that no single dominant processing paradigm is plausible on its own. Furthermore, in the processing of scalar implicature, semantic and pragmatic interpretation may interact dynamically in the mind. For lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also allow a possible default or standardized paradigm to override the role of context. The objects in an ad hoc scale, however, are not usually treated as members of a scale in mental context, so lexical-semantic associations of the objects may prevent the pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 323
299 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings

Authors: Nadine Maier, Martin Mensinger, Enea Tallushi

Abstract:

Today, two types of launching bearings are generally used in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may influence the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities that occur during incremental launching, and whether this depends on the type of launching bearing. For this purpose, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses, with the two different types of launching bearings and eccentric load introduction. A numerical model was validated against the experimental results. We are currently evaluating different parameters for both types of launching bearings: load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the mesh size, is taken into account in the numerical calculations of the parametric study. As a geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, so that a GMNIA analysis is performed to determine the load capacity.
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the last converged load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted along a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment of the practically relevant case. The input and output are automated and depend on the given parameters, so we are able to adapt the model to different geometric dimensions and load conditions. The automation is implemented with APDL and a Python code, which allows us to evaluate and compare more parameters faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, permitting a practical assessment of different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction.
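The automated parametric workflow described above can be sketched as a Cartesian product over the study's parameters, with one GMNIA run per combination; the parameter names and values below are illustrative assumptions, and the actual APDL deck generation is only stubbed:

```python
from itertools import product

# Parameter ranges for the buckling study; values are illustrative assumptions.
params = {
    "load_eccentricity_mm": [0, 5, 10],
    "stiffener_spacing_mm": [500, 750],
    "web_thickness_mm": [12, 16],
    "bearing_type": ["sliding rocker", "hydraulic"],
}

# Cartesian product of all parameter values -> one analysis case per tuple.
cases = [dict(zip(params, values)) for values in product(*params.values())]
print(len(cases))  # 3 * 2 * 2 * 2 = 24 analysis cases

for case in cases[:1]:
    # In the real workflow, an APDL input deck would be generated here and
    # the GMNIA load capacity read back from the solver output.
    print(case)
```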

Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling

Procedia PDF Downloads 107
298 Liquid Waste Management in Cluster Development

Authors: Abheyjit Singh, Kulwant Singh

Abstract:

The water table in the earth's crust is gradually being depleted, and it is necessary to conserve water and reduce its scarcity. This can only be done by rainwater harvesting, recycling of water, judicious consumption and utilization of water, and the adoption of suitable treatment measures. Domestic waste is generated in residential areas, commercial settings, and institutions. Waste in general is unwanted and undesirable, yet an inevitable and inherent product of social, economic, and cultural life. In a cluster, a need-based system is formed in which the project is designed for systematic analysis: collecting sewage from the cluster, treating it, and then recycling it for multifarious uses. Liquid waste may consist of sanitary sewage/domestic waste, industrial waste, storm waste, or mixed waste. Sewage contains both suspended and dissolved particles, and the total amount of organic material is related to the strength of the sewage. Untreated domestic sanitary sewage has a BOD (biochemical oxygen demand) of about 200 mg/l and a TSS (total suspended solids) of about 240 mg/l; industrial waste may have BOD and TSS values much higher than those of sanitary sewage. Another type of wastewater impurity is plant nutrients, especially compounds of nitrogen (N) and phosphorus (P): raw sanitary sewage contains approximately 35 mg/l of nitrogen and 10 mg/l of phosphorus. Finally, the pathogen load of the waste is expected to be proportional to the concentration of faecal coliform bacteria; the coliform concentration in raw sanitary sewage is roughly 1 billion per litre. The choice of sewage disposal technique depends on the nature of the soil formation, the availability of land, the quantity of sewage to be disposed of, the degree of treatment required, and the relative cost of the disposal technique.
The adopted Thappar Model (India) comprises the following designed components: a screen chamber, a digestion tank, a skimming tank, a stabilization tank, an oxidation pond, and a water storage pond. The screen chamber removes plastic and other solids. The digestion tank is designed as an anaerobic tank with a retention period of 8 hours. The skimming tank has an outlet kept 1 metre below the surface, maintaining anaerobic conditions at the bottom and also helping to remove organic solids. The stabilization tank is designed as a primary settling tank, the oxidation pond is a facultative pond with a depth of 1.5 metres, and the storage pond is sized as per requirement. The cost of the Thappar model is Rs. 185 lakh per 3,000 to 4,000 population, and the area required is 1.5 acres. The complete structure is lined as required, and the annual maintenance cost is Rs. 5 lakh per year. The project is useful for water conservation and for supplying treated water for irrigation; it decreases BOD, and it prevents further damage to community assets and economic loss to the farming community from inundation. It provides a healthy and clean environment for the community.
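The 8-hour retention period fixes the digestion tank volume once the design flow is known. A minimal sizing sketch, assuming a per-capita sewage flow (the 120 l/capita/day figure and the 4,000 population are assumptions for illustration, not figures from the model documentation):

```python
# Sizing the anaerobic digestion tank from its 8 h retention period.
population = 4000            # upper end of the 3,000-4,000 design range
sewage_lpcd = 120            # litres per capita per day (assumed)
retention_h = 8              # retention period from the design above

daily_flow_m3 = population * sewage_lpcd / 1000.0        # m^3/day
tank_volume_m3 = daily_flow_m3 * retention_h / 24.0      # m^3

print(f"design flow: {daily_flow_m3:.0f} m^3/day")
print(f"digestion tank volume: {tank_volume_m3:.0f} m^3")
```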

Keywords: collection, treatment, utilization, economic

Procedia PDF Downloads 76
297 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures

Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju

Abstract:

Cement mortars and concretes are among the most used construction materials in the world, with global cement production expected to grow to approximately 5 billion tons by 2030. But cement is an energy-intensive material, and the cement industry is responsible for about 7% of the world's CO2 emissions. Natural aggregates, moreover, are non-renewable, exhaustible resources that must be used efficiently. One way to reduce the negative environmental impact is the use of additional hydraulically active materials as a partial substitute for cement in mortars and concretes, and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, in line with EU Directive 2018/851. One of the most effective hydraulically active admixtures is microsilica and, more recently, with technological development at the nanometric scale, nanosilica. Studies in recent years have shown that introducing SiO2 nanoparticles into the cement matrix improves its properties even beyond what microsilica achieves. This is due to the very small size of the nanosilica particles (<100 nm) and their very large specific surface, which accelerates cement hydration and acts as a nucleating agent to generate even more calcium silicate hydrate, densifying and compacting the structure. Cementitious compositions containing recycled concrete aggregates generally show inferior properties compared with those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, the workability of mortars and concretes with RCA decreases, mechanical resistances decrease, and drying shrinkage increases; all of this is caused, in particular, by the old mortar attached to the original aggregate in the RCA, which makes its porosity high and the mixture require more water for preparation.
The present study aims to use micro- and nanosilica to increase the performance of mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating low-level radioactive waste (LLW), and a structural concrete composition, class C30/37, with the combination of exposure classes XC4+XF1 and slump class S4. The mortar was made with 100% recycled aggregate in the 0-5 mm fraction, and in the case of the concrete, 30% recycled aggregate was used for the 4-8 and 8-16 mm fractions, according to EN 206, Annex E. The recycled aggregate was obtained from a concrete made specially for this study, which after 28 days was crushed with a Retsch jaw crusher and further separated by sieving into granulometric fractions. Cement was partially replaced, progressively, in the mortar composition with microsilica (3, 6, 9, 12, 15 wt%), nanosilica (0.75, 1.5, 2.25 wt%), and mixtures of micro- and nanosilica. The silica combination found optimal with respect to mechanical resistance was later also used in the concrete composition. For the chosen cementitious compositions, the influence of micro- and/or nanosilica on the fresh-state properties (workability, rheological characteristics) and hardened-state properties (mechanical resistance, water absorption, freeze-thaw resistance, etc.) is highlighted.
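The progressive cement replacement translates directly into replaced masses per cubic metre; a minimal sketch assuming an illustrative cement content of 400 kg/m3 (the dosages come from the text, the cement content does not):

```python
# Mass of cement replaced by micro/nanosilica at the dosages studied.
# The 400 kg/m^3 cement content is an illustrative assumption.
cement_kg_m3 = 400.0
micro_pct = [3, 6, 9, 12, 15]      # wt% microsilica (from the text)
nano_pct = [0.75, 1.5, 2.25]       # wt% nanosilica (from the text)

micro_kg = [cement_kg_m3 * p / 100 for p in micro_pct]
nano_kg = [cement_kg_m3 * p / 100 for p in nano_pct]
print(micro_kg)  # [12.0, 24.0, 36.0, 48.0, 60.0]
print(nano_kg)   # [3.0, 6.0, 9.0]
```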

Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability

Procedia PDF Downloads 68
296 Characteristics of the Mortars Obtained by Radioactive Recycled Sand

Authors: Claudiu Mazilu, Ion Robu, Radu Deju

Abstract:

At the end of 2011, 124 power reactors worldwide had been shut down: 16 fully decommissioned, 50 in a decommissioning process, 49 in safe enclosure mode, 3 entombed, and 6 with a decommissioning strategy not yet specified. The radioactive concrete waste that will be generated from the dismantled structures of the VVR-S nuclear research reactor at Magurele (e.g., the biological shield of the reactor core and the hot cells) amounts to an estimated 70 tons. Until now, solid low-level radioactive waste (LLW) was pre-placed in containers and cemented with mortar made from cement and natural fine aggregates, providing a container fill ratio of approximately 50 vol% for concrete. This paper presents an innovative technology in which the radioactive concrete is crushed and a mortar made from recycled radioactive sand, cement, water, and a superplasticizer is poured into a container holding pre-placed radioactive rubble for cementation. The result is a radioactive waste package in which the degree of filling with radioactive waste increases substantially. The tests were carried out on non-radioactive material because radioactive concrete was not available in time. Waste concrete with a maximum size of 350 mm was crushed in a first stage with a Liebherr jaw crusher set to a nominal size of 50 mm. The crushed concrete below 50 mm was then sieved to obtain the fraction useful for pre-placement, 10 to 50 mm.
The >50 mm screening residue from the primary crushing was crushed in a second stage to below 2.5 mm, in order to produce the recycled fine aggregate (sand) for the filler mortar meeting the proposed technical specifications, using crushers with different working principles: a Retsch BB 100 jaw crusher and a Buffalo Shuttle WA-12-H hammer crusher. A series of characteristics of the recycled concrete aggregates was determined by predefined class (granulosity, granule shape, water absorption, behaviour in the Los Angeles test, content of attached mortar, etc.), mostly in comparison with the characteristics of natural aggregates. Various mortar recipes were tested in order to identify those that meet the proposed specification (flow time: 16-50 s, no bleeding, minimum compressive strength of 30 N/mm2 after 28 days, proportion of recycled sand in the mortar: min. 900 kg/m3) and allow the highest fill ratio for the mortar. To optimize the mortars, the following compositional factors were varied: aggregate nature, water/cement (W/C) ratio, sand/cement (S/C) ratio, and the nature and proportion of the additive. To confirm the results obtained at small scale, an attempt was made to fill the mortar into a container simulating the final storage drums. The measured mortar fill ratio (98.9%) was compared with the laboratory results and the targets set in the proposed specification. Although the fill ratio obtained on the mock-up is 0.8 vol% lower than that obtained in the laboratory tests (99.7%), the result meets the specification criteria.
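The fill ratios compared above follow from a simple volume ratio; a minimal sketch in which the absolute volumes are illustrative assumptions, while the 98.9% and 99.7% figures come from the text:

```python
# Fill ratio of the mortar: fraction of the void volume (container
# minus pre-placed rubble) actually filled by the poured mortar.
def fill_ratio_percent(void_volume_l: float, mortar_volume_l: float) -> float:
    return 100.0 * mortar_volume_l / void_volume_l

# Illustrative volumes (litres); only the percentages match the text.
print(f"{fill_ratio_percent(100.0, 98.9):.1f}%")  # mock-up drum
print(f"{fill_ratio_percent(100.0, 99.7):.1f}%")  # laboratory test
```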

Keywords: characteristics, radioactive recycled concrete aggregate, mortars, fill ratio

Procedia PDF Downloads 194
295 Impact of Blended Learning in Interior Architecture Programs in Academia: A Case Study of Arcora Garage Academy from Turkey

Authors: Arzu Firlarer, Duygu Gocmen, Gokhan Uysal

Abstract:

There is currently a growing trend among universities towards blended learning. Blended learning is becoming increasingly important in higher education, with the aims of better accomplishing course learning objectives, meeting students' changing needs, and promoting effective learning in disciplines, such as interior architecture, that have both a theoretical and a practical dimension. However, the practical dimension of the discipline cannot be fully supported in the university environment: during the undergraduate program, the practical training, which two different internship programs attempt to provide, cannot fully meet the requirements of blended learning. The shortcoming of the education program most frequently expressed by our graduates and employers lies in the practical knowledge and skills dimension of the profession. After a series of curriculum meetings, interviews with the professional chambers, and meetings with interior architects, a gap between the theoretical and practical training modules emerged as a problem in all interior architecture departments. We believe this gap can be closed by a new education model formed through university-industry cooperation within the concept of blended learning. In this context, theoretical and applied knowledge can be accumulated by creating industry-supported educational environments at the university. In the application process of the interior architecture discipline, competence in materials and technique can only be achieved through cooperation with industry and the participation of students in production/manufacturing processes as observers and practitioners. Wood manufacturing is an important part of interior architecture applications: wood production is a sustainable structural process in which production details, material knowledge, and process details can be observed most effectively.
From this point of view, after students receive theoretical training on wooden materials, wood applications, and production processes, their practical training in production planning is supported by active participation in, and observation of, those processes. With this blended model, we aimed to develop a training model in which theoretical and practical knowledge related to the production of woodwork is conveyed in a meaningful, lasting way through university-industry cooperation. The project is carried out in Ankara with the Arcora Architecture and Furniture Company and the Başkent University Department of Interior Design, where the university-industry cooperation is realized. Within the scope of the project, each week's lecture is recorded on video and prepared for dissemination through digital media platforms such as Udemy. In this sense, the program serves not only the project participants but also other institutions and people trained and practising in the field of design. Academics from the university and craftsmen with at least 15 years' experience in the wood, metal, and dye sectors are preparing new training reference documents for interior architecture undergraduate programs. These reference documents will serve as a model for the interior architecture departments of other universities and will be used to create an online education module.

Keywords: blended learning, interior design, sustainable training, effective learning

Procedia PDF Downloads 136
294 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC), who underwent imaging in a 3.0 T MRI scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted, and echo-planar imaging sequences for the analysis of volumetric, tractography, and resting-state functional data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric data (cerebellar gray matter density) and functional data (cortico-cerebellar resting-state functional connectivity), respectively. The Brainance suite was used to reconstruct the cerebellar tracts and analyse the tractography data (cortico-cerebellar white matter integrity: fractional anisotropy [FA], axial diffusivity [AD], and radial diffusivity [RD]). Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, and a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared with HC.
The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA, increased diffusivity measures in cortico-cerebellar WM tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, together with the pattern of decreased GM density, decreased WM integrity, and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: In conclusion, our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A natural extension, and a future opportunity for leveraging multimodal neuroimaging data, is the integration of such data into recently applied machine learning approaches to more accurately classify patients and predict their disease course.
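The diffusion metrics compared above (FA, AD, RD) are standard scalar summaries of the diffusion tensor. As a sketch (not part of the Brainance pipeline itself), they can be computed from the tensor's three eigenvalues:

```python
import math

def dti_metrics(l1, l2, l3):
    """Compute FA, AD (axial), RD (radial) and MD (mean diffusivity)
    from the three eigenvalues of the diffusion tensor (l1 >= l2 >= l3)."""
    md = (l1 + l2 + l3) / 3.0                        # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den else 0.0  # fractional anisotropy
    ad = l1                                          # axial diffusivity
    rd = (l2 + l3) / 2.0                             # radial diffusivity
    return fa, ad, rd, md

# Example: typical healthy white-matter eigenvalues (units: 1e-3 mm^2/s)
fa, ad, rd, md = dti_metrics(1.7, 0.3, 0.3)
```

Decreased FA with increased AD/RD, as reported for the patient groups, thus directly reflects a less anisotropic, more freely diffusing water environment in the tract.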

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 140
293 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment

Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen

Abstract:

Following the advent of the Industrial Revolution, there has been a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed inside the petroleum reservoir domain, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within the cement plug. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly for well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas industry has promising potential as a prospective candidate for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component is chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected through a low-permeability reservoir and afterwards to transform into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, polyacrylamide was synthesized by radical polymerization in the reaction flask. 
Subsequently, through the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the next stage. The organic crosslinker glutaraldehyde was employed to facilitate gel formation, which occurred when the polymeric solution was heated within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time depends on the starting concentration and ranges from 4 to 20 hours, allowing it to be tuned to accommodate diverse injection strategies. Moreover, the findings indicate that the gel can form in acidic, high-salinity environments. This property ensures the suitability of the material for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution behaves as a Herschel-Bulkley fluid with a somewhat elevated yield stress prior to gelation.
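The Herschel-Bulkley behavior noted above follows the standard constitutive relation τ = τ_y + K·γ̇ⁿ. A minimal sketch of this model is shown below; the parameter values are illustrative placeholders, not the measured properties of the hydrogel:

```python
def herschel_bulkley_stress(gamma_dot, tau_y, K, n):
    """Shear stress (Pa) of a Herschel-Bulkley fluid:
    tau = tau_y + K * gamma_dot**n; the material flows only once the
    applied stress exceeds the yield stress tau_y."""
    if gamma_dot <= 0:
        return tau_y  # at rest, the stress sits at (or below) the yield value
    return tau_y + K * gamma_dot ** n

# Illustrative (assumed) parameters: yield stress 5 Pa,
# consistency K = 2 Pa*s^n, shear-thinning index n = 0.6
for rate in (0.1, 1.0, 10.0):
    tau = herschel_bulkley_stress(rate, tau_y=5.0, K=2.0, n=0.6)
```

A nonzero yield stress before solidification, as reported, is what lets the solution hold its position in the formation rather than flowing back out under small stresses.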

Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time

Procedia PDF Downloads 77
292 Tracking Patient Pathway for Assessing Public Health and Financial Burden to Community for Pulmonary Tuberculosis: Pointer from Central India

Authors: Ashish Sinha, Pushpend Agrawal

Abstract:

Background: Patients with undiagnosed pulmonary TB act predominantly as reservoirs for its transmission, causing 10-15 secondary infections over the next 1-5 years. Delays in diagnosis and treatment may worsen the disease and increase the risk of death. Identifying the factors responsible for such delays by tracking patient pathways to treatment may help in planning better interventions. The provision of ‘free diagnosis and treatment’ forms the cornerstone of the National Tuberculosis Elimination Programme (NTEP). Out-of-pocket expenditure (OOPE) is defined as the money spent by the patient on TB care outside public health facilities. Free TB care at all health facilities could reduce out-of-pocket expenses to the minimum possible levels. Material and Methods: This cross-sectional study was conducted among 252 randomly selected TB patients from Nov – Oct 2022 through in-depth interviews following informed verbal consent. We documented their journey from initial symptoms until they reached the public health facility, along with their OOPE pertaining to TB care. Results: Total treatment delay was 91±72 days on average (median: 77 days, IQR: 45-104 days), while the isolated patient delay was 31±45 days (median: 15 days, IQR: 0-43 days); diagnostic delay was 57±60 days (median: 42 days, IQR: 14-78 days), and treatment delay was 19±18 days (median: 15 days, IQR: 11-19 days). Patient delay (> 30 days) was significantly associated with ignorance of the classic symptoms of pulmonary TB, self-medication, illiteracy, and middle and lower social class. Diagnostic delay was significantly higher among those who contacted private health facilities, were unaware of signs and symptoms, had more than two consultations, and did not get an appropriate referral for TB care. Most (97%) of the study participants interviewed claimed to have incurred some expenditure. Median total expenses were 6155 (IQR: 2625-15175) rupees. 
More than half, 141 (56%), of the study participants had expenses above 5000 rupees. Median transport expenses were 525 (IQR: 200-1012) rupees; median consultation expenses were 700 (IQR: 200-1600) rupees; median investigation expenses were 1000 (IQR: 0-3025) rupees; and median medicine expenses were 3350 (IQR: 1300-7525) rupees. OOPE for consultation, investigation, and medicine was observed to be significantly higher among patients who ignored classical signs and symptoms of TB, made repeated visits to private health facilities, or practised self-medication. Transport expenses and delays in seeking care were observed to trend upward with OOPE (r = 1). Conclusion: Delays in TB care due to low awareness of the signs and symptoms of TB, poor care-seeking behaviour, lack of proper consultation, and lack of appropriate referrals indicate the areas that need attention from programme managers. Despite a centrally sponsored programme, the financial burden on TB patients remains unacceptably high. OOPE could be reduced by addressing the factors linked to it.
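The median and IQR summaries reported above can be reproduced for any expense sample with a few lines of standard-library Python; the values below are a small hypothetical sample for illustration, not the study data:

```python
import statistics

def median_iqr(expenses):
    """Return the median and the interquartile range bounds (Q1, Q3)
    of a list of out-of-pocket expense values."""
    q1, q2, q3 = statistics.quantiles(expenses, n=4, method="inclusive")
    return q2, (q1, q3)

# Hypothetical sample of total OOPE values in rupees (not study data)
sample = [2625, 4000, 6155, 9000, 15175]
med, (q1, q3) = median_iqr(sample)
```

Reporting median with IQR, as the abstract does, is the appropriate choice here since expense distributions of this kind are strongly right-skewed.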

Keywords: patient pathway, delay, pulmonary tuberculosis, out of pocket expenses

Procedia PDF Downloads 66
291 Digital Subsistence of Cultural Heritage: Digital Media as a New Dimension of Cultural Ecology

Authors: Dan Luo

Abstract:

As climate change exacerbates the exposure of cultural heritage to climatic stressors, scholars pin their hopes on digital technology to help sites avoid surprises. The virtual museum has been regarded as a highly effective technology that enables people to gain an enjoyable visiting experience and immersive information about cultural heritage. The technology clearly reproduces images of tangible cultural heritage, and the aesthetic experience created by new media helps consumers escape from a realistic environment full of uncertainty. A new cultural anchor has appeared outside the cultural sites. This article synthesizes the international literature on the virtual museum, developing CiteSpace diagrams focused on tangible cultural heritage, and identifies the situation that has emerged in the process of responding to climate change: (1) digital collections are distinct cultural assets for the public; (2) the media ecology changes the ways people think about and encounter cultural heritage; (3) cultural heritage may live forever in the digital world. This article draws its practical information on managing cultural heritage in a changing climate from a typical case, the Dunhuang Mogao Grottoes in the far northwest of China, a worldwide cultural heritage site famous for its remarkable and sumptuous murals. This monument is a synthesis of art containing 735 Buddhist temples and was listed by UNESCO as a World Cultural Heritage site. The caves contain some extraordinary examples of Buddhist art spanning a period of 1,000 years; the architectural form, the sculptures in the caves, and the murals on the walls together constitute a wonderful aesthetic experience. Unfortunately, this magnificent treasure cave has been threatened by increasingly frequent dust storms and precipitation. 
The Dunhuang Academy has been using digital technology since the last century to preserve these immovable cultural heritage assets, especially the murals in the caves. Dunhuang culture has since become a new media culture, after the art was introduced to the world audience through exhibitions, VR, video, etc. The paper adopts a qualitative research method, using NVivo software to encode the collected material. The author conducted a survey in Dunhuang City, participating in 10 exhibitions and 20 Dunhuang-themed online salons. In addition, 308 visitors (aged 6-75 years) were interviewed who are fans of the art and have experienced Dunhuang culture online. These interviewees have been exposed to Dunhuang culture through different media, and they are acutely aware of the threat to this cultural heritage. The conclusion is that the unique halo of the cultural heritage was always emphasized, and that digital media breed digital twins of cultural heritage. In addition, digital media make it possible for cultural heritage to reintegrate into the daily life of the masses. Visitors gain the opportunity to imitate the mural figures through enlarged or emphasized images, but lose the perspective needed to understand the whole cultural life. New media construct a new life aesthetics apart from the authorized heritage discourse.

Keywords: cultural ecology, digital twins, life aesthetics, media

Procedia PDF Downloads 81
290 Urban Stratification as a Basis for Analyzing Political Instability: Evidence from Syrian Cities

Authors: Munqeth Othman Agha

Abstract:

The historical formation of urban centres in the eastern Arab world was shaped by rapid urbanization and a sudden transformation from a pre-industrial to a post-industrial economy, coupled with uneven development, informal urban expansion, and constant surges in unemployment and poverty rates. The city was stratified accordingly, as overlapping layers of division and inequality were built on top of each other, creating complex horizontal and vertical divisions on economic, social, political, and ethno-sectarian bases. This was further exacerbated during the neoliberal era, which transformed the city into a sort of dual city inhabited by heterogeneous and often antagonistic social groups. Economic deprivation combined with a growing sense of marginalization and inequality across the city planted the seeds of political instability, which broke out in 2011. Unlike other popular uprisings that occupied central squares, as in Egypt and Tunisia, the Syrian uprising in 2011 took place mainly within inner streets and neighborhood squares, mobilizing more or less along the lines of stratification. This has emphasized the role of micro-urban and social settings in shaping mobilization and resistance tactics, which requires understanding the way the city was stratified and placing it at the center of the city-conflict nexus analysis. This research aims to understand to what extent pre-conflict urban stratification lines played a role in determining the different trajectories of neighborhoods in three cities (Homs, Dara’a and Deir-ez-Zor). The main argument of the paper is that the way the Syrian city has been stratified creates various social groups within the city who have enjoyed different levels of access to life chances, material resources and social status. This determines their relationship with other social groups in the city and, more importantly, their relationship with the state. 
The advent of a political opportunity will be perceived differently across the city’s different social groups according to their perceived interests and threats, which consequently leads to either political mobilization or demobilization. Several factors, including the type of social structures, the built environment, and the state response, determine the ability of social actors to transform the repertoire of contention into collective action, or to transform social actors into political actors. The research uses urban stratification lines as the basis for understanding the different patterns of political upheaval in urban areas, while explaining why neighborhoods with different social and urban environment settings had different abilities and capacities to mobilize, resist state repression, and then descend into military conflict. It particularly traces the transformation from social groups to social actors and political actors by applying the explaining-outcome process-tracing method to depict the causal mechanisms that led to the inclusion or exclusion of different neighborhoods from each stage of the uprising, namely mobilization (M1), response (M2), and control (M3).

Keywords: urban stratification, Syrian conflict, social movement, process tracing, divided city

Procedia PDF Downloads 73
289 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes

Authors: Iris Vural Gursel, Andrea Ramirez

Abstract:

Effective utilization of lignin is an important means of developing economically profitable biorefineries. Current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will, therefore, be needed to carry lignin transformation well beyond combustion for energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes, and a comparison with the more established lignin pyrolysis route, were made. The aim is to provide insights into the potential performance and potential hotspots in order to guide the experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design was developed, followed by process modelling in Aspen using experimental yields. A design capacity of 200 kt/year of lignin feed was chosen, equivalent to a 1 Mt/year lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the product streams defined. For determining the external utility requirement, heat integration was considered, and where possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators. 
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process was found feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was seen to be the most promising.
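The screening indicators used above (ROI and PBP) can be sketched in their simplest undiscounted form. Definitions vary between studies, and the figures below are illustrative only, not taken from the assessed processes:

```python
def roi_and_payback(capex, annual_revenue, annual_opex):
    """Simple (undiscounted) return on investment and payback period,
    used as rough screening indicators for a process concept.
    ROI = annual profit / capital cost; PBP assumes constant cash flow."""
    annual_profit = annual_revenue - annual_opex
    roi = annual_profit / capex   # fraction per year
    pbp = capex / annual_profit   # years
    return roi, pbp

# Illustrative numbers only (euros), not from the assessed routes:
roi, pbp = roi_and_payback(capex=100e6, annual_revenue=50e6, annual_opex=38e6)
```

Because both indicators divide through by a price-dependent annual profit, small changes in assumed product prices shift them strongly, which is consistent with the sensitivity noted for the direct HDO case.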

Keywords: biorefinery, economic assessment, lignin conversion, process design

Procedia PDF Downloads 261
288 A Design Methodology and Tool to Support Ecodesign Implementation in Induction Hobs

Authors: Anna Costanza Russo, Daniele Landi, Michele Germani

Abstract:

Nowadays, the European Ecodesign Directive has emerged as a new approach to integrating environmental concerns into product design and related processes. Ecodesign aims to minimize environmental impacts throughout the product life cycle without compromising performance and cost. The recent Ecodesign Directives require products that are increasingly eco-friendly and eco-efficient while preserving high performance. Measuring performance is very important for producers of electric cooking ranges, hobs, ovens, and grills for household use, and low power consumption represents a powerful selling point, also in terms of ecodesign requirements. The Ecodesign Directive provides a clear framework for the sustainable design of products, and in 2009 it was extended to all energy-related products, i.e., products with an impact on energy consumption during use. The European Regulation establishes ecodesign measures for ovens, hobs, and kitchen hoods for domestic use; the energy efficiency of such a product is a significant environmental aspect of the use phase, which is the most impactful phase of the life cycle. From a user’s point of view, it is important that product parameters and performance are not affected by ecodesign requirements, and the benefits of reducing energy consumption in the use phase should offset any possible environmental impact in the production stage. Accurate measurements of cooking appliance performance are essential to help the industry produce more energy-efficient appliances. The development of eco-driven products requires eco-innovation and ecodesign tools to support sustainability improvement. Ecodesign tools should be practical and focused on specific eco-objectives in order to be widely adopted. The main scope of this paper is the development, implementation, and testing of an innovative tool for the sustainable design of induction hobs. 
In particular, a prototype software tool is developed to simulate the energy performance of induction hobs. The tool is built around a multiphysics model that simulates the energy performance and efficiency of induction hobs starting from the design data. The multiphysics model comprises an electromagnetic simulation and a thermal simulation. The electromagnetic simulation calculates the eddy currents induced in the pot, which lead to Joule heating of the material. The thermal simulation estimates the energy consumption during the operational phase. The Joule heating caused by the eddy currents is thus the output of the electromagnetic simulation and the input of the thermal one. The aims of the paper are the development of integrated tools and methodologies of virtual prototyping in the context of ecodesign. This tool could be a revolutionary instrument in the field of industrial engineering: it gives consideration to the environmental aspects of product design and focuses on the ecodesign of energy-related products in order to achieve a reduced environmental impact.
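The quantity handed from the electromagnetic simulation to the thermal one is the volumetric Joule heating produced by the eddy currents, q = |J|²/σ. A minimal sketch of this coupling term, with assumed values rather than tool-derived ones:

```python
def joule_heating_density(current_density, conductivity):
    """Volumetric Joule heating q = |J|^2 / sigma (W/m^3) produced by
    eddy currents of density J (A/m^2) in a material of electrical
    conductivity sigma (S/m) -- the quantity passed from the
    electromagnetic model to the thermal model."""
    return current_density ** 2 / conductivity

# Illustrative (assumed) values for a steel pot bottom:
J = 2.0e6      # eddy current density, A/m^2
sigma = 1.0e6  # electrical conductivity, S/m
q = joule_heating_density(J, sigma)  # W/m^3
```

In the full tool this field would be evaluated per element of the electromagnetic mesh and applied as a heat source in the thermal simulation.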

Keywords: ecodesign, energy efficiency, induction hobs, virtual prototyping

Procedia PDF Downloads 251
287 Enhancing Archaeological Sites: Interconnecting Physically and Digitally

Authors: Eleni Maistrou, D. Kosmopoulos, Carolina Moretti, Amalia Konidi, Katerina Boulougoura

Abstract:

InterArch is an ongoing research project that has been running since September 2020. It aims to propose the design of a site-based digital application for archaeological sites and outdoor guided tours, supporting virtual and augmented reality technology. The research project is co‐financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE – INNOVATE (project code: Τ2ΕΔΚ-01659). It involves mutual collaboration between academic and cultural institutions and the contribution of an IT applications development company. The research will be completed by July 2023 and will run as a pilot project for the city of Ancient Messene, a place of outstanding natural beauty in the western Peloponnese, which is considered one of the most important archaeological sites in Greece. The applied research project takes an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data, while at the same time embracing storytelling processes through an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. The mingling of the real-world environment with its digital and cultural components by using augmented reality techniques could potentially transform the on-site visit into an immersive multimodal sensory experience. To this end, an extensive spatial analysis along with a detailed evaluation of the existing digital and non-digital archives is proposed in our project, intending to correlate natural landscape morphology (including archaeological material remains and environmental characteristics) with the extensive historical records and cultural digital data. 
On-site research was carried out, during which visitors’ itineraries were monitored and tracked throughout the archaeological visit using GPS locators. The results provide our project with useful insight into the way visitors engage and interact with their surroundings, depending on the sequence of their itineraries and the duration of their stay at each location.
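The dwell-time analysis described above can be sketched as follows. This is a simplified illustration of the kind of GPS processing involved, not the project's actual pipeline, and the coordinates and radius are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dwell_time_s(track, poi_lat, poi_lon, radius_m=30.0):
    """Seconds spent within radius_m of a point of interest, from a
    track of (timestamp_s, lat, lon) fixes sorted by time."""
    total = 0.0
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        if haversine_m(la0, lo0, poi_lat, poi_lon) <= radius_m:
            total += t1 - t0
    return total

# Hypothetical 3-fix track: (timestamp_s, lat, lon)
track = [(0, 37.18, 21.92), (60, 37.18, 21.92), (120, 37.18, 22.50)]
seconds = dwell_time_s(track, 37.18, 21.92)
```

Aggregating such dwell times per monument, together with the order in which monuments are visited, yields the itinerary insight the project reports.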

Keywords: archaeological site, digital space, semantic interpretations, cultural heritage

Procedia PDF Downloads 70
286 Influence of Surface Fault Rupture on Dynamic Behavior of Cantilever Retaining Wall: A Numerical Study

Authors: Partha Sarathi Nayek, Abhiparna Dasgupta, Maheshreddy Gade

Abstract:

Earth retaining structures play a vital role in stabilizing unstable road cuts and slopes in mountainous regions. Retaining structures located in seismically active regions like the Himalayas may experience moderate to severe earthquakes. An earthquake produces two kinds of ground motion: permanent quasi-static displacement (fault rupture) on the fault rupture plane, and transient vibration traveling a long distance. There has been extensive research to understand the dynamic behavior of retaining structures subjected to transient ground motions. However, understanding of the effect of the fault rupture phenomenon on retaining structures is limited. The presence of shallow crustal active faults and natural slopes in the Himalayan region further highlights the need to study the response of retaining structures subjected to fault rupture. In this paper, an attempt has been made to understand the dynamic response of a cantilever retaining wall subjected to surface fault rupture. For this purpose, a 2D finite element model consisting of a retaining wall, backfill and foundation has been developed using Abaqus 6.14 software. The backfill and foundation materials are modeled as per the Mohr-Coulomb failure criterion, and the wall is modeled as linear elastic. In the present study, the interaction between backfill and wall is modeled as ‘surface-surface contact.’ The entire simulation process is divided into three steps: the initial step, the gravity load step, and the fault rupture step. The interaction property between wall and soil and fixed boundary conditions on all boundary elements are applied in the initial step. In the next step, gravity load is applied, and the boundary elements are allowed to move in the vertical direction to incorporate the settlement of soil under the gravity load. In the final step, surface fault rupture is applied to the wall-backfill system. 
For this purpose, the foundation is divided into two blocks, namely the hanging wall block and the footwall block. A finite fault rupture displacement is applied to the hanging wall part, while the footwall bottom boundary is kept fixed. Initially, a numerical analysis is performed considering a reverse fault mechanism with a dip angle of 45°. The simulated results are presented as contour maps of permanent displacements of the wall-backfill system. These maps highlight that surface fault rupture can induce permanent displacement in both the horizontal and vertical directions, which can significantly influence the dynamic behavior of the wall-backfill system. Further, the influence of the fault mechanism, dip angle, and surface fault rupture position is also investigated in this work.
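The displacement imposed on the hanging-wall block resolves into horizontal (heave) and vertical (throw) components set by the dip angle; a minimal sketch of this decomposition:

```python
import math

def fault_offset_components(slip, dip_deg):
    """Horizontal (heave) and vertical (throw) components of a slip
    `slip` applied along a fault plane dipping at `dip_deg` degrees --
    the displacement boundary condition on the hanging-wall block."""
    dip = math.radians(dip_deg)
    horizontal = slip * math.cos(dip)  # heave
    vertical = slip * math.sin(dip)    # throw
    return horizontal, vertical

# A 45-degree dip, as in the initial analysis, splits the slip equally
# between the horizontal and vertical directions:
h, v = fault_offset_components(slip=1.0, dip_deg=45.0)
```

This is why the 45° reverse-fault case produces permanent displacement contours of comparable magnitude in both directions; varying the dip angle shifts the balance between heave and throw.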

Keywords: surface fault rupture, retaining wall, dynamic response, finite element analysis

Procedia PDF Downloads 106
285 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have some interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials that are insulating in the bulk while conducting electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the intersection of the social and medical sciences. The hypothesized system, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation of surface electromagnetic waves at the boundary of photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, with a photonic band gap film replacing the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers sharper surface wave resonance, leading to the potential for greatly enhanced sensitivity. The properties of the photonic band gap material can thus be engineered to operate a sensor at any wavelength and to support a surface wave resonance that ranges up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. 
Lastly, the photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, allowing the molecular design structure to be understood and sensing chip surfaces to be created with different concentrations of DNA sequences in solution, so that the surface mode resonance can be observed and tracked under the influence of processes taking place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective. The result is faster and more accurate monitoring and detection of biomolecules by refractive index sensing, of antibody-antigen reactions, and of DNA or protein binding. Ultimately, the controversial aspect of molecular frictional properties is adjusted so that biological molecules form their unique spatial structure and dynamics, providing an environment for investigating changes due to the pathogenic archival architecture of cell clusters.
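The band gap of a quarter-wave multilayer stack is centred where each layer satisfies n·d = λ₀/4. A minimal design sketch is given below; the refractive indices are assumed for illustration (the abstract does not specify the film materials):

```python
def quarter_wave_thicknesses(center_wavelength_nm, n_high, n_low):
    """Layer thicknesses (nm) of a quarter-wave photonic band gap stack
    centred at center_wavelength_nm: each layer satisfies
    n_i * d_i = lambda0 / 4."""
    return (center_wavelength_nm / (4 * n_high),
            center_wavelength_nm / (4 * n_low))

# Illustrative design with assumed indices (e.g. a TiO2/SiO2-like pair)
# for a band gap centred near 470 nm, the resonance wavelength cited:
d_h, d_l = quarter_wave_thicknesses(470.0, n_high=2.4, n_low=1.46)
```

Because the centre wavelength scales linearly with the layer thicknesses, the same two materials can place the band gap, and hence the surface wave resonance, at essentially any target wavelength.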

Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 94
284 Embracing Diverse Learners: A Way Towards Effective Learning

Authors: Mona Kamel Hassan

Abstract:

Teaching a class of diverse learners poses a great challenge not only for foreign and second language teachers, but also for teachers in other disciplines and for curriculum designers. Thus, to contribute to previous research tackling language diversity, the current paper shares the experience of teaching a reading, writing and vocabulary building course to diverse learners of Arabic as a Foreign Language at the advanced proficiency level. Diversity is represented in students’ motivation, their prior knowledge, their various needs and interests, their level of anxiety, and their different learning styles and skills. While teaching this course, the researcher adopted the universal design for learning (UDL) framework, a means of meeting the various needs of diverse learners. UDL stresses the importance of enabling all of these diverse students to gain skills, knowledge, and enthusiasm for learning through teaching methods that respond to students' individual differences. Accordingly, the educational curriculum developed for this course and the teaching methods employed were modified. First, the researcher made the language curriculum vivid and attractive to inspire students' learning and keep them engaged in the learning process. From the first day, the researcher encouraged all students to suggest topics of interest to them: political, social, cultural, etc. The authentic Arabic texts chosen were those that best met students’ needs, interests, lives, and sociolinguistic issues, together with the linguistic and cultural components. In class, under the researcher’s guidance, students dug into these topics to find solutions to the issues tackled while working with their peers. Second, to provide equal opportunities to demonstrate learning, role-playing was encouraged to give students the opportunity to perform different linguistic tasks and to reflect on and share their diverse interests and cultural backgrounds with their peers. 
Third, to bring UDL into the classroom, students were encouraged to work on interactive, collaborative activities through technology to improve their reading and writing skills and reinforce their mastery of the accumulated vocabulary, idiomatic expressions, and collocations. These interactive, collaborative activities helped facilitate student-student and student-teacher communication and increased comfort in this class of diverse learners. Detailed samples of the educational curriculum and the interactive, collaborative activities developed, accompanied by the teaching methods used with these diverse learners, are presented for illustration. Results revealed that students were responsive to the educational materials developed for this course. They therefore engaged effectively in the learning process and in classroom activities and discussions. They also appreciated their instructor's willingness to differentiate the teaching methods to suit students of diverse background knowledge, learning styles, levels of anxiety, etc. Finally, the researcher believes that sharing this experience of teaching diverse learners will help both language teachers and teachers in other disciplines develop a better understanding of how to meet their students' diverse needs. The results will also pave the way for curriculum designers to develop educational material that meets the needs of diverse learners.

Keywords: teaching, language, diverse, learners

Procedia PDF Downloads 99
283 Parametric Study for Obtaining the Structural Response of Segmental Tunnels in Soft Soil by Using Non-Linear Numerical Models

Authors: Arturo Galván, Jatziri Y. Moreno-Martínez, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado

Abstract:

In recent years, one of the methods most used for the construction of tunnels in soft soil is shield-driven tunneling. The advantage of this construction technique is that it allows the tunnel to be excavated while a primary lining, consisting of precast segments, is placed at the same time. There are joints between segments (longitudinal joints) and joints between rings (circumferential joints). For this reason, this type of construction cannot be considered a continuous structure. These joints influence the rigidity of the segmental lining and therefore its structural response. A parametric study was performed to take into account the effect of different parameters on the structural response of typical segmental tunnels built in soft soil, using non-linear numerical models based on the Finite Element Method implemented in the software package ANSYS v. 11.0. In the first part of this study, two types of numerical models were developed. In the first one, the segments were modeled using beam elements based on Timoshenko beam theory, while the segment joints were modeled using inelastic rotational springs with the constitutive moment-rotation relation proposed by Gladwell. In this way, the mechanical behavior of the longitudinal joints was simulated. The mechanical behavior of the circumferential joints, on the other hand, was simulated with elastic springs, and the support provided by the soil was modeled by means of linear elastic springs. In the second type of model, the segments were modeled with three-dimensional solid elements and the joints with contact elements. In these models, the joint zone is modeled as discontinuous (increasing the computational effort), so a discrete model is obtained.
With these contact elements, the mechanical behavior of the joints is simulated by considering that when a joint is closed, compressive and shear stresses are transmitted but tensile stresses are not, and when a joint is open, no stresses are transmitted. This type of model can detect changes in geometry caused by the relative movement of the elements that form the joints. A comparison between the numerical results of the two types of models was carried out; in this way, the hypotheses adopted in the simplified models were validated. In addition, the numerical models were calibrated against laboratory experimental results from the literature for a typical tunnel built in Europe. In the second part of this work, a parametric study was performed using the simplified models, owing to their lower computational cost compared to the complex models. The parametric study examined the effects of material properties, tunnel geometry, the arrangement of the longitudinal joints, and the coupling of the rings. Finally, it was concluded that the mechanical behavior of the segment and ring joints and the arrangement of the segment joints affect the global behavior of the lining. Likewise, the coupling between rings modifies the structural capacity of the lining.
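The stress-transmission rule described for the contact elements (compression and shear transmitted only when the joint is closed, nothing when it is open) can be sketched as a simple unilateral contact law. This is an illustrative sketch, not the ANSYS contact formulation used in the study; the stiffness and friction values are hypothetical placeholders, not calibrated parameters.

```python
def joint_forces(gap, slip, k_n=2.0e9, k_s=5.0e8, mu=0.5):
    """Unilateral contact law for a segmental-lining joint.

    gap < 0 means the joint is closed (overlap); it then transmits
    compression and shear, with shear capped by a Coulomb friction
    limit. gap >= 0 means the joint is open: no stress transmission.
    k_n, k_s (N/m) and mu are illustrative placeholder values.
    """
    if gap >= 0.0:               # joint open: no transmission at all
        return 0.0, 0.0
    normal = -k_n * gap          # compressive force (positive)
    shear = k_s * slip
    limit = mu * normal          # Coulomb friction cap
    shear = max(-limit, min(limit, shear))
    return normal, shear
```

Because the law is non-smooth (the force jumps between regimes as the joint opens and closes), assembling it into a finite element model requires the iterative non-linear solution strategy the abstract alludes to.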

Keywords: numerical models, parametric study, segmental tunnels, structural response

Procedia PDF Downloads 229
282 Aligning Informatics Study Programs with Occupational and Qualifications Standards

Authors: Patrizia Poscic, Sanja Candrlic, Danijela Jaksic

Abstract:

The University of Rijeka, Department of Informatics participated in the Stand4Info project, co-financed by the European Union, whose main idea is the alignment of study programs with occupational and qualifications standards in the field of informatics. A brief overview of our research methodology, goals, and deliverables is given. Our main research and project objectives were: a) development of occupational standards, qualification standards, and study programs based on the Croatian Qualifications Framework (CROQF), b) improvement of higher education quality in the field of information and communication sciences, c) increasing the employability of students of information and communication technology (ICT) and science, and d) continuous improvement of teachers' competencies in accordance with the principles of the CROQF. The CROQF is a reform instrument in the Republic of Croatia for regulating the system of qualifications at all levels through qualifications standards based on learning outcomes and following the needs of the labor market, individuals, and society. The central elements of the CROQF are learning outcomes: competences acquired by the individual through the learning process and demonstrated afterward. The place of each acquired qualification is set by the level of the learning outcomes belonging to that qualification. Placing qualifications at their respective levels allows different qualifications to be compared and linked, and allows Croatian qualification levels to be linked to the levels of the European Qualifications Framework and of the Qualifications Framework of the European Higher Education Area. This research produced three proposals of occupational standards at the undergraduate study level (System Analyst, Developer, ICT Operations Manager) and two at the graduate (master) level (System Architect, Business Architect). For each occupational standard, employers provided a list of key tasks and the competencies needed to perform them.
A set of competencies required for each particular job in the workplace was defined, and each set was described in more detail through its individual competencies. Based on the sets of competencies from the occupational standards, sets of learning outcomes were defined, and the competencies from the occupational standards were linked with learning outcomes. For each learning outcome, as well as for each set of learning outcomes, it was necessary to specify the verification method and the material and human resources. A further task of the project was to suggest revision and improvement of the existing study programs. It was necessary to analyze the existing programs and determine how well they fulfill the defined learning outcomes. This way, one could see: a) which learning outcomes from the qualifications standards are covered by existing courses, b) which learning outcomes have yet to be covered, c) whether they are covered by mandatory or elective courses, and d) whether some courses are unnecessary or redundant. Overall, the main research results are: a) completed proposals of qualification and occupational standards in the field of ICT, b) revised curricula of undergraduate and master study programs in ICT, c) a sustainable partnership and stakeholder association network, d) a knowledge network informing the public and stakeholders (teachers, students, and employers) about the importance of establishing the CROQF, and e) teachers educated in innovative teaching methods.

Keywords: study program, qualification standard, occupational standard, higher education, informatics and computer science

Procedia PDF Downloads 143
281 Use of Analytic Hierarchy Process for Plant Site Selection

Authors: Muzaffar Shaikh, Shoaib Shaikh, Mark Moyou, Gaby Hawat

Abstract:

This paper presents the use of the Analytic Hierarchy Process (AHP) in evaluating the selection of a new plant site by a corporation. Due to intense competition at a global level, multinational corporations continuously strive to minimize the production and shipping costs of their products. One key factor that plays a significant role in cost minimization is where the production plant is located. In the U.S., for example, labor and land costs continue to be very high, while they are much lower in countries such as India, China, and Indonesia. This is why many multinational U.S. corporations (e.g., General Electric, Caterpillar Inc., Ford, General Motors) have shifted their manufacturing plants abroad. The continued expansion and availability of the Internet, along with technological advances in computer hardware and software around the globe, have facilitated U.S. corporations' expansion abroad as they seek to reduce production costs. In particular, the management of multinational corporations is constantly engaged in evaluating countries at a broad level, or cities within specific countries, where some or all parts of their end products, or the end products themselves, can be manufactured more cheaply than in the U.S. AHP is based on the preference ratings of a specific decision maker, who can be the Chief Operating Officer of a company or a designated data analytics engineer. It serves as a tool to evaluate, first, the plant site selection criteria and, second, the alternative plant sites themselves against these criteria in a systematic manner. Examples of site selection criteria are: transportation modes, taxes, energy modes, labor force availability, labor rates, raw material availability, political stability, land costs, etc. As a necessary first step under AHP, the evaluation criteria and alternative plant site countries are identified. Depending upon the fidelity of the analysis, specific cities within a country can also be chosen as alternative facility locations.
AHP experience with this type of analysis indicates that the initial analysis can be performed at the country level. Once a specific country is chosen via AHP, secondary analyses can be performed by selecting specific cities or counties within that country. AHP analysis is usually based on the preference ratings of a decision-maker (e.g., 1 to 5, 1 to 7, or 1 to 9, where 1 means least preferred and the top value means most preferred). The decision-maker first assigns preference ratings criterion versus criterion, creating a Criteria Matrix. Next, he or she assigns preference ratings alternative versus alternative against each criterion. Once these data are collected, AHP is applied to obtain, first, the rank ordering of the criteria. Next, the alternatives are rank-ordered against each criterion, resulting in an Alternative Matrix. Finally, the overall rank ordering of the alternative facility locations is obtained by matrix multiplication of the Alternative Matrix and the Criteria Matrix. The most practical aspect of AHP is the 'what if' analysis that the decision-maker can conduct after the initial results, which provides valuable information on the sensitivity of specific criteria to other criteria and alternatives.
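The procedure above (criteria pairwise comparisons, alternative-versus-alternative comparisons under each criterion, then matrix multiplication for the overall ranking) can be sketched as follows. The priority vectors are approximated here by column normalization and row averaging, a common shortcut for the principal-eigenvector computation; the matrices shown are hypothetical examples, not data from the paper.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector from a reciprocal pairwise matrix
    (ratings on e.g. a 1-9 scale): normalize each column by its sum,
    then average across each row."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    norm = [[pairwise[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

def rank_alternatives(criteria_pairwise, alt_pairwise_per_criterion):
    """Overall scores: Alternative Matrix (one priority vector per
    criterion) multiplied by the Criteria weight vector."""
    w = ahp_weights(criteria_pairwise)
    alt_w = [ahp_weights(m) for m in alt_pairwise_per_criterion]
    n_alt = len(alt_pairwise_per_criterion[0])
    return [sum(w[c] * alt_w[c][a] for c in range(len(w)))
            for a in range(n_alt)]

# Hypothetical example: 2 criteria (say, labor rates vs. political
# stability) and 2 candidate countries compared under each criterion.
criteria = [[1, 3],
            [1/3, 1]]                 # criterion 1 rated 3x as important
alts = [[[1, 5], [1/5, 1]],          # country comparisons, criterion 1
        [[1, 1/2], [2, 1]]]          # country comparisons, criterion 2
scores = rank_alternatives(criteria, alts)
```

The 'what if' analysis the abstract highlights amounts to perturbing entries of `criteria` or `alts` and re-running `rank_alternatives` to see whether the top-ranked site changes.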

Keywords: analytic hierarchy process, multinational corporations, plant site selection, preference ratings

Procedia PDF Downloads 288
280 An Evaluation of a First Year Introductory Statistics Course at a University in Jamaica

Authors: Ayesha M. Facey

Abstract:

The evaluation sought to determine the factors associated with the high failure rate among students taking a first-year introductory statistics course. Using Tyler's Objective-Based Model, the main objectives were: to assess the effectiveness of the lecturer's teaching strategies; to determine the proportion of students who attend lectures and tutorials frequently, and the impact of infrequent attendance on performance; to determine how the assigned activities assisted students' understanding of the course content; to ascertain the issues students face in understanding the course material and obtain possible solutions to these challenges; and to determine whether the learning outcomes had been achieved, based on an assessment of the second in-course examination. A quantitative survey research strategy was employed, and the study population was students enrolled in semester one of the academic year 2015/2016. A convenience sampling approach yielded a sample of 98 students. Primary data were collected using self-administered questionnaires over a one-week period. Secondary data were obtained from the results of the second in-course examination. Data were entered and analyzed in SPSS version 22, and both univariate and bivariate analyses were conducted on the information obtained from the questionnaires. Univariate analyses described the sample through means, standard deviations, and percentages, while bivariate analyses used Spearman's rho correlation coefficient and chi-square tests. For the secondary data, an item analysis was performed to obtain the reliability of the examination questions, the difficulty index, and the discrimination index. The examination results also provided information on students' weak areas and highlighted the learning outcomes that were not achieved.
Findings revealed that students were more likely to participate in lectures than tutorials and that attendance was high for both. There was a significant relationship between participation in lectures and performance on the examination. However, a high proportion of students had been absent from three or more tutorials as well as lectures. A higher proportion of students indicated that they only sometimes completed the assignments given in lectures, and that they rarely completed tutorial worksheets. Students who were more likely to complete their assignments were significantly more likely to perform well on the examination. Additionally, students faced a number of challenges in understanding the course content; the topics of probability, the binomial distribution, and the normal distribution were the most challenging, and the item analysis also highlighted these topics as problem areas. Difficulties with mathematics and with application and analysis were the major challenges faced by students, and most students indicated that some of these challenges could be alleviated if additional examples were worked in lectures and they were given more time to solve questions. Analysis of the examination results showed that a number of learning outcomes were not achieved for several topics. Based on the findings, recommendations were made suggesting adjustments to grade allocations, the delivery of lectures, and the methods of assessment.
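The difficulty and discrimination indices used in the item analysis follow standard classical-test-theory formulas; a minimal sketch is below. The 27% upper/lower group split is the conventional choice and is an assumption here, since the abstract does not state which fraction was used.

```python
def difficulty_index(item_scores):
    """Difficulty index p: the proportion of students answering the
    item correctly (item_scores are 0/1 per student)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, frac=0.27):
    """Classical discrimination index D: item difficulty in the
    top-scoring group minus difficulty in the bottom-scoring group,
    where groups are the top/bottom `frac` of students ranked by
    total examination score (27% is the conventional fraction)."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, round(frac * len(total_scores)))
    low = [item_scores[i] for i in order[:k]]     # weakest students
    high = [item_scores[i] for i in order[-k:]]   # strongest students
    return difficulty_index(high) - difficulty_index(low)
```

An item with very low p is flagged as too hard (the pattern the evaluation found for probability and distribution questions), while low D suggests the item fails to separate strong from weak students.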

Keywords: evaluation, item analysis, Tyler’s objective based model, university statistics

Procedia PDF Downloads 190
279 Prevention and Treatment of Hay Fever Prevalence by Natural Products: A Phytochemistry Study on India and Iran

Authors: Tina Naser Torabi

Abstract:

The prevalence of allergy is affected by different factors depending on its underlying cause and on seasonal weather changes, and it accordingly requires various treatments. Although the causes of allergy are not fully understood, allergens generally provoke antigen-antibody reactions because of their antigenic traits. In this state, allergens cause the immune system to mistakenly identify harmless material as a threat, and immune function is impaired by the resulting histamine secretion. There are different causes of allergy, and plant-derived causes top the list, although animal causes cannot be ignored. An important point is that each allergenic compound elicits its own dedicated antibody, so in general every kind of allergy differs from the others. Most plants in the allergenic category can therefore cause various allergies in human beings, such as respiratory, nutritional, injection, infection, and contact allergies, each of which shows different symptoms depending on its cause, and each of which requires different prevention and treatment. Geographical conditions are another factor affecting allergy. Seasonal changes, weather conditions, and the variety of plant cover play important roles in different allergies. It goes without saying that the humid climate and varied plant cover across the seasons, especially spring, cause most allergies in Iran and India, the two countries discussed in this article. These two countries are good choices for studying allergy prevalence because of their conditions, varied plant cover, and human and animal factors. Hay fever is one such allergy, although the reasons for its prevalence are not yet fully known. Because of geographical, human, animal, and botanical factors, it is one of the most common allergies in Iran and India, where it tops the list.
A significant point about these two countries is that the botanical factor is the most important one in the prevalence of hay fever. The variety of plant cover, especially during pollination in spring, is the main reason for the prevalence of hay fever in both countries. Based on research results in pharmacognosy and phytochemistry, the pollination of certain plants in spring is a major cause of hay fever prevalence in these countries. If airborne pollens enter the human body through the air during the pollination season, they cause allergic reactions in the eyes, nasal mucosa, lungs, and respiratory system; if these particles enter the body of a susceptible person through food, they cause allergic reactions in the mouth, stomach, and the rest of the digestive system. Chemical mediators produced by the human body, such as histamine, can cause problems including the development of nasal polyps, nasal blockage, sleep disturbance, increased risk of developing asthma, blood vasodilation, sneezing, watery eyes, itching and swelling of the eyes and nasal mucosa, urticaria, decreased blood pressure, and, rarely, trauma, loss of consciousness, anaphylaxis, and finally death. This article studies the reasons for the prevalence of hay fever in Iran and India and presents prevention and treatment methods, from the phytochemistry and pharmacognosy points of view, based on local natural products of these two countries.

Keywords: hay fever, India, Iran, natural treatment, phytochemistry

Procedia PDF Downloads 166
278 A Hybrid Film: NiFe₂O₄ Nanoparticles in Poly-3-Hydroxybutyrate as an Antibacterial Agent

Authors: Karen L. Rincon-Granados, América R. Vázquez-Olmos, Adriana-Patricia Rodríguez-Hernández, Gina Prado-Prone, Margarita Rivera, Roberto Y. Sato-Berrú

Abstract:

In this work, a hybrid film based on poly-3-hydroxybutyrate (P3HB) and nickel ferrite (NiFe₂O₄) nanoparticles (NPs) was obtained by a simple and reproducible methodology in order to study its antibacterial and cytotoxic properties. The motivation for this research is current antimicrobial resistance (AMR), a threat to human health and development worldwide. AMR is caused by the emergence of bacterial strains resistant to the traditional antibiotics used in treatment. Hence the need to investigate new alternatives for preventing and treating bacterial infections. In this regard, metal oxide NPs have aroused great interest due to their unique physicochemical properties. However, their use is limited by their nanostructured nature, since the chemical and physical synthesis methods commonly used yield them as powders or colloidal dispersions. Incorporating nanostructured materials into polymer matrices therefore yields hybrid materials that can disinfect various surfaces and prevent the spread of bacteria on them. Accordingly, this work presents the synthesis and the study of the antibacterial properties of the P3HB@NiFe₂O₄ hybrid film as a potential material for inhibiting bacterial growth. The NiFe₂O₄ NPs were previously synthesized by a mechanochemical method. The P3HB and P3HB@NiFe₂O₄ films were obtained by the solvent-casting method. The films were characterized by X-ray diffraction (XRD), Raman scattering, and scanning electron microscopy (SEM). The XRD pattern showed that the NiFe₂O₄ NPs were incorporated into the P3HB polymer matrix and retained their nanometric sizes. Energy-dispersive X-ray spectroscopy (EDS) showed that the NPs are homogeneously distributed in the film. The bactericidal effect of the films was evaluated in vitro, using the broth surface method, against two opportunistic and nosocomial pathogens, Staphylococcus aureus and Pseudomonas aeruginosa.
The bacterial growth results showed that the P3HB@NiFe₂O₄ hybrid film inhibited growth by 97% for S. aureus and 96% for P. aeruginosa. Surprisingly, the plain P3HB film inhibited both bacterial strains by around 90%. The cytotoxicity of the NiFe₂O₄ NPs, the P3HB@NiFe₂O₄ hybrid film, and the P3HB film was evaluated using human skin cells, keratinocytes and fibroblasts, and the NPs were found to be biocompatible. The P3HB and hybrid films, however, were cytotoxic: although P3HB is known and reported as a biocompatible polymer, under our working conditions it was cytotoxic to keratinocytes and fibroblasts, the first barrier of human skin, and its bactericidal effect could be related to this activity. Despite this, the P3HB@NiFe₂O₄ hybrid film shows synergy in the bactericidal effect between P3HB and the NPs, increasing bacterial inhibition; in addition, the NPs decrease the cytotoxicity of P3HB toward keratinocytes. The methodology used in this work was thus successful in producing hybrid films with antibacterial activity. Future challenges remain, however, in finding NP-P3HB combinations that take advantage of their bactericidal properties without compromising biocompatibility.
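Inhibition percentages like the 97% and 96% reported above are conventionally computed as growth reduction relative to an untreated control. The abstract does not give the formula, so the following one-liner is an assumption based on standard practice, with hypothetical counts:

```python
def percent_inhibition(control_count, treated_count):
    """Growth inhibition (%) relative to the untreated control.
    Standard formula, assumed here (not stated in the source);
    counts could be CFU/mL or optical-density readings."""
    return 100.0 * (control_count - treated_count) / control_count

# Hypothetical counts: 1e6 CFU/mL in the control vs. 3e4 CFU/mL on
# the hybrid film would correspond to 97% inhibition.
inhibition = percent_inhibition(1_000_000, 30_000)  # -> 97.0
```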

Keywords: poly-3-hydroxybutyrate, nanoparticles, hybrid film, antibacterial

Procedia PDF Downloads 82
277 State, Public Policies, and Rights: Public Expenditure and Social and Welfare Policies in America, as Opposed to Argentina

Authors: Mauro Cristeche

Abstract:

This paper approaches the intervention of the American state in the social arena and the modeling of the rights system from the Argentinian experience, by observing the characteristics of its federal budgetary system, the evolution of social public spending and welfare programs in recent years, labor and poverty statistics, and changes in the structure of the labor market. The analysis combines different methodologies and sources: in-depth interviews with specialists, analysis of theoretical and mass-media material, and statistical sources. Among the results, it can be mentioned that the tendency toward state interventionism (what has been called the 'nationalization of social life') is quite evident in the United States and manifests itself in multiple forms. The bibliography consulted and the experts interviewed point to this increase in state presence in historical terms (beyond short-term setbacks): growth in public spending, fiscal pressure, public employment, protective and control mechanisms, the extension of welfare policies to poor sectors, etc. In fact, despite the significant differences between the two countries, the United States and Argentina show common patterns of behavior in these phenomena. On the other hand, the dissimilarities are also important, some of them determined by each country's own political history. The influence of political parties on the economic model seems more decisive in the United States than in Argentina, where the tendency toward state interventionism is more stable. The centrality of health spending is evident in the United States, while in Argentina the discussion is concentrated more on the social security system and public education. The biggest problem of the labor market in the United States is deskilling as a consequence of technological development, while in Argentina it is a result of the labor market's own weakness. Another big difference is the huge American public spending on defense.
The more federal character of the American state, as against the centralized Argentine state, is another factor of differential analysis. American public employment (around 10%) is considerably lower than the Argentinian (around 18%). The social statistics show differences, but inequality and poverty have been growing as a trend in both countries over recent decades. According to official rates, poverty stands at 14% in the United States and 33% in Argentina. American public spending is substantial (welfare spending and total public spending represent around 12% and 34% of GDP, respectively), but somewhat lower than the Latin American or European averages. In both cases, the tendency toward underemployment and skill-related unemployment has not yet become gravely serious. Probably one of the most important aspects of the analysis is that private initiative and public intervention are much more intertwined in the United States, which makes state intervention 'fuzzier', while in Argentina the distinction is clearer. Finally, the power of capital accumulation in the United States, and more specifically of its industrial and services sectors, which continue to be the engine of the economy, marks a great difference with Argentina, which relies on its agro-industrial strength and its public sector.

Keywords: state intervention, welfare policies, labor market, system of rights, United States of America

Procedia PDF Downloads 131
276 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. Biomass energy use has other beneficial effects as well, such as employment creation and pollutant reduction. However, most biomass materials cannot compete with fossil fuels in terms of energy content. High moisture content and high volatile matter yields make biomass a low-calorific fuel, a very significant drawback relative to fossil fuels. Besides, the density of biomass is generally low, which complicates transportation and storage. These negative aspects of biomass can be overcome by thermal pretreatments that upgrade its fuel properties. Torrefaction is one such thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning the material. The treated biomass is called biochar, and it has considerably lower contents of moisture, volatile matter, and oxygen than the parent biomass. Accordingly, the carbon content and calorific value of biochar increase to levels comparable with those of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay of its structure, is mostly eliminated, and the surface properties of biochar become hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species were chosen: olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree). The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, the samples were first chopped and ground to a particle size below 250 µm.
Then, the samples were subjected to torrefaction in a horizontal tube furnace, heated from ambient temperature up to 200, 250, or 300ºC at a heating rate of 10ºC/min. The biochars obtained were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC. In contrast, torrefaction at 250ºC proved optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. Increases in carbon content and reductions in oxygen content were also determined. The burning characteristics of the biochars were studied by thermal analysis using a TA Instruments SDT Q600 thermal analyzer, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass, and the resulting biochars have superior characteristics compared to the parent biomasses.
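The DTG curve mentioned above is simply the temperature derivative of the TGA mass signal; when it is not exported directly by the analyzer software, it can be computed numerically from the TGA data points. A minimal sketch with illustrative data (not measurements from this study):

```python
def dtg_curve(temps, masses):
    """Numerical DTG: derivative of the TGA mass signal with respect
    to temperature, via central differences (one-sided at the ends).
    Peaks in -dm/dT mark the temperatures of fastest mass loss,
    e.g. the devolatilization step of a biomass or biochar."""
    n = len(temps)
    dtg = []
    for i in range(n):
        j0, j1 = max(0, i - 1), min(n - 1, i + 1)
        dtg.append((masses[j1] - masses[j0]) / (temps[j1] - temps[j0]))
    return dtg

# Illustrative TGA points: mass (%) falling linearly with temperature (ºC)
dtg = dtg_curve([100, 200, 300, 400], [100, 90, 80, 70])
```

Comparing where the main -dm/dT peak sits for a biochar versus its parent biomass is one way the burning characteristics described in the abstract are typically read off such curves.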

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 373
275 Perpetrators of Ableist Sexual Violence: Understanding Who They Are and Why They Target People with Intellectual Disabilities in Australia

Authors: Michael Rahme

Abstract:

Over the past decade, an overwhelming consensus has emerged across academia, government commissions, and civil society that individuals with disabilities (IWDs), particularly those with intellectual differences, are a demographic most 'vulnerable' to experiences of sexual violence. From this global accord, numerous policies have sprouted to protect this 'pregnable' sector of society, framed primarily around liberal obligations of stewardship over the 'defenceless.' As such, these initiatives mainly target post-incident or victim-based factors of sexual violence, as is apparent in proposals for more inclusive sexual education and accessible contact lines for IWDs. Yet despite the necessity of these initiatives, sexual incidents among this demographic persist and, in nations such as Australia, continue to rise. Culture of Violence theory reveals that such discrepancies between theory and practice stem from societal structures that frame individuals as 'vulnerable', 'pregnable', or 'defenceless' because of their disability, thus propagating their own likelihood of abuse. These structures, as embodied by the Australian experience, allow these forms of sexual violence to endure through cultural ideologies that place the 'failures' of IWDs at fault while sidelining the institutions that permit the abuse. This is representative of the initiatives of preventative organizations like People with Disabilities Australia, which have focused singularly on strengthening victim protection networks even as abuse continues to rise dramatically among individuals with intellectual disabilities. Yet regardless of this rise, screening of families and workers remains inadequate and practically untouched, a reflection of a tremendous societal warp in understanding of the lived experiences of IWDs.
This pattern is also representative of the broader literature, where the study of perpetrators of violations of disability rights, particularly sexual rights, is almost absent from a field that is itself seldom studied, an absence that grants power to the abuser by stripping it from the victims. Culture of Violence theory (CVT) sheds light on the institutions that allow these perpetrators to prosper. This paper, taking a CVT approach, aims to dissipate this discrepancy in the Australian experience by way of a qualitative analysis of all available court proceedings and tribunals between 2020 and 2022: an analysis of the perpetrator, their relation to the IWD, and the motives for their actions, drawing on court and tribunal transcripts and on the psychological and behavioural reports, among other material, presented and consulted during these proceedings, all of which would be made available under the Freedom of Information Act 1982. The findings of this study, through the incorporation of CVT, determine the institutions in which these abusers function and the ideologies that motivate such behaviour, while remaining conscious of the issue of re-traumatization and of the language barriers faced by the abused. Henceforth, this study aims to serve as a potential policy guide for strengthening the support institutions that provide IWDs with their basic rights, in turn undermining sexual violence against individuals with intellectual disabilities at its roots.

Keywords: criminal profiling, intellectual disabilities, prevention, sexual violence

Procedia PDF Downloads 93
274 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction

Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj

Abstract:

Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, owing to its ability to produce powders with small particle sizes that retain a favorable bioactive profile. The aim of this study was to determine the particle size distributions of the proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C), in comparison with non-cooled grinding. Proso millet bran is used primarily as animal feed, but has potential in food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry. For both applications, finer bran particle sizes could be beneficial. Thus, millet bran was ground for 2, 4, 8 and 12 minutes using a ball mill (CryoMill, Retsch GmbH, Haan, Germany) in three grinding modes: (I) without cooling, (II) at cryo-temperature, and (III) at cryo-temperature with an intermediate 1-minute cryo-cooling step after every 2 minutes of grinding, which is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm). The oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by laser diffraction (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible from the results. Firstly, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span and specific surface area. Longer grinding times resulted in lower values of the above-listed parameters, e.g. 
the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameters occurred (d(0.5)=62.85±2.20 µm), probably because particles adhered to one another and formed larger clusters. Secondly, samples ground with LN₂ cooling exhibited smaller diameters than non-cooled samples. For example, after 8 minutes of non-cooled grinding d(0.5)=46.97±1.05 µm was achieved, while LN₂ cooling enabled collection of particles with average sizes of d(0.5)=18.57±0.18 µm. Thirdly, the application of the intermediate cryo-cooling step resulted in similar particle diameters (d(0.5)=15.83±0.36 µm, 12 min of grinding) to cryo-milling without this step (d(0.5)=16.33±2.09 µm, 12 min of grinding). This indicates that intermediate cooling is not necessary for the current application, which consequently reduces the consumption of LN₂. These results point to potentially beneficial effects of grinding millet bran at cryo-temperatures. Further research will show whether the lower particle sizes achieved, in comparison to non-cooled grinding, could increase the bioavailability of bioactive compounds, as well as the protein digestibility and dietary fiber solubility of the proso millet bran fraction.
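The percentile diameters reported above, d(0.1), d(0.5) and d(0.9), are read off the cumulative volume-undersize curve measured by the laser diffraction instrument, and the span is derived from them as (d(0.9) - d(0.1)) / d(0.5). As a minimal sketch of that calculation, the snippet below interpolates percentile diameters from an illustrative (invented, not measured) cumulative distribution; the function names and the example data are assumptions for illustration only.

```python
import numpy as np

def percentile_diameters(sizes_um, cum_volume_pct, percentiles=(10, 50, 90)):
    """Interpolate diameters (µm) at given cumulative volume percentiles.

    sizes_um: ascending particle diameters (µm)
    cum_volume_pct: cumulative volume undersize (%) at each diameter
    """
    return {p: float(np.interp(p, cum_volume_pct, sizes_um))
            for p in percentiles}

def span(d10, d50, d90):
    """Span = (d(0.9) - d(0.1)) / d(0.5): distribution width
    relative to the median diameter."""
    return (d90 - d10) / d50

# Illustrative cumulative volume-undersize data (not measured values)
sizes = np.array([1, 5, 10, 20, 50, 100, 200, 400], dtype=float)
cum = np.array([2, 8, 18, 35, 60, 82, 95, 100], dtype=float)

d = percentile_diameters(sizes, cum)
print(f"d(0.1)={d[10]:.1f} µm, d(0.5)={d[50]:.1f} µm, d(0.9)={d[90]:.1f} µm, "
      f"span={span(d[10], d[50], d[90]):.2f}")
```

With this toy curve, a shift of the whole distribution toward finer sizes (as seen when LN₂ cooling is applied) would lower all three percentile diameters, while the span captures whether the distribution also narrows.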

Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran

Procedia PDF Downloads 145