Search results for: Just in Time
1491 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator
Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur
Abstract:
Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and heat is removed from the refrigerator cabinets by one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity and temperature distribution in the cooling chamber are known to be among the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. Flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the refrigerator, forced convection in a closed rectangular cavity is modelled, representing heat transfer inside the refrigerating compartment. The cavity volume is discretized with finite volume elements and solved computationally with the appropriate momentum and energy (Navier-Stokes) equations. The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results of the 3D numerical simulations are in good agreement with the experimental airflow measurements obtained using the SPIV technique.
After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed and shelf type, glass or wire) on energy consumption, pull-down time and temperature distribution in the cabinet are studied. For each case, energy consumption is calculated from the experimental results. After the analysis, the main parameters affecting the temperature distribution inside the cabinet and the energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) as input data for optimization. The best configuration with minimum energy consumption that provides the minimum temperature difference between the shelves inside the cabinet is determined.
Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature
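The DOE input described above can be sketched as a full factorial enumeration of the three studied parameters. This is a minimal illustration only; the factor levels below are hypothetical placeholders, not values from the study:

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels (a full factorial design)."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

# Hypothetical levels for the three parameters named in the abstract.
design = full_factorial({
    "compressor_capacity": ("low", "high"),
    "fan_speed_rpm": (1200, 1800),
    "shelf_type": ("glass", "wire"),
})
for run in design:
    print(run)  # 2 x 2 x 2 = 8 simulation runs to feed the DOE
```

Each dictionary in `design` would correspond to one CFD simulation whose outputs (energy consumption, pull-down time, shelf-to-shelf temperature difference) are then supplied to the optimization.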
Procedia PDF Downloads 109
1490 Molecular Characterization of Listeria monocytogenes from Fresh Fish and Fish Products
Authors: Beata Lachtara, Renata Szewczyk, Katarzyna Bielinska, Kinga Wieczorek, Jacek Osek
Abstract:
Listeria monocytogenes is an important human and animal pathogen that causes foodborne outbreaks. The bacteria may be present in different types of food: cheese, raw vegetables, sliced meat products and vacuum-packed sausages, poultry, meat, and fish. The most common method used to investigate the genetic diversity of L. monocytogenes is pulsed-field gel electrophoresis (PFGE). This technique is reliable and reproducible and is established as the gold standard for typing of L. monocytogenes. The aim of the study was the characterization, by molecular serotyping and PFGE analysis, of L. monocytogenes strains isolated from fresh fish and fish products in Poland. A total of 301 samples, including fresh fish (n = 129) and fish products (n = 172), were collected between January 2014 and March 2016. The bacteria were detected using the ISO 11290-1 standard method. Molecular serotyping was performed with PCR. The isolates were tested with the PFGE method according to the protocol developed by the European Union Reference Laboratory for L. monocytogenes, with some modifications. Based on the PFGE profiles, two dendrograms were generated for strains digested separately with two restriction enzymes: AscI and ApaI. Analysis of the fingerprint profiles was performed using Bionumerics software version 6.6 (Applied Maths, Belgium). A similarity threshold of 95% was applied to differentiate the PFGE pulsotypes. The study revealed that 57 of 301 (18.9%) samples were positive for L. monocytogenes. The bacteria were identified in 29 (50.9%) ready-to-eat fish products and in 28 (49.1%) fresh fish. It was found that 40 (70.2%) strains were of serotype 1/2a, 14 (24.6%) of 1/2b, two (3.5%) of 4b and one (1.8%) of 1/2c. Serotypes 1/2a, 1/2b, and 4b were present with similar frequency in both categories of food, whereas serotype 1/2c was detected only in fresh fish. The PFGE analysis with AscI demonstrated 43 different pulsotypes; among them, 33 (76.7%) were represented by only one strain.
The remaining 10 profiles contained more than one isolate: 8 pulsotypes comprised two L. monocytogenes isolates, one profile three isolates, and one restriction type 5 strains. In the case of ApaI typing, the PFGE analysis showed 27 different pulsotypes, including 17 (63.0%) types represented by only one strain. Ten (37.0%) clusters contained more than one strain, among which four profiles covered two strains, three had three isolates, one had five strains, one had eight strains and one had ten isolates. It was observed that isolates assigned to the same PFGE type were usually of the same serotype (1/2a or 1/2b). The majority of the clusters contained strains of both sources (fresh fish and fish products) isolated at different times. Most of the strains grouped in one cluster by the AscI restriction were assigned to the same groups in the ApaI analysis. In conclusion, the PFGE used in this study showed a high genetic diversity among L. monocytogenes. The strains were grouped into varied clonal clusters, which may suggest different sources of contamination. The results demonstrated that serotype 1/2a was the most common among isolates from fresh fish and fish products in Poland.
Keywords: Listeria monocytogenes, molecular characteristic, PFGE, serotyping
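The 95% similarity cutoff used to define pulsotypes can be sketched in outline. The study used Bionumerics; the Dice band-matching coefficient and the greedy single-linkage grouping below are assumptions for illustration, and the band positions are invented:

```python
def dice_similarity(bands_a, bands_b):
    """Dice band-matching coefficient (%) commonly used to compare PFGE profiles."""
    a, b = set(bands_a), set(bands_b)
    if not a and not b:
        return 100.0
    return 100.0 * 2 * len(a & b) / (len(a) + len(b))

def group_pulsotypes(profiles, threshold=95.0):
    """Greedy single-linkage grouping: an isolate joins the first group that
    contains a member whose profile is >= threshold % similar to it."""
    groups = []
    for name, bands in profiles.items():
        for group in groups:
            if any(dice_similarity(bands, profiles[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical band positions (kb) for three isolates.
profiles = {
    "iso1": [20, 48, 97, 145, 310],
    "iso2": [20, 48, 97, 145, 310],  # identical profile -> same pulsotype
    "iso3": [33, 61, 120, 260],
}
print(group_pulsotypes(profiles))
```

With the identical iso1/iso2 profiles, the function places them in one pulsotype and iso3 in another, mirroring how a 95% cutoff partitions a dendrogram.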
Procedia PDF Downloads 289
1489 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. The direct cost is easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, despite several research projects having addressed this subject. The proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore the resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of productivity reduction for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.
Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
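The reported direction of the project-size effect can be illustrated with a deliberately simple toy model, not the authors' simulation: a change forces a fixed amount of rework done in less-productive overtime, so a larger project dilutes the relative loss. All parameter values below are hypothetical:

```python
def productivity_loss(n_activities, rework=5, overtime_factor=0.8):
    """Toy reactive-scheduling model: a project change forces `rework`
    activities to be redone in overtime, where one overtime hour yields only
    `overtime_factor` of a normal hour's output. Returns the fraction of
    labour productivity lost relative to the no-change plan."""
    overtime_hours = rework / overtime_factor  # extra hours spent on the rework
    planned_hours = float(n_activities)        # one nominal hour per activity
    productivity = planned_hours / (planned_hours + overtime_hours)
    return 1.0 - productivity

print(round(productivity_loss(30), 3))   # small (30-job) project
print(round(productivity_loss(120), 3))  # large (120-job) project: smaller relative loss
```

The toy model reproduces only the qualitative finding that, for the same change, the relative productivity loss shrinks as the number of activities grows.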
Procedia PDF Downloads 238
1488 Lead Removal from Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies
Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof
Abstract:
Exposure of galena (PbS), tealite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead concentrations as high as 69.46 mg/L, and lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results obtained indicated that a maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm2, an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and the BP-S electrode arrangement. The above experimental data were further modeled and optimized using a 2-level 4-factor full factorial design, a Response Surface Methodology (RSM). The four factors optimized were the current density, the electrode spacing, the electrode arrangement, and the liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that an increase in current density and LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% with an electrode spacing of 0.38 cm, among other parameters. Applying the predicted parameters, a lead removal efficiency of 100% was achieved.
The electrode and energy consumptions were 0.192 kg/m3 and 2.56 kWh/m3, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption followed the pseudo-second-order kinetic model. The adsorption process was also random, spontaneous, and endothermic; a higher process temperature enhances the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface, and showed good adsorption efficiency of the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead, using the electrocoagulation method.
Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics
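Electrode and energy consumptions of the kind reported above are conventionally estimated from Faraday's law and the electrical energy balance. The sketch below uses hypothetical operating values (current, cell voltage, treated volume), not the study's measured conditions, so its outputs differ from the reported 0.192 kg/m3 and 2.56 kWh/m3:

```python
def fe_electrode_consumption(current_a, time_s, volume_m3, z=2, molar_mass=55.85):
    """Theoretical Fe dissolved at the anode (Faraday's law), in kg per m3 of
    treated water. z=2 assumes Fe -> Fe2+ dissolution."""
    FARADAY = 96485.0  # C/mol
    mass_g = current_a * time_s * molar_mass / (z * FARADAY)
    return mass_g / 1000.0 / volume_m3

def energy_consumption(voltage_v, current_a, time_s, volume_m3):
    """Electrical energy use in kWh per m3 of treated water (1 kWh = 3.6e6 J)."""
    return voltage_v * current_a * time_s / 3.6e6 / volume_m3

# Hypothetical run: 0.6 A for 60 min in 0.5 L of water at 5 V cell voltage.
fe = fe_electrode_consumption(0.6, 3600, 0.0005)
en = energy_consumption(5.0, 0.6, 3600, 0.0005)
print(round(fe, 3), "kg/m3,", round(en, 2), "kWh/m3")
```

In practice the measured electrode loss can exceed the Faradaic prediction because of chemical dissolution, which is one reason such values are reported experimentally.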
Procedia PDF Downloads 149
1487 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, which confirmed the validity of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and the magnetic field gradient increased with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles could also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles.
By combining the results of the studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles could be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be set accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.
Keywords: eddy current separation, particle size, numerical simulation, metal recovery
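The Pearson correlation between simulated eddy current force and measured repulsion distance mentioned above is a standard sample statistic; a minimal stdlib computation is sketched below, with invented force/distance values for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical simulated eddy current forces (mN) vs. measured repulsion distances (cm).
force = [1.2, 2.5, 3.1, 4.8, 6.0]
distance = [10, 21, 26, 40, 52]
print(round(pearson_r(force, distance), 3))
```

A coefficient close to +1, as in this toy data, is the kind of evidence the abstract cites for treating the simulated force as a proxy for the experimentally observed repulsion.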
Procedia PDF Downloads 89
1486 Monitoring Soil Moisture Dynamics in the Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is an endemic tree of southwest Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. This tree can grow in extremely arid regions of Morocco, where annual rainfall ranges between 100 and 300 mm and where no other tree species can live. It has been designated a UNESCO Biosphere Reserve since 1998. The argan tree is of great importance for the human and animal feeding of the rural population as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate the temporal variation in root-zone moisture dynamics in response to variations in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates between resistive woody roots, dry soil and moist soil. Time-dependent measurements (from April till July) of resistivity sections were performed along a 94 m surface transect at a fixed electrode spacing of 2 m. The transect included eight argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can be used to accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: a moderately resistive layer on top, corresponding to relatively dry soil (a calcareous formation with intercalations of marly strata) and interspersed with very resistive zones corresponding to woody roots; a conductive (moist) middle layer; and, below the conductive layer, another moderately resistive layer.
We note that throughout the experiment there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing the δ18O in xylem water and in the range of possible water sources, we argued that rain is the main water source in winter and spring but not in summer; the trees are not exploiting deep water from the aquifer, as is popularly assumed, but are instead using soil water at a few meters' depth. The results of the present work confirm the idea that the roots of Argania spinosa do not grow very deep.
Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 328
1485 The Potential of Edaphic Algae for Bioremediation of Diesel-Contaminated Soil
Authors: C. J. Tien, C. S. Chen, S. F. Huang, Z. X. Wang
Abstract:
Algae in soil ecosystems can produce organic matter and oxygen by photosynthesis. Heterocyst-forming cyanobacteria can fix nitrogen to increase soil nitrogen contents. Secretion of mucilage by some algae increases soil water content and soil aggregation. These actions improve soil quality and fertility, and further increase the abundance and diversity of soil microorganisms. In addition, some mixotrophic and heterotrophic algae are able to degrade petroleum hydrocarbons. Therefore, the objectives of this study were to analyze the effects of algal addition on the degradation of total petroleum hydrocarbons (TPH) and on the diversity and activity of bacteria and algae in diesel-contaminated soil, under different nutrient contents and frequencies of plowing and irrigation, in order to assess a potential bioremediation technique using edaphic algae. A known amount of diesel was added to farmland soil. This diesel-contaminated soil was subjected to five treatments: experiment-1 with algal addition and plowing and irrigation every two weeks, experiment-2 with algal addition and plowing and irrigation every four weeks, experiment-3 with algal and nutrient addition and plowing and irrigation every two weeks, experiment-4 with algal and nutrient addition and plowing and irrigation every four weeks, and a control without algal addition. Soil samples were taken every two weeks to analyze TPH concentrations, the diversity of bacteria and algae, and the catabolic genes encoding functional degrading enzymes. The results show that the TPH removal rates of the five treatments after the two-month experimental period were, in decreasing order: experiment-2 > experiment-4 > experiment-3 > experiment-1 > control. This indicated that algal addition enhanced the degradation of TPH in the diesel-contaminated soil, whereas nutrient addition did not. Plowing and irrigation every four weeks resulted in more TPH removal than every two weeks.
The banding patterns of denaturing gradient gel electrophoresis (DGGE) revealed an increase in the diversity of bacteria and algae after algal addition. Three petroleum hydrocarbon-degrading algae (Anabaena sp., Oscillatoria sp. and Nostoc sp.) and the two added algal strains (Leptolyngbya sp. and Synechococcus sp.) were sequenced from prominent DGGE bands. The four hydrocarbon-degrading bacteria Gordonia sp., Mycobacterium sp., Rhodococcus sp. and Alcanivorax sp. were abundant in the treated soils. These results suggested that the growth of indigenous bacteria and algae was improved after adding edaphic algae. Real-time polymerase chain reaction results showed that the four catabolic genes encoding catechol 2,3-dioxygenase, toluene monooxygenase, xylene monooxygenase and phenol monooxygenase were present and expressed in the treated soil. The addition of algae increased the expression of these genes at the end of the experiments, promoting the biodegradation of petroleum hydrocarbons. This study demonstrated that edaphic algae are suitable biomaterials for bioremediating diesel-contaminated soils with plowing and irrigation every four weeks.
Keywords: catabolic gene, diesel, diversity, edaphic algae
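The abstract reports relative amounts of catabolic genes by real-time PCR without naming the quantification method; a common choice is the Livak 2^-ΔΔCt calculation, sketched here with hypothetical Ct values (the gene name in the comment is one of those listed above, but the numbers are invented):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative quantification: fold change of a target gene in a
    treated sample versus an untreated control, normalised to a reference gene.
    Assumes ~100% amplification efficiency for both assays."""
    d_ct_sample = ct_target - ct_ref            # dCt in the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl # dCt in the control sample
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values for a catabolic gene (e.g. toluene monooxygenase).
fold = relative_expression(24.0, 18.0, 27.0, 18.0)
print(fold)  # 2^-((24-18)-(27-18)) = 2^3 = 8.0
```

A fold change above 1 would correspond to the increased expression in algae-treated soil described in the abstract.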
Procedia PDF Downloads 280
1484 Hybrid Manufacturing System to Produce 3D Structures for Osteochondral Tissue Regeneration
Authors: Pedro G. Morouço
Abstract:
A major challenge in tissue engineering is the production of 3D constructs capable of mimicking the functional hierarchy of native tissues. This is especially true for osteochondral tissue, due to the complex mechanical functional unit based on the junction of articular cartilage and bone. Thus, the aim of the present study was to develop a new additive manufacturing system coupling micro-extrusion with hydrogel printing. An integrated system was developed with 2 main features: (i) the printing of up to three distinct hydrogels; (ii) in coordination, the printing of a thermoplastic structural support. The hydrogel printing module was designed as a 'revolver-like' system, where the hydrogel selection is made by a rotating mechanism and the hydrogel deposition is controlled by pressurized air input. Specific components approved for medical use were incorporated in the material dispensing system (Nordson EFD Optimum® fluid dispensing system). The thermoplastic extrusion module enables control of the required extrusion temperature through electric resistances in the polymer reservoir and the extrusion system. After testing and upgrades, a hydrogel module with 3 syringes (3 cm3 capacity each), a pressure range of 0-2.5 bar, a rotational speed of 0-5 rpm, and needles from 200-800 µm was obtained. This module was successfully coupled to the extrusion system, which operates at temperatures up to 300 ˚C and pressures of 0-12 bar, with nozzles from 200-500 µm. The motor provides a velocity range of 0-2000 mm/min. Although there are distinct printing requirements for hydrogels and polymers, the novel system could produce hybrid scaffolds combining the 2 modules. The morphological analysis showed high agreement (n = 5) between the theoretical and obtained filament and pore sizes (350 µm and 300 µm vs. 342 ± 4 µm and 302 ± 3 µm, p > 0.05, respectively) of the polymer, and multi-material 3D constructs were successfully obtained.
Human tissues present very distinct and complex structures regarding their mechanical properties, organization, composition and dimensions. For osteochondral regenerative medicine, a multiphasic scaffold is required, as the subchondral bone and the overlying cartilage must regenerate at the same time. Thus, a scaffold with 3 layers (bone, intermediate and cartilage parts) can be a promising approach. The developed system may provide a suitable solution for constructing those hybrid scaffolds with enhanced properties. The present novel system is a step forward for osteochondral tissue engineering due to its ability to generate layered, mechanically stable implants through the double-printing of hydrogels with thermoplastics.
Keywords: 3D bioprinting, bone regeneration, cartilage regeneration, regenerative medicine, tissue engineering
Procedia PDF Downloads 166
1483 The Social Structuring of Mate Selection: Assortative Marriage Patterns in the Israeli Jewish Population
Authors: Naava Dihi, Jon Anson
Abstract:
Love, so it appears, is not socially blind. We show that partner selection is socially constrained, and the freedom to choose is limited by at least two major factors, or capitals: on the one hand, material resources and education, locating the partners on a scale of personal achievement and economic independence; on the other, the partners' ascriptive belonging to particular ethnic, or origin, groups, differentiated by the groups' social prestige as well as by their culture, history and even physical characteristics. However, the relative importance of achievement and ascriptive factors, as well as the overlap between them, varies from society to society, depending on the society's structure and the factors shaping it. Israeli social structure has been shaped by the waves of new immigrants who arrived over the years. The timing of their arrival, their patterns of physical settlement and their occupational inclusion or exclusion have together created a mosaic of social groups whose principal common feature has been the country of origin from which they arrived. The analysis of marriage patterns helps illuminate the social meanings of the groups and their borders. To the extent that ethnic group membership has meaning for individuals and influences their life choices, the ascriptive factor will gain in importance relative to the achievement factor in their choice of marriage partner. In this research, we examine Jewish Israeli marriage patterns by looking at the marriage choices of 5,041 women aged 15 to 49 who were single at the census in 1983 and who were married at the time of the 1995 census, 12 years later. The database for this study was a file linking respondents from the 1983 and the 1995 censuses. In both cases, 5 percent of households were randomly chosen, so that our sample includes about 4 percent of women in Israel in 1983.
We present three basic analyses: (1) who was still single in 1983, using personal and household data from the 1983 census (binomial model); (2) who married between 1983 and 1995, using personal and household data from the 1983 census (binomial model); (3) what were the personal characteristics of the women's partners in 1995, using data from the 1995 census (loglinear model). We show (i) that material and cultural capital both operate to delay marriage and to increase the probability of remaining single; and (ii) that while there is a clear association between ethnic group membership and education, endogamy and homogamy both operate as separate forces which constrain (but do not determine) the choice of marriage partner, and thus both serve to reproduce the current pattern of relationships, as well as identifying patterns of proximity and distance between the different groups.
Keywords: Israel, nuptiality, ascription, achievement
Procedia PDF Downloads 115
1482 Design, Construction, Validation and Use of a Novel Portable Fire Effluent Sampling Analyser
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; be calibratable at the test site; be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; be capable of independent data logging; be capable of automated switchover of 7 bubblers; be able to withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be capable of being controlled remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests using PMMA and PA 6.6 was conducted to assess the validity of the box analyser measurements and the data-logging abilities of the apparatus. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test.
The analyser was set up, calibrated and configured to record smoke toxicity measurements in the doorway of the test room. The analyser operated successfully without manual interference and recorded data for all 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. The analyser is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators.
Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment
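The .csv logging with pre-set bubbler switchover described above can be sketched as follows. The column names, bubbler timing and gas readings are hypothetical, not the instrument's actual format:

```python
import csv
import io

def log_readings(readings, bubbler_minutes=5.0, n_bubblers=7):
    """Write time-stamped gas concentrations to CSV text, tagging each row with
    the bubbler active at that time, given a fixed pre-set time per bubbler."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time_min", "bubbler", "CO_ppm", "CO2_pct", "O2_pct"])
    for t, co, co2, o2 in readings:
        # Bubblers 1..n_bubblers switch over every `bubbler_minutes` minutes.
        bubbler = min(int(t // bubbler_minutes) + 1, n_bubblers)
        writer.writerow([t, bubbler, co, co2, o2])
    return buf.getvalue()

# Hypothetical readings: (minutes, CO ppm, CO2 %, O2 %).
csv_text = log_readings([(0.0, 120, 0.4, 20.7), (6.0, 950, 1.8, 18.9)])
print(csv_text)
```

The resulting file opens directly in any spreadsheet, which matches the abstract's point that the output needs no specialist knowledge to interpret.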
Procedia PDF Downloads 77
1481 Association between TNF-α and Its Receptor TNFRSF1B Polymorphism with Pulmonary Tuberculosis in Tomsk, Russian Federation
Authors: K. A. Gladkova, N. P. Babushkina, E. Y. Bragina
Abstract:
Purpose: Tuberculosis (TB), caused by Mycobacterium tuberculosis, is one of the major public health problems worldwide. The immune response to M. tuberculosis infection is a balance between inflammatory and anti-inflammatory responses, in which tumour necrosis factor-α (TNF-α) plays a key role as a pro-inflammatory cytokine. TNF-α is involved in various cellular immune responses via binding to its two types of membrane-bound receptors, TNFRSF1A and TNFRSF1B. Importantly, some variants of the TNFRSF1B gene have been considered possible markers of host susceptibility to TB. However, the possible impact of polymorphisms in the TNF-α and receptor genes on TB cases in Tomsk had not been studied. Thus, the purpose of our study was to investigate polymorphisms of the TNF-α (rs1800629) and TNFRSF1B (rs652625 and rs525891) genes in the population of Tomsk and to evaluate their possible association with the development of pulmonary TB. Materials and Methods: The population distribution of the gene polymorphisms was investigated in a case-control study based on a group of people from Tomsk. Human blood was collected during routine patient examinations at the Tomsk Regional TB Dispensary. Altogether, 234 TB-positive patients (80 women, 154 men, average age 28 years) and 205 healthy controls (153 women, 52 men, average age 47 years) were investigated. DNA was extracted from blood plasma by the phenol-chloroform method. Genotyping was carried out by a single-nucleotide-specific real-time PCR assay. Results: First, an interpopulation comparison was carried out between healthy individuals from Tomsk and available data from the 1000 Genomes Project. For the rs1800629 polymorphism, the Tomsk population was significantly different from the Japanese (P = 0.0007) but similar to the following European subpopulations: Italians (P = 0.052), Finns (P = 0.124) and British (P = 0.910).
For rs525891, the Tomsk group differed significantly from the South African population (P = 0.019), while rs652625 showed significant differences from Asian populations: Chinese (P = 0.03) and Japanese (P = 0.004). Next, we compared healthy individuals versus patients with TB. No association was detected between the rs1800629 and rs652625 polymorphisms and TB. Importantly, the AT genotype of polymorphism rs525891 was significantly associated with resistance to TB (odds ratio (OR) = 0.61; 95% confidence interval (CI): 0.41-0.9; P < 0.05). Conclusion: To the best of our knowledge, this is the first report that the TNFRSF1B polymorphism rs525891 is associated with TB in the Tomsk population, with the AT genotype being protective (OR = 0.61). In contrast, no significant association was detected between the TNF-α (rs1800629) and TNFRSF1B (rs652625) polymorphisms and pulmonary TB cases in the Tomsk population. In conclusion, our data expand the molecular particularities associated with TB. The study was supported by a grant from the Russian Foundation for Basic Research (#15-04-05852).
Keywords: polymorphism, tuberculosis, TNF-α, TNFRSF1B gene
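The association statistic reported above (OR with a 95% confidence interval) follows the standard 2×2-table calculation with a Wald interval on the log odds ratio. A minimal sketch, using hypothetical genotype counts chosen only for illustration (not the study's raw data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% confidence interval from a 2x2 table:
    a/b = carriers/non-carriers among cases, c/d = same among controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the Tomsk data):
or_, lo, hi = odds_ratio_ci(80, 154, 95, 110)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

An OR below 1 whose confidence interval excludes 1, as reported for the AT genotype, indicates a protective association.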
Procedia PDF Downloads 180
1480 Combined Cultivation of Endemic Strains of Lactic Acid Bacteria and Yeast with Antimicrobial Properties
Authors: A. M. Isakhanyan, F. N. Tkhruni, N. N. Yakimovich, Z. I. Kuvaeva, T. V. Khachatryan
Abstract:
Introduction: At present, symbiotic preparations based on different genera and species of lactic acid bacteria (LAB) and yeasts are used. One of the basic properties of probiotics is antimicrobial activity; therefore, the selection of LAB and yeast strains for co-cultivation with the aim of increasing this activity is topical. Since probiotic yeasts and bacteria have different mechanisms of action, natural synergies between species, higher viability, and increased antimicrobial activity might be expected from combining both types of probiotics. Endemic strains of the LAB Enterococcus faecium БТK-64, Lactobacillus plantarum БТK-66, Pediococcus pentosus БТK-28, and Lactobacillus rhamnosus БТK-109, and the yeast strains Kluyveromyces lactis БТX-412 and Saccharomycopsis sp. БТX-151, with probiotic properties and high antimicrobial activity, were selected. The strains are deposited in the "Microbial Depository Center" (MDC) of SPC "Armbiotechnology". Methods: LAB and yeast strains were isolated from different dairy products from rural households of Armenia. Genotyping by 16S rRNA sequencing for LAB and 26S rRNA sequencing for yeast was used. Combined cultivation of LAB and yeast strains was carried out in nutrient media based on milk whey, under anaerobic conditions (without a shaker, in a thermostat at 37°C, 48 hours). The complex preparations were obtained by purifying the cell-free culture (CFC) broth with a combination of ion-exchange chromatography and gel filtration. The spot-on-lawn method was applied to determine antimicrobial activity, expressed in arbitrary units (AU/ml). Results: The data showed that, during combined growth of bacteria and yeasts, the cultivation conditions (medium composition, growth time, genera of LAB and yeast) affected the antimicrobial activity displayed.
Purification of the CFC broth yielded a partially purified antimicrobial complex preparation containing metabiotics from both bacteria and yeast. The complex preparation inhibited the growth of pathogenic and conditionally pathogenic bacteria, isolated from various internal organs of diseased animals and poultry, with greater efficiency than preparations derived from yeast or LAB strains alone. Discussion: Thus, our data show the prospects for creating a new class of antimicrobial preparations based on the combined cultivation of endemic LAB and yeast strains. The results suggest the potential use of the partially purified complex preparations instead of antibiotics in agriculture and for food safety. Acknowledgments: This work was supported by the RA MES State Committee of Science and the Belarus National Foundation for Basic Research within the framework of the joint Armenian-Belarusian research project 13РБ-064.
Keywords: co-cultivation, antimicrobial activity, biosafety, metabiotics, lactic acid bacteria, yeast
Procedia PDF Downloads 339
1479 Improving Health Workers’ Well-Being in Cittadella Hospital (Province of Padua), Italy
Authors: Emanuela Zilli, Suana Tikvina, Davide Bonaldo, Monica Varotto, Scilla Rizzardi, Barbara Ruzzante, Raffaele Napolitano, Stefano Bevilacqua, Antonella Ruffatto
Abstract:
A healthy workplace increases productivity and creativity and decreases absenteeism and turnover. It also contributes to a more secure work environment with fewer risks of violence. In the past 3 years, the healthcare system has suffered the psychological, economic, and social consequences of the COVID-19 pandemic. At the same time, healthcare staff reductions produce high levels of work-related stress that are often unsustainable. The Hospital of Cittadella (in the province of Padua) has 400 beds and serves a territory of 300,000 inhabitants. The hospital itself counts 1,250 healthcare employees (healthcare professionals). This year, the Medical Board of Directors requested additional staff; however, the economic situation of Italy cannot sustain additional hires. At the same time, we have initiated projects that aim to increase well-being, decrease stress, and encourage activities that promote self-care. One of the projects the hospital has organized is psychomotor practice, held by therapists and trainers who operate according to the traditional method. According to the literature, psychomotor practice is specifically suited to staff operating in the Intensive Care Unit, Emergency Department, and Pneumology Ward. The project consisted of one 45-minute session a week for 3 months. This method focuses on controlled breathing, posture, muscle work, and movement to help manage stress and fatigue, creating a more mindful and sustainable lifestyle. In addition, a Qigong course was held every two weeks for 5 months; Qigong is an ancient Chinese practice designed to optimize the energy within the body, reducing stress levels and increasing general well-being. Finally, Tibetan singing crystal bowl sessions, held by a music therapist, consisted of monthly guided meditation sessions using the sounds of the crystal bowls.
Sound therapy uses the vibrations created by the crystal bowls to balance the vibrations within the body and promote relaxation. In conclusion, well-being and organizational performance are closely related. It is crucial for any organization to encourage and maintain better physical and mental health of healthcare staff, as it directly affects productivity and, consequently, user satisfaction with the services provided.
Keywords: health promotion, healthcare workers management, well-being and organizational performance, psychomotor practice
Procedia PDF Downloads 68
1478 Flexible Design Solutions for Complex Free-Form Geometries Aimed at Optimizing Performance and Resource Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In this new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps in the design procedure for a free-form architectural object, a column-type structure with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and the relationships between forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems (L-systems), cellular automata, genetic algorithms, or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the design process stages and a shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected.
The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
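Of the generative procedures named above, Lindenmayer systems are the simplest to sketch: a string of symbols is rewritten in parallel by production rules, and repeated application generates geometry of growing complexity. A minimal illustration (the grammar and symbols below are invented for this example, not taken from the article):

```python
def l_system(axiom, rules, iterations):
    """Rewrite every symbol of the string in parallel per the
    production rules; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch-curve-style grammar: F = move forward, +/- = turn 90 degrees.
rules = {"F": "F+F-F-F+F"}
print(l_system("F", rules, 1))       # one rewriting pass
print(len(l_system("F", rules, 3)))  # string length grows geometrically
```

In a design pipeline, the resulting symbol string would be fed to a turtle-graphics or CAD interpreter to produce the actual 3D geometry, which is then filtered by the selection criteria described above.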
Procedia PDF Downloads 377
1477 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. It is therefore important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, serves as a tool for increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity about responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help clarify roles and responsibilities and improve data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable.
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can help improve overall performance by enabling better business insights through improved reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by feedback from AEKIDEN's experience. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 136
1476 Benefits of Shaping a Balance between Environmental and Economic Sustainability for Population Health
Authors: Edna Negron-Martinez
Abstract:
The global challenges and trends of our time, like those associated with climate change, demographic displacement, growing health inequalities, and the increasing burden of disease, have complex connections to the determinants of health. Information on the causes and prevention of the burden of disease is fundamental for public health action, such as preparedness and response for disasters and recovery resources after the event. For instance, there is increasing consensus about key findings on the effects and connections of the global burden of disease: it generates substantial healthcare costs, consumes essential resources, and prevents the attainment of optimal health and well-being. The goal of this research endeavor is to promote a comprehensive understanding of the connections between social, environmental, and economic influences on health. These connections are illustrated by drawing on the core curriculum of multidisciplinary areas, such as urban design, energy, housing, and economics, as well as on the health system itself. A systematic review of primary and secondary data covered a variety of issues, such as global health, natural disasters, and the impacts of critical pollution on people's health and ecosystems. Environmental health is challenged by unsustainable consumption patterns and the resulting contaminants that abound in many cities and urban settings around the world. Poverty, inadequate housing, and poor health are usually linked. The house is a primary environmental health context for any individual, especially for more vulnerable groups such as children, older adults, and those who are sick. Nevertheless, very few countries show strong decoupling of environmental degradation from economic growth, as indicated by a 2017 World Bank report.
Notably, a 2016 World Health Organization (WHO) report estimated the environmental fraction of the global burden of disease at 12.6 million deaths, accounting for 23% (95% CI: 13-34%) of all deaths attributable to the environment. Environmental hazards include heavy metals, noise pollution, light pollution, and urban sprawl. These key findings underscore the need to urgently adopt, on a global scale, the United Nations post-2015 Sustainable Development Goals (SDGs). The SDGs address the social, environmental, and economic factors that influence health and health inequalities, showing how these sectors, in turn, benefit from a healthy population. Consequently, more action is needed, from an inter-sectoral and systemic paradigm, to enforce an integrated sustainability policy aimed at the environmental, social, and economic determinants of health.
Keywords: building capacity for workforce development, ecological and environmental health effects of pollution, public health education, sustainability
Procedia PDF Downloads 107
1475 Incidence and Risk Factors of Traumatic Lumbar Puncture in Newborns in a Tertiary Care Hospital
Authors: Heena Dabas, Anju Paul, Suman Chaurasia, Ramesh Agarwal, M. Jeeva Sankar, Anurag Bajpai, Manju Saksena
Abstract:
Background: Traumatic lumbar puncture (LP) is a common occurrence and causes substantial diagnostic ambiguity. There is a paucity of data regarding its epidemiology. Objective: To assess the incidence and risk factors of traumatic LP in newborns. Design/Methods: In a prospective cohort study, all inborn neonates admitted to the NICU and planned to undergo LP for a clinical indication of sepsis were included. Neonates with diagnosed intraventricular hemorrhage (IVH) of grade III or IV were excluded. The LP was done by an operator, often a fellow or resident, assisted by a bedside nurse. The unit has a policy of not routinely using any sedation/analgesia during the procedure. LP is done with a 26 G, 0.5-inch hypodermic needle inserted into the third or fourth lumbar interspace while the infant is in the lateral position. The infants were monitored clinically and by continuous measurement of vital parameters using a multipara monitor during the procedure. The occurrence of a traumatic tap, along with CSF parameters and operator and assistant characteristics, was recorded at the time of the procedure. A traumatic tap was defined as the presence of visible blood or more than 500 red blood cells on microscopic examination; microscopic trauma was defined as CSF without visible blood but with numerous RBCs. The institutional ethics committee approved the study protocol. Written informed consent was obtained from the parents and the health care providers involved. Neonates were followed up until discharge or death, and a final diagnosis was assigned together with the treating team. Results: A total of 362 (21%) of the 1726 neonates born at the hospital were admitted during the study period (July 2016 to January 2017). Among these, 97 (26.7%) were suspected of sepsis. A total of 54 neonates who met the eligibility criteria and whose parents consented to participate were enrolled. The mean (SD) birthweight was 1536 (732) grams and gestational age 32.0 (4.0) weeks.
All LPs were indicated for late-onset sepsis, at a median (IQR) age of 12 (5-39) days. Traumatic LP occurred in 19 neonates (35.1%; 95% CI: 22.6% to 49.3%). Frank blood was observed in 7 (36.8%); in the remaining 12 (63.1%), the CSF showed microscopic trauma. Preliminary risk factor analysis, including birth weight, gestational age, and operator/assistant and other characteristics, did not identify clinically relevant predictors. Conclusion: A substantial proportion of neonates requiring lumbar puncture in our study had a traumatic tap. We were not able to identify modifiable risk factors. There is a need to understand the reasons and to reduce the incidence of traumatic taps to improve management in NICUs.
Keywords: incidence, newborn, traumatic, lumbar puncture
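The study's operational definition of a traumatic tap reduces to a simple decision rule. A sketch of that rule (the function name and return labels are ours, introduced for illustration):

```python
def classify_tap(visible_blood, rbc_count):
    """Traumatic tap per the study definition: visible (frank) blood,
    or more than 500 red blood cells on microscopic examination."""
    if visible_blood:
        return "traumatic (frank blood)"
    if rbc_count > 500:
        return "traumatic (microscopic)"
    return "atraumatic"

print(classify_tap(False, 1200))  # traumatic (microscopic)
print(classify_tap(False, 120))   # atraumatic
```

Note that a count of exactly 500 is atraumatic under the "more than 500" wording; a boundary case worth fixing explicitly in any analysis script.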
Procedia PDF Downloads 297
1474 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic
Authors: Hanan Al-Jabri
Abstract:
Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time pressure, different linguistic systems, and stress, pose a great challenge to most interpreters, and these constraints are magnified when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs, and is mostly asked to interpret high-profile events, which raises stress levels and further complicates the task. Under these constraints, which require fast and efficient performance, TV interpreters at four TV channels were asked to render Trump's victory speech into Arabic. They also had to deal with the burden of rendering the English emotive overtones employed by the speaker into an entirely different linguistic system. The current study investigates how TV interpreters working in the simultaneous mode handled this task; it explores and evaluates the TV interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded, or abandoned in their renditions. It also explores the difficulties and challenges that emerged during this process and may have influenced the interpreters' linguistic choices. To achieve its aims, the study analysed Trump's victory speech, delivered on November 9, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as of the speaker and his political background, to help understand the linguistic choices he made in the speech; the latter investigates the linguistic tools the speaker employed to stir people's emotions.
These tools were investigated based on Shamaa's (1978) classification of emotive meaning according to linguistic level: phonological, morphological, syntactic, and lexical-semantic. At this level, the analysis also identifies the patterns of rendition detected in the Arabic deliveries. The results of the study identified different rendition patterns in the Arabic deliveries, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the rapid and continuous delivery of some stretches and highly dense segments, among others. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially as little is known about interpreting practices in the field of TV, particularly between Arabic and English.
Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting
Procedia PDF Downloads 159
1473 Effect of Minimalist Footwear on Running Economy Following Exercise-Induced Fatigue
Authors: Jason Blair, Adeboye Adebayo, Mohamed Saad, Jeannette M. Byrne, Fabien A. Basset
Abstract:
Running economy is a key physiological parameter of an individual's running efficiency and a valid tool for predicting performance outcomes. Of the many factors known to influence running economy (RE), footwear certainly plays a role owing to characteristics that vary substantially from model to model. Although minimalist footwear is believed to enhance RE, and thereby endurance performance, conclusive research reports are scarce; indeed, debate remains as to which footwear characteristics most alter RE. The purposes of this study were, therefore, two-fold: (a) to determine whether wearing minimalist shoes results in better RE compared to shod running and to identify relationships with kinematic and muscle activation patterns; (b) to determine whether changes in RE with minimalist shoes are still evident following a fatiguing bout of exercise. Well-trained male distance runners (n=10; 29.0 ± 7.5 yrs; 71.0 ± 4.8 kg; 176.3 ± 6.5 cm) first partook in a maximal O₂ uptake determination test (VO₂ₘₐₓ = 61.6 ± 7.3 ml min⁻¹ kg⁻¹) 7 days prior to the experimental sessions. Second, in a fully randomized fashion, an RE test consisting of three 8-min treadmill runs in shod and minimalist footwear was performed prior to and following exercise-induced fatigue (EIF). The minimalist and shod conditions were tested with a minimum 7-day wash-out period between conditions. The RE bouts, interspaced by 2-min rest periods, were run at 2.79, 3.33, and 3.89 m s⁻¹ with a 1% grade. EIF consisted of 7 × 1000 m at 94-97% VO₂ₘₐₓ interspaced with 3-min recovery periods. Cardiorespiratory, electromyography (EMG), kinematic, rating of perceived exertion (RPE), and blood lactate measures were collected throughout the experimental sessions. A significant main effect of speed on RE (p=0.001) and stride frequency (SF) (p=0.001) was observed.
Pairwise comparisons showed that running at 2.79 m s⁻¹ was less economical than running at 3.33 and 3.89 m s⁻¹ (3.56 ± 0.38, 3.41 ± 0.45, and 3.40 ± 0.45 ml O₂ kg⁻¹ km⁻¹, respectively) and that SF increased as a function of speed (79 ± 5, 82 ± 5, 84 ± 5 strides min⁻¹). Further, EMG analyses revealed that root mean square EMG increased significantly as a function of speed for all muscles (Biceps femoris, Gluteus maximus, Gastrocnemius, Tibialis anterior, Vastus lateralis). During EIF, the statistical analysis revealed a significant main effect of time on lactate production (from 2.7 ± 5.7 to 11.2 ± 6.2 mmol L⁻¹), RPE scores (from 7.6 ± 4.0 to 18.4 ± 2.7), and peak HR (from 171 ± 30 to 181 ± 20 bpm), except for the recovery period. Surprisingly, a significant main effect of footwear was observed on running speed during the intervals (p=0.041): participants ran faster in minimalist shoes than shod (3:24 ± 0:44 min [95%CI: 3:14-3:34] vs. 3:30 ± 0:47 min [95%CI: 3:19-3:41]). Although EIF altered lactate production and RPE scores, no other effect was noticeable on RE, EMG, or SF pre- and post-EIF, except for the expected speed effect. The significant footwear effect on running speed during EIF was unforeseen but could be due to differences in shoe mass and/or heel-to-toe drop. We also cannot rule out an effect of speed on foot-strike pattern and, therefore, on running performance.
Keywords: exercise-induced fatigue, interval training, minimalist footwear, running economy
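Running economy expressed as an oxygen cost of transport is conventionally the steady-state VO₂ divided by running speed. A minimal sketch of that calculation with illustrative numbers, not the study's data (with these units the result is on the order of hundreds of ml O₂ kg⁻¹ km⁻¹):

```python
def running_economy(vo2_ml_kg_min, speed_m_s):
    """Oxygen cost of transport: VO2 (ml/kg/min) divided by speed
    converted to km/min gives ml O2 per kg per km."""
    speed_km_min = speed_m_s * 60 / 1000
    return vo2_ml_kg_min / speed_km_min

# Illustrative: a runner consuming ~40 ml/kg/min at 3.33 m/s.
re_cost = running_economy(40.0, 3.33)
print(round(re_cost, 1))
```

Lower values at a given speed indicate a more economical runner, which is why RE comparisons across footwear conditions are made at matched treadmill speeds.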
Procedia PDF Downloads 248
1472 Translation and Validation of the Pain Resilience Scale in a French Population Suffering from Chronic Pain
Authors: Angeliki Gkiouzeli, Christine Rotonda, Elise Eby, Claire Touchet, Marie-Jo Brennstuhl, Cyril Tarquinio
Abstract:
Resilience is a psychological concept of possible relevance to the development and maintenance of chronic pain (CP). It refers to the ability of individuals to maintain reasonably healthy levels of physical and psychological functioning when exposed to an isolated and potentially highly disruptive event. Extensive research in recent years has supported the importance of this concept in the CP literature: increased levels of resilience were associated with lower levels of perceived pain intensity and better mental health outcomes in adults with persistent pain. The ongoing project seeks to introduce the concept of pain-specific resilience into the French literature in order to provide more appropriate measures for assessing and understanding the complexities of CP in the near future. To the best of our knowledge, there is currently no validated version of the pain-specific resilience measure, the Pain Resilience Scale (PRS), for French-speaking populations. The present work therefore aims to address this gap, first by performing a linguistic and cultural translation of the scale into French and second by studying the internal validity and reliability of the PRS in French CP populations. The forward-backward translation methodology was used to achieve as faithful a cultural and linguistic translation as possible, following the recommendations of the COSMIN (Consensus-based Standards for the selection of health Measurement Instruments) group, and an online survey is currently being conducted among a representative sample of the French population suffering from CP. To date, the survey has involved one hundred respondents, with a total target of around three hundred participants at its completion. We further seek to study the metric properties of the French version of the PRS, 'L'Echelle de Résilience à la Douleur spécifique pour les Douleurs Chroniques' (ERD-DC), in French patients suffering from CP, assessing the level of pain resilience in the context of CP.
Finally, we will explore the relationship between the level of pain resilience in the context of CP and other variables of interest commonly assessed in pain research and treatment (i.e., general resilience, self-efficacy, pain catastrophising, and quality of life). This study will provide an overview of the methodology used to address our research objectives. We will also present the main findings for the first time and discuss the validity of the scale in the field of CP research and pain management. We hope that this tool will provide a better understanding of how CP-specific resilience processes can influence the development and maintenance of this disease. This could ultimately result in better treatment strategies specifically tailored to individual needs, leading to reduced healthcare costs and improved patient well-being.
Keywords: chronic pain, pain measure, pain resilience, questionnaire adaptation
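Reliability studies of translated scales of this kind commonly report Cronbach's alpha for internal consistency. A minimal sketch of that computation on toy Likert-type data (the numbers are invented for illustration, not ERD-DC responses):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of total scores), using population variances.
    rows: one list of k item scores per respondent."""
    k = len(rows[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents x 3 items.
data = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2]]
print(round(cronbach_alpha(data), 2))  # 0.94
```

Values above roughly 0.7-0.8 are usually taken as acceptable internal consistency, though the threshold depends on the intended use of the scale.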
Procedia PDF Downloads 90
1471 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has received a lot of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: as a single course over a single semester or multiple courses over multiple semesters, as a targeted demonstration of skills or a synthesis of prior knowledge and skills, mentored by a single person or by various people, and submitted as an assignment or presented in front of a panel. Because of their potential to enhance the educational experience, and as a mechanism for applying knowledge and demonstrating skills, a rigorous capstone can accelerate a graduate's potential in the workforce. By 2016, the capstone at the University of Alabama at Birmingham (UAB) was feeling the external pressures of a maturing clinical and nursing informatics discipline. While the program had had a capstone course for many years, it lacked the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing informatics field. Since the program is online, all capstones have always been completed in the online environment. While this modality did not change, other aspects of instruction did. Pre-2016, instruction was self-guided: students checked in with a single instructor, who monitored progress across all capstones toward a PowerPoint and written-paper deliverable. At the time, enrollment was small, and the field's maturation had not yet pushed hard enough.
By 2017, doubled enrollment and increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would gain and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a three-course sequence (meaning it lasted about 10 months instead of 14 weeks), the deliverables were divided into many chunks, and each faculty member advised a cadre of about five students through the capstone process. Literature suggests that chunking, breaking complex projects (e.g., a capstone completed in one summer) into smaller, more manageable pieces (e.g., capstone stages across three semesters), can increase and sustain learning while allowing for increased rigor. In this way, teaching responsibility was shared across faculty, with each semester's course taught by a different faculty member. This change facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty seemed like the right thing to do: it shared not only the load but also the students' success. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.
Keywords: capstones, clinical informatics, health informatics, informatics
Procedia PDF Downloads 133
1470 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk
Authors: Moses Jenkins
Abstract:
Much of the focus when improving energy efficiency in buildings falls on raising standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is also a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is in demonstrating that substantial thermal improvements can be made to historic buildings whilst avoiding damage to the building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made. This includes insulating mass masonry walls both internally and externally, warm- and cold-roof insulation, and improvements to floors. The second part of the paper will present the results of monitoring work which has taken place in these buildings after retrofit.
This will be expressed both in terms of thermal improvement, as a U-value as defined in BS EN ISO 7345:1987, and also, crucially, will present the results of moisture monitoring both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish whether there are any problems with interstitial condensation. The monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of, and within, the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of wall, roof, and floor insulation materials which are permeable to moisture vapor provides significant thermal improvement (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe.
Keywords: insulation, condensation, masonry, historic
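The surface-condensation part of the monitoring described above reduces to comparing the measured wall-surface temperature with the dew point of the adjacent air. As a hedged illustration only (the Magnus-formula coefficients and the example readings are textbook assumptions, not values from the Historic Environment Scotland programme), the check might be sketched as:

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Dew point via the Magnus approximation (common coefficients for -45..60 C)."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * gamma) / (a - gamma)

def condensation_risk(surface_temp_c: float, air_temp_c: float,
                      rh_percent: float) -> bool:
    """Surface condensation is expected when the wall surface is at or below
    the dew point of the adjacent air."""
    return surface_temp_c <= dew_point_c(air_temp_c, rh_percent)

# Hypothetical readings from a retrofitted wall: 20 C air at 65% RH
# has a dew point of roughly 13 C, so a 12 C surface is at risk.
print(condensation_risk(surface_temp_c=12.0, air_temp_c=20.0, rh_percent=65.0))
```

The same comparison, applied to hygrothermal gradients measured within the masonry rather than at its surface, is what flags interstitial condensation.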
Procedia PDF Downloads 173
1469 Influence of the Nature of Plants on Drainage, Purification Performance and Quality of Biosolids on Faecal Sludge Planted Drying Beds in Sub-Saharan Climate Conditions
Authors: El Hadji Mamadou Sonko, Mbaye Mbéguéré, Cheikh Diop, Linda Strande
Abstract:
In new approaches being developed for the treatment of sludge, the valorization of by-products is increasingly encouraged. In this perspective, Echinochloa pyramidalis has been successfully tested in Cameroon. Echinochloa pyramidalis is an efficient forage plant for the treatment of faecal sludge, providing high removal rates and biosolids of high agronomic value. Thus, before the use of this plant in planted drying beds in Senegal can be recommended, it deserves to be compared with the plants that have long been used in the field. That is the aim of this study, which examines the influence of the nature of the plants on drainage, purification performance, and the quality of the biosolids. Echinochloa pyramidalis, Typha australis, and Phragmites australis are the three macrophytes used in this study. The drainage properties of the beds were monitored through the frequency of clogging, the percentage of recovered leachate, and the dryness of the accumulated sludge. The development of the plants was followed through measurement of their density. The purification performances were evaluated from the incoming raw sludge flows and the leachate outflows for parameters such as Total Solids (TS), Total Suspended Solids (TSS), Total Volatile Solids (TVS), Chemical Oxygen Demand (COD), Total Kjeldahl Nitrogen (TKN), Ammonia (NH₄⁺), Nitrate (NO₃⁻), Total Phosphorus (TP), Orthophosphorus (PO₄³⁻) and Ascaris eggs. The quality of the biosolids accumulated on the beds was measured after 3 months of maturation for parameters such as dryness, C/N ratio, NH₄⁺/NO₃⁻ ratio, ammonia, and Ascaris eggs. The results have shown that the recovered leachate volume is about 40.4%, 45.6% and 47.3%; the dryness about 41.7%, 38.7% and 28.7%; and the clogging frequencies about 6.7%, 8.2% and 14.2% on average for the beds planted with Echinochloa pyramidalis, Typha australis and Phragmites australis respectively.
The plants of Echinochloa pyramidalis (198.6 plants/m²) and Phragmites australis (138 plants/m²) have higher densities than Typha australis (90.3 plants/m²). The nature of the plants has no influence on the purification performance, with reduction percentages around 80% or more for all the parameters followed, whatever the nature of the plants. However, the concentrations of these various pollutants in the leachate are above the limit values of the Senegalese standard NS 05-061 for release into the environment. The biosolids harvested after 3 months of maturation are all mature, with C/N ratios around 10 for all the macrophytes. The NH₄⁺/NO₃⁻ ratio is lower than 1, except for the biosolids originating from the Echinochloa pyramidalis beds. The ammonia content is also less than 0.4 g/kg, except for biosolids from the Typha australis beds. The biosolids are also rich in mineral elements. Their concentrations of Ascaris eggs are higher than the WHO recommendations, despite an inactivation percentage of around 80%; these biosolids must therefore be stored for additional time or composted. From these results, the use of Echinochloa pyramidalis as the main macrophyte can be recommended for planted drying beds in sub-Saharan climate conditions.
Keywords: faecal sludge, nature of plants, quality of biosolids, treatment performances
Procedia PDF Downloads 170
1468 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are widely utilized. The fibre reinforcement is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to distribute the load applied to the fibres as well as to protect the fibres from harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured will have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite will usually begin with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture which, if it becomes entrapped, will lead to voids in the subsequently cured polymer. Therefore, degassing is normally utilised after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage, and if a method of mixing could be found that degassed the resin mixture at the same time, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study, the effect of four different methods of mixing and degassing the pre-polymer with hardener and accelerator was investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe.
The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with the ImageJ analysis software, to study morphological changes, void content, and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the mechanical and thermal properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
Keywords: degassing, low frequency ultrasound, polymer composites, voids
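The ImageJ void-content analysis mentioned above amounts to thresholding a micrograph and computing the area fraction of dark (void) pixels. A minimal sketch of the same calculation, assuming an 8-bit greyscale image array and a hypothetical threshold value (neither taken from the paper):

```python
import numpy as np

def void_fraction(image: np.ndarray, threshold: int = 60) -> float:
    """Estimate void content as the area fraction of pixels darker than
    `threshold` in an 8-bit greyscale micrograph (0 = black, 255 = white)."""
    void_pixels = np.count_nonzero(image < threshold)
    return void_pixels / image.size

# Hypothetical 4x4 micrograph: four dark (void) pixels out of sixteen
sample = np.full((4, 4), 200, dtype=np.uint8)
sample[0, 0] = sample[0, 1] = sample[1, 0] = sample[1, 1] = 10
print(f"{void_fraction(sample):.2%}")  # → 25.00%
```

In practice the threshold would be chosen per image set (e.g. by Otsu's method), and the fraction averaged over several micrographs per sample.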
Procedia PDF Downloads 296
1467 Act Local, Think Global: Superior Institute of Engineering of Porto Campaign for a Sustainable Campus
Authors: R. F. Mesquita Brandão
Abstract:
Act Local, Think Global is the name of a campaign implemented at the Superior Institute of Engineering of Porto (ISEP), one of the schools of the Polytechnic of Porto, with the main objective of increasing the sustainability of the campus. ISEP has a campus of 52,000 m² and more than 7,000 students. The campaign started in 2019 and the results are very clear: in 2019 only 16% of the waste created on the campus was correctly separated for recycling, and now almost 50% of waste goes to the correct container. Actions to reduce energy consumption were implemented with significant results. One of the major problems on the campus is water leaks. To address this, a methodology was implemented for monitoring water consumption during the night, a period when consumption is normally low. If water consumption in that period is higher than a predetermined value, it may indicate a leak, and an alarm is raised to the maintenance teams. In terms of energy, measures were implemented to achieve savings in consumption and in equivalent CO₂ emissions. In order to reduce the use of plastics on the campus, the sale of 33 cl plastic water bottles was prohibited and, in collaboration with the students' association, all meals served in the restaurants replaced the plastic water bottle with a glass that can be refilled at the water dispensers. These measures cut the use of more than 75,000 plastic bottles per year. In parallel, the ISEP glass water bottle was introduced for use at all scientific meetings and events. As a way of involving the whole community in sustainability issues, a vertical garden based on an aquaponic system was developed and implemented. In 2019, the first vertical garden without soil was installed inside a large campus building. The system occupies the entire exterior façade (3 floors) of the entrance to ISEP's G building. On each of these floors there is a planter with 42 positions available for plants.
Lettuces, strawberries and peppers are examples of the vegetables produced, which can be collected by the entire community. A monitoring system was developed alongside the vertical garden, in which several system parameters are tracked. This project is still under development, as it will eventually run on stand-alone power, using photovoltaic panels to meet its energy needs. The whole system was, and still is, developed by students and teachers and is used in class projects in several ISEP courses. These and other measures implemented on the campus, together with all the results obtained, will be developed further in the full paper; they allowed ISEP to become the first Portuguese higher education institution to obtain the 'Coração Verde' (Green Heart) certification, awarded by LIPOR, a Portuguese company whose mission is to transform waste into new resources through the implementation of innovative and circular practices, generating and sharing value.
Keywords: aquaponics, energy efficiency, recycling, sustainability, waste separation
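The night-time leak-detection methodology described in this abstract reduces to comparing overnight consumption against an expected baseline. A minimal sketch (the time window, threshold, and reading format below are assumptions for illustration, not details from the ISEP campaign):

```python
from datetime import time

NIGHT_START, NIGHT_END = time(1, 0), time(5, 0)
NIGHT_THRESHOLD_LITRES = 50.0  # assumed baseline for a quiet campus night

def night_leak_alarm(readings: list[tuple[time, float]]) -> bool:
    """Raise an alarm when total consumption within the night window
    exceeds the expected baseline, which may indicate a leak.
    `readings` pairs a timestamp with the litres consumed in that hour."""
    night_total = sum(
        litres for t, litres in readings if NIGHT_START <= t <= NIGHT_END
    )
    return night_total > NIGHT_THRESHOLD_LITRES

# Hypothetical meter readings: 65 L overnight exceeds the 50 L baseline
readings = [(time(2, 0), 30.0), (time(3, 0), 35.0), (time(14, 0), 400.0)]
print(night_leak_alarm(readings))  # → True
```

The daytime reading is deliberately ignored: the point of the night window is that legitimate consumption there is near zero, so any sustained flow stands out.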
Procedia PDF Downloads 94
1466 Placement Characteristics of Major Stream Vehicular Traffic at Median Openings
Authors: Tathagatha Khan, Smruti Sourava Mohapatra
Abstract:
Median openings are provided in the raised medians of multilane roads to facilitate U-turn movements. The U-turn is a highly complex and risky maneuver, because the U-turning vehicle (minor stream) makes a 180° turn at the median opening and merges with the approaching through traffic (major stream). A U-turning vehicle requires a suitable gap in the major stream to merge, and during this process the possibility of merging conflict develops. These median openings are therefore potential hot spots of conflict and pose safety concerns. Traffic at median openings can be managed efficiently, and with enhanced safety, when the capacity of the facility has been estimated correctly. The capacity of U-turns at median openings is estimated by Harder's formula, which requires three basic parameters, namely critical gap, follow-up time, and conflicting flow rate. The estimation of the conflicting flow rate under mixed traffic conditions is greatly complicated by the absence of lane discipline and the discourteous behavior of drivers. Understanding the placement of major stream vehicles at median openings is therefore very important for estimating the conflicting traffic faced by the U-turning movement. Placement data of major stream vehicles were collected at different sections of 4-lane and 6-lane divided multilane roads. All the test sections were free from the effects of intersections, bus stops, parked vehicles, curvature, pedestrian movements, or any other side friction. For the purpose of analysis, all vehicles were divided into 6 categories: motorized two-wheeler (2W), autorickshaw (3W), small car, big car, light commercial vehicle, and heavy vehicle. For the collection of placement data, the entire road width was divided into sections of 25 cm each, numbered seriatim from the pavement edge (curbside) to the end of the road.
The placement of major stream vehicles crossing the reference line was recorded by videographic techniques on various weekdays. The collected data for each category of vehicle at all the test sections were converted into a frequency table with a class interval of 25 cm and plotted as a placement frequency curve. Separate distribution fittings were tried for 4-lane and 6-lane divided roads. The variation of placement characteristics with major stream traffic volume has also been explored. The findings of this study will be helpful in determining the conflict volume at median openings. The present work therefore holds significance for traffic planning, operation, and design to alleviate bottlenecks, the prospect of collision, and delay at median openings in general, and at median openings in developing countries in particular.
Keywords: median opening, U-turn, conflicting traffic, placement, mixed traffic
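The frequency-table construction described above is a simple binning exercise over the 25 cm road-width divisions. A minimal illustration (the placement values below are hypothetical observations, not data from the study):

```python
from collections import Counter

BIN_WIDTH_CM = 25  # class interval used for the placement frequency table

def placement_frequency(placements_cm: list[float]) -> dict[tuple[int, int], int]:
    """Group lateral placements (distance from the curbside pavement edge,
    in cm) into 25 cm classes and count observations per class."""
    counts = Counter(int(p // BIN_WIDTH_CM) for p in placements_cm)
    return {
        (b * BIN_WIDTH_CM, (b + 1) * BIN_WIDTH_CM): n
        for b, n in sorted(counts.items())
    }

# Hypothetical placements of small cars at one test section
obs = [30.0, 40.0, 55.0, 60.0, 110.0]
print(placement_frequency(obs))  # → {(25, 50): 2, (50, 75): 2, (100, 125): 1}
```

The resulting per-class counts, built separately for each vehicle category and test section, are what the placement frequency curves and distribution fittings operate on.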
Procedia PDF Downloads 138
1465 Bundling of Transport Flows: Adoption Barriers and Opportunities
Authors: Vandenbroucke Karel, Georges Annabel, Schuurman Dimitri
Abstract:
In the past years, bundling of transport flows, whether or not implemented in an intermodal process, has emerged as a promising concept in the logistics sector. Bundling of transport flows is a process where two or more shippers decide to synergize their shipped goods over a common transport lane. Promoted by the European Commission, several programs have been set up and have shown their benefits. Bundling promises shippers and logistics service providers economic, societal, and ecological benefits alike. By bundling transport flows and thus reducing truck (or other carrier) capacity, the problems of driver shortage, increased fuel prices, mileage charges, and restricted hours of service on the road are mitigated. In theory, the advantages of bundled transport exceed the drawbacks; in practice, however, adoption among shippers remains low. In fact, bundling is seen as a disruptive process in the rather traditional logistics sector. In this context, a Belgian company asked iMinds Living Labs to set up a Living Lab research project with the goal of investigating how the uptake of bundling of transport flows can be accelerated, and of checking whether an online data sharing platform can overcome the adoption barriers. The Living Lab research was conducted in 2016 and combined quantitative and qualitative end-user and market research. Concretely, extensive desk research was combined with insights from expert interviews with four consultants active in the Belgian logistics sector and in-depth interviews with logistics professionals working for shippers (N=10) and LSPs (N=3). In this article, we present findings which show that several factors slow down the uptake of bundling of transport flows. Shippers are hesitant to change how they currently work and to collaborate with other shippers. Moreover, several practical challenges impede shippers from working together.
We also identify further challenges that hold back the adoption of bundling of transport flows. First, there seems to be insufficient support from governmental and commercial organizations. Second, there is a chicken-and-egg problem: too few interested parties will lead to no, or very few, matching lanes, and shippers are therefore reluctant to partake in these projects because the benefits have not yet been proven. Third, the incentive is not big enough for shippers: road transport organized by the shipper individually is still seen as the easiest and cheapest solution. A solution to the abovementioned challenges might be found in the Belgian company's online data sharing platform. The added value of this platform is that it shows shippers possible matching lanes, without the shippers having to invest time in negotiating and networking with other shippers while running the risk of not finding a match. The interviewed shippers and experts indicated that the online data sharing platform is a very promising concept which could accelerate the uptake of bundling of transport flows.
Keywords: adoption barriers, bundling of transport, shippers, transport optimization
Procedia PDF Downloads 200
1464 Enhancing Institutional Roles and Managerial Instruments for Irrigation Modernization in Sudan: The Case of Gezira Scheme
Authors: Mohamed Ahmed Abdelmawla
Abstract:
To achieve the Millennium Development Goals (MDGs) linked to agriculture, i.e. poverty alleviation targets, the human resources involved in the agricultural sector, with special emphasis on irrigation, must receive a wealth of practical experience and training. Increased food production, including staple food, is needed to overcome present and future threats to food security. This should happen within a framework of sustainable management of natural resources, elimination of unsustainable methods of production, and poverty reduction (i.e., the axes of modernization). Sound management and accurate measurement are major requisites of the modernization process, and the key to modernization as a warranted goal is to pay close attention to management and measurement issues via capacity building. As such, this paper addresses discharge management and measurement by Field Outlet Pipes (FOPs) within the Gezira Scheme, where nine FOPs were randomly selected as representative locations. These FOPs extend along the Gezira Main Canal from the Kilo 57 area in the south up to Kilo 194 in the north. The following steps were followed during field data collection and measurement: for each selected FOP, a 90° V-notch thin-plate weir was placed in such a way that the water was directed to pass only through the notch. An optical survey level was used to measure the water head over the notch and at the FOP. Both the calculated discharge rates measured by the V-notch, denoted [Qc], and the adopted discharges given by the Ministry of Irrigation and Water Resources (MOIWR), denoted [Qa], are compared using the average of three replicated readings taken at each location. The study revealed that the FOP overestimates, and sometimes underestimates, the discharges.
This is attributed to the fact that the original design specifications are not met under present conditions, where water is allowed to flow day and night with high head fluctuation, bearing in mind that the FOP is a non-modular structure, i.e., the flow depends on both upstream and downstream levels, as confirmed by the results of this study. It is convenient and informative to quantify the discharge in FOPs with weirs or Parshall flumes. The cropping calendar should be clearly determined and agreed upon before the beginning of the season, in accordance and consistency with the Sudan Gezira Board (SGB) and the Ministry of Irrigation and Water Resources. As such, water indenting should be based on actual Crop Water Requirements (CWRs), not on rules of thumb (420 m³/feddan, irrespective of crop or time of season).
Keywords: management, measurement, MDGs, modernization
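The discharge through a 90° V-notch thin-plate weir is conventionally computed from the measured head with the triangular-weir equation Q = (8/15)·Cd·√(2g)·tan(θ/2)·H^(5/2). A minimal sketch of the [Qc] calculation (the discharge coefficient of 0.58 is a typical textbook value for a 90° notch, not one reported by the study):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def v_notch_discharge(head_m: float, notch_angle_deg: float = 90.0,
                      cd: float = 0.58) -> float:
    """Discharge (m^3/s) over a triangular thin-plate weir:
    Q = (8/15) * Cd * sqrt(2g) * tan(theta/2) * H^(5/2)."""
    theta = math.radians(notch_angle_deg)
    return (8.0 / 15.0) * cd * math.sqrt(2.0 * G) \
        * math.tan(theta / 2.0) * head_m ** 2.5

# Hypothetical head of 0.15 m over a 90-degree notch
q = v_notch_discharge(0.15)
print(f"{q * 1000:.1f} L/s")  # → 11.9 L/s
```

The H^(5/2) dependence is why accurate head measurement with a survey level matters: a small error in H is amplified two-and-a-half-fold in the computed discharge.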
Procedia PDF Downloads 251
1463 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thing that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is considered a critical element of success for supply chains and should be one of the most critical assets to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) 'We do not manage or store confidential/personally identifiable information (PII)', and (ii) reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception is dangerous, as it implies that the organization lacks proper data literacy: enterprise employees zero in on PII while neglecting trade secret theft and the complete breakdown of information sharing. The second misconception forges an ideology that reliance on third-party vendor security will absolve the company from security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken when protecting data, and that approach should not consist of simply purchasing a Data Loss Prevention (DLP) tool. A tool is not a solution.
To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security component of contracts with vendors to require data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification. Ensure processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.
Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
Procedia PDF Downloads 89
1462 'Naming, Blaming, Shaming': Sexual Assault Survivors' Perceptions of the Practice of Shaming
Authors: Anat Peleg, Hadar Dancig-Rosenberg
Abstract:
This interdisciplinary study, to our knowledge the first in this field, is located at the intersection of the victimology, law and society, and media literatures, and it corresponds both with feminist writing and with cyber literature exploring the techno-social sphere. It depicts the multifaceted dimensions of shaming in the eyes of survivors through the following research questions: What are the motivations of sexual-assault survivors to publicize the assailant's identity or to refrain from this practice? Is shaming on Facebook perceived by sexual-assault victims as a substitute for the criminal justice system (CJS) or as a new form of social activism? What positive and negative consequences do survivors experience as a result of shaming their assailants online? The study draws on in-depth semi-structured interviews which we conducted between 2016 and 2018 with 20 sexual-assault survivors who disclosed their experiences on Facebook. They had been sexually attacked in various forms: six participants reported that they had been raped as minors; eight women reported that they had been raped as adults; three reported that they had been victims of an indecent act; and three reported that they had been harassed either in their workplace or in the public sphere. Most of our interviewees (12) reported to the police and were involved in criminal procedures. More than half of the survivors (11) disclosed the identity of their attackers online. The vocabularies of motives that emerged from the thematic analysis of the interviews consist of both social and personal motivations for shaming online. Some survivors maintain that the use of shaming derives from the decline in public trust in the criminal justice system. It reflects a demand for accountability and justice, and also serves as a practice of warning other potential victims about the assailants.
Other survivors assert that shaming people in a position of privilege is meant to fulfil the public's right to know who these privileged men really are. However, these moral and practical justifications of the practice of shaming are often mitigated by fear of the attackers' physical or legal responses to the allegations. Some interviewees who are feminist activists argue that the practice of shaming perpetuates the ancient social tendency to define women by labels linking them to the men who attacked them, instead of by their own life complexities. The variety of motivations to adopt or resent the practice of shaming presented in our study appears to refute the prevailing intuitive stereotype that shaming is an irrational act of revenge, and denotes its rationality. The role of social media as an arena for seeking informal justice raises questions about the new power relations created between victims, assailants, the community, and the state, outside the formal criminal justice system. At the same time, the survivors' narratives also uncover the risks and pitfalls embedded within the online sphere for sexual assault survivors.
Keywords: criminal justice, gender, Facebook, sexual assaults
Procedia PDF Downloads 112