Search results for: muzzle flow fields


1481 The Representation of Young Sports Heroines in Cinema: Analysis of a Regressive Portrayal of Young Sportswomen on the Screen

Authors: David Sudre

Abstract:

Sport in cinema, like sport in society, has been mainly concerned with men and masculinity. Whether in the boxing ring, on the basketball playgrounds, or on the soccer fields, these films have mostly focused on the trials and tribulations of male athletes, for whom women have generally played secondary, often devalued and devaluing roles, such as that of the loving woman indispensable to the victorious athlete, that of the dangerous femme fatale, or that of the woman as a sexual object. For more than a century, this film genre has instead symbolized the dominant values of patriotism and heroism and, at the same time, contributed to building an ideal of hegemonic masculinity. With the exception of films such as The Grand National (1944) and Million Dollar Baby (2004), the most commercially successful films tell the story of men's adventures in sports. Today, thanks in part to the struggles of the feminist movement and subsequent societal advances, we are seeing an increase in the number of women in increasingly prominent roles in sports films. Indeed, there seems to be a general shift in popular cinema toward women playing major characters in big-budget productions that have also achieved critical and commercial success. However, if, at first sight, the increase in the number of roles given to women suggests an evolution and a more positive image of them on the screen, it remains to be seen how their representation is really characterized when they are young and occupy major roles in this type of film. In order to answer this question, we rely on the results of research conducted on a corpus of 28 sports films in which a young woman plays the main role in the story. All of these productions are fictional (not documentary), mostly American, and distributed by major film studios. The chosen sports teen movies are among the biggest commercial successes of the genre; they aim to make the maximum profit and occupy the most dominant positions within the "commercial pole" of the cinematic field. Although a change has taken place in recent decades in the number of main roles granted to sportswomen, this research allows us to decode the sociological subtext of these popular sports films for teenagers. The aim is to reveal how these sports films convey a conservative ideology that participates, on the one hand, in the maintenance of patriarchy and, on the other hand, in the dissemination of stereotyped, negative, and regressive images of young women athletes.

Keywords: cinema, sport, gender, youth, representations, inequality, stereotypes

Procedia PDF Downloads 69
1480 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative solution to the traditional Fiber Reinforced Polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also makes it possible to better meet the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as one of the most suitable methods for the simulation of TRCs. They aim to incorporate information pertaining to microscale constitutive behavior, mesoscale behavior, and macroscale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine scale information can be implicitly introduced in the macro scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine scale information is explicitly introduced in the macro scale model. Such approaches pertain to adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches. Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.

Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 109
1479 Measurements for Risk Analysis and Detecting Hazards by Active Wearables

Authors: Werner Grommes

Abstract:

Intelligent wearables (illuminated vests or hand and foot bands, smart watches with a laser diode, Bluetooth smart glasses) flood the market today. They are integrated with complex electronics and are worn very close to the body. Optical measurements and limitation of the maximum luminance are needed. Smart watches are equipped with a laser diode or control different body currents. Special glasses generate readable text information that is received via radio transmission. Small high-performance batteries (lithium-ion/polymer) supply the electronics. All these products have been tested and evaluated for risk. These products must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans or implant wearers. Extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks. The results of this study should serve as a suggestion to do better in the future or simply to point out these risks. Commercial LED warning vests, LED hand and foot bands, illuminated surfaces with inverters (high voltage), flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in the low-frequency as well as in the high-frequency range, audible noises, and irritating flashing frequencies were checked by measurements and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions such as overheating, overcharging, deep discharge, or use outside the temperature specification, so a risk analysis becomes necessary. The result of this study is that many smart wearables are worn very close to the body, and an extensive risk analysis becomes necessary. Wearers of active implants such as a pacemaker or implantable cardiac defibrillator must be considered. If the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed. A risk analysis is necessary.

Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electric luminescent, hand and head lamps, LED, e-light, safety batteries, light density, optical glare effects

Procedia PDF Downloads 110
1478 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In each of the legal proceedings in which expert evidence is carried out, a major concern is the assessment of the evidential value of expert reports. Judicial institutions, while making decisions, rely heavily on expert reports, because they usually do not possess the 'special knowledge' from certain fields of science that would allow them to verify the results presented in the proceedings. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually of a qualitative nature. They rely on the application of the expert's knowledge and experience and, in effect, leave a significant margin of discretion in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. The above-mentioned circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in a very low level of intersubjectivity of the measuring scales and analysis procedures that constitute elements of this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in the field of handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis. Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the found characteristics, and estimate the evidential value of the signature. The obtained results were compared with each other using Krippendorff's alpha statistic, which numerically determines the degree of agreement of the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement of the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that, during the analysis, the experts identified different signature characteristics and attributed different evidential value to them. In this scope, intersubjectivity turned out to be low. In addition, it turned out that experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences were noted in the language and applied nomenclature. On the other hand, experts attributed a similar evidential value to the entire signature (set of characteristics), which indicates that in this range they were relatively consistent.
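
A minimal illustrative sketch (not the study's analysis script) of Krippendorff's alpha for nominal ratings, the statistic used above to quantify agreement between experts; the rater data below are hypothetical.

```python
# Krippendorff's alpha for nominal data, computed from a coincidence matrix.
import itertools
from collections import Counter

def krippendorff_alpha_nominal(ratings):
    """ratings: one list per rater, one entry per unit, None for missing."""
    n_units = len(ratings[0])
    coincidences, totals, n_pairable = Counter(), Counter(), 0
    for u in range(n_units):
        values = [r[u] for r in ratings if r[u] is not None]
        m = len(values)
        if m < 2:                       # unit not pairable
            continue
        n_pairable += m
        for v in values:
            totals[v] += 1
        for a, b in itertools.permutations(values, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)
    # nominal metric: disagreement is 1 for unequal categories, 0 otherwise
    d_o = sum(c for (a, b), c in coincidences.items() if a != b) / n_pairable
    d_e = sum(totals[a] * totals[b] for a in totals for b in totals
              if a != b) / (n_pairable * (n_pairable - 1))
    return 1.0 - d_o / d_e

# two experts rating the evidential value of five characteristics (toy data)
expert_1 = ["high", "low", "medium", "high", None]
expert_2 = ["high", "medium", "medium", "low", "low"]
print(round(krippendorff_alpha_nominal([expert_1, expert_2]), 3))
```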

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 149
1477 Properties of Biodiesel Produced by Enzymatic Transesterification of Lipids Extracted from Microalgae in Supercritical Carbon Dioxide Medium

Authors: Hanifa Taher, Sulaiman Al-Zuhair, Ali H. Al-Marzouqi, Yousef Haik, Mohammed Farid

Abstract:

Biodiesel, as an alternative renewable fuel, has been receiving increasing attention due to the limited supply of fossil fuels and the increasing need for energy. Microalgae are a promising source of lipids, which can be converted to biodiesel. Biodiesel production from microalgae lipids using a lipase-catalyzed reaction in supercritical CO2 medium has several advantages over conventional production processes. However, identifying the optimum microalgae lipid extraction and transesterification conditions is still a challenge. In this study, the lipids extracted from Scenedesmus sp. and their enzymatic transesterification using supercritical carbon dioxide have been investigated. The effects of the extraction variables (temperature, pressure, and solvent flow rate) and the reaction variables (enzyme loading, incubation time, methanol to lipids molar ratio, and temperature) were considered. Process parameters and their effects were studied using a full factorial design, and Response Surface Methodology (RSM) was used to determine the optimum conditions for the extraction and reaction steps. For extraction, the optimum conditions were 53 °C and 500 bar, whereas for the reaction the optimum conditions were 35% enzyme loading, 4 h reaction time, 9:1 molar ratio, and 50 °C. At these optimum conditions, the highest biodiesel production yield was found to be 82%. The fuel properties of the biodiesel produced at the optimum reaction conditions were determined and compared to ASTM standards. The properties were found to comply with the limits and showed a low glycerol content, without any separation step.
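
A minimal sketch of the RSM idea described above (not the authors' worksheet): a second-order response surface is fitted to design points by least squares and the predicted optimum is located on a grid. The design points and yields below are hypothetical.

```python
import numpy as np

# hypothetical design points: extraction temperature [°C], pressure [bar] -> yield [%]
T = np.array([35, 35, 53, 53, 44, 44, 44, 60, 30], dtype=float)
P = np.array([300, 500, 300, 500, 400, 400, 400, 400, 400], dtype=float)
Y = np.array([55, 68, 61, 82, 70, 71, 69, 74, 58], dtype=float)

# design matrix for y = b0 + b1*T + b2*P + b3*T^2 + b4*P^2 + b5*T*P
X = np.column_stack([np.ones_like(T), T, P, T**2, P**2, T * P])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

# evaluate the fitted surface on a grid and report the predicted optimum
Tg, Pg = np.meshgrid(np.linspace(30, 60, 61), np.linspace(300, 500, 41))
Xg = np.column_stack([np.ones(Tg.size), Tg.ravel(), Pg.ravel(),
                      Tg.ravel()**2, Pg.ravel()**2, (Tg * Pg).ravel()])
Yg = Xg @ coef
i = int(np.argmax(Yg))
print(f"predicted optimum: T = {Tg.ravel()[i]:.0f} °C, P = {Pg.ravel()[i]:.0f} bar, "
      f"yield = {Yg[i]:.1f} %")
```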

Keywords: biodiesel, lipase, supercritical CO2, standards

Procedia PDF Downloads 492
1476 Potential Applications of Biosurfactants from Corn Steep Liquor in Cosmetic

Authors: J. M. Cruz, X. Vecıno, L. Rodrıguez-López, J. M. Dominguez, A. B. Moldes

Abstract:

The cosmetic and personal care industries are the fields where biosurfactants could have the greatest chance of success, because in this kind of product the replacement of synthetic detergents by natural surfactants provides additional added value to the product, while at the same time the harmful effects produced by some synthetic surfactants can be avoided or reduced. Nowadays, therefore, consumers are willing to pay an additional cost if they obtain more natural products. In this work, we provide data about the potential of biosurfactants in the cosmetic and personal care industry. Biosurfactants from corn steep liquor, which is a fermented and condensed stream, have shown good surface-active properties, substantially reducing the surface tension of water. The bacteria that usually grow in corn steep liquor comprise Lactobacillus species, which are generally recognized as safe. The biosurfactant extracted from CSL is a lipopeptide, composed of fatty acids, which can reduce the surface tension of water by more than 30 units. It is a yellow and viscous liquid with a density of 1.053 mg/mL and pH 4. Owing to these properties, it could be included in the formulation of cosmetic creams, hair conditioners, or shampoos. Moreover, this biosurfactant extracted from corn steep liquor has shown a potent antimicrobial effect on different strains of Streptococcus. Some species of Streptococcus are commonly found living in the human respiratory and genitourinary systems, producing several diseases in humans, including skin diseases. For instance, Streptococcus pyogenes produces many toxins and enzymes that help to establish skin infections; biosurfactants from corn steep liquor can probably inhibit the mechanisms of the S. pyogenes enzymes. S. pyogenes is an important cause of pharyngitis, impetigo, cellulitis, and necrotizing fasciitis. In this work, it was observed that 50 mg/L of the biosurfactant extract obtained from corn steep liquor is able to inhibit the growth of S. pyogenes by more than 50%. Thus, cosmetic and personal care products formulated with biosurfactants from corn steep liquor could have prebiotic properties. The natural biosurfactant presented in this work, obtained from corn milling industry streams, has shown high potential to provide an interesting and sustainable alternative to the chemically synthesized antibacterial and surfactant ingredients used in cosmetic and personal care manufacturing, which can cause irritation and often show only short-term effects.

Keywords: antimicrobial activity, biosurfactants, cosmetic, personal care

Procedia PDF Downloads 257
1475 Nanoparticle Emission Characteristics during Methane Pyrolysis in a Laminar Premixed Flame

Authors: Mohammad Javad Afroughi, Farjad Falahati, Larry W. Kostiuk, Jason S. Olfert

Abstract:

This study investigates the physical characteristics of nanoparticles generated during pyrolysis of methane in the hot products of a premixed propane-air flame. An inverted burner is designed to provide a laminar premixed propane-air flame (35 SLPM) and then introduce a methane co-flow to be pyrolyzed within a closed cylindrical chamber (20 cm in diameter and 68 cm in length). The formed products are discharged through an exhaust with a sampling branch to measure emission characteristics. Carbon particles are sampled with a preheated nitrogen dilution system, and the size distribution of particles formed by pyrolysis is measured by a scanning mobility particle sizer (SMPS). The dilution ratio is calculated from simultaneously measured CO2 concentrations in the exhaust products and the diluted samples. Results show that the particle size distribution (PSD) is strongly affected by the dilution ratio and the preheating temperature. The PSD becomes unstable at high dilution ratios (typically above 700 times) and/or low preheating temperatures (below 40 °C). At a suitable dilution ratio of 55 and a preheating temperature up to 70 °C, the median diameter of the PSD increases from 20 to 220 nm following the introduction of 0.5 SLPM of methane to the propane-air premixed flame. Furthermore, with pyrolysis of methane, the total particle number concentration and the estimated total mass concentration of particles in the size range of 14 to 700 nm increase from 1.12 × 10⁷ to 3.90 × 10⁷ cm⁻³ and from 0.11 to 154 µg L⁻¹, respectively.
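
A minimal sketch (assumed relation, not the authors' code) of the CO2-based dilution-ratio calculation mentioned above, and of how a diluted number concentration is referred back to the undiluted exhaust. All concentrations are hypothetical.

```python
def dilution_ratio(co2_raw_ppm, co2_diluted_ppm, co2_background_ppm=400.0):
    """DR = (CO2_raw - CO2_bg) / (CO2_diluted - CO2_bg), background-corrected."""
    return (co2_raw_ppm - co2_background_ppm) / (co2_diluted_ppm - co2_background_ppm)

dr = dilution_ratio(co2_raw_ppm=80000.0, co2_diluted_ppm=1850.0)  # hypothetical readings
n_measured = 7.1e5              # particles per cm^3 measured after dilution (hypothetical)
n_exhaust = n_measured * dr     # concentration referred back to the undiluted exhaust
print(f"dilution ratio = {dr:.0f}, exhaust concentration = {n_exhaust:.2e} cm^-3")
```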

Keywords: laminar premixed flame, methane pyrolysis, nanoparticle physical characteristics, particle mass concentration, particle number concentration, particle size distribution (PSD)

Procedia PDF Downloads 240
1474 An Empirical Study of the Effect of Robot Programming Education on the Computational Thinking of Young Children: The Role of Flowcharts

Authors: Wei Sun, Yan Dong

Abstract:

There is an increasing interest in introducing computational thinking at an early age. Computational thinking, like mathematical thinking, engineering thinking, and scientific thinking, is a kind of analytical thinking. Learning computational thinking skills not only improves technological literacy but also equips learners with practical skills such as problem-solving skills. As people realize the importance of computational thinking, the field of educational technology faces a problem: how to choose appropriate tools and activities to help students develop computational thinking skills. Robots are gradually becoming a popular teaching tool, as robots provide a tangible way for young children to access technology, and controlling a robot through programming offers them opportunities to engage in developing computational thinking. This study explores whether the introduction of flowcharts into robotics programming courses can help children convert natural language into a programming language more easily and thus better cultivate their computational thinking skills. An experimental study was conducted with a sample of children aged six to seven (N = 16), and a one-meter-tall humanoid robot was used as the teaching tool. Results show that children can master basic programming concepts through robotics courses, and their computational thinking improved significantly. Besides, results suggest that flowcharts do have an impact on young children's computational thinking skills development, but only a significant effect on the "sequencing" and "correspondence" skills. Overall, the study demonstrates that the humanoid robot and flowcharts have qualities that foster young children's learning of programming and development of computational thinking skills.

Keywords: robotics, computational thinking, programming, young children, flow chart

Procedia PDF Downloads 147
1473 Structural Equation Modeling Exploration for the Multiple College Admission Criteria in Taiwan

Authors: Tzu-Ling Hsieh

Abstract:

When the Taiwan Ministry of Education implemented a new university multiple entrance policy in 2002, most colleges and universities still used test scores as their main admission criteria. With the forthcoming 12-year basic education curriculum, the Ministry of Education has introduced a new college admission policy, to be implemented in 2021. The new college admission policy highlights the importance of holistic education by placing more emphasis on the learning process in senior high school rather than only on the outcome of academic testing. However, the development of college admission criteria has not followed a thoughtful process, and universities and colleges lack guidance on how to construct suitable multiple admission criteria. Although there are many studies from other countries that have implemented multiple college admission criteria for years, these studies cannot represent Taiwanese students; they are also limited by the lack of comparison between two different academic fields. Therefore, this study investigated multiple admission criteria and their relationship with college success. The study analyzed the Taiwan Higher Education Database, with 12,747 samples from 156 universities, and tested a conceptual framework that examines these factors by structural equation modeling (SEM). The conceptual framework was adapted from Pascarella's general causal model and focuses on how different admission criteria predict students' college success. It examines the relationship between admission criteria and college success, as well as how motivation (one of the admission criteria) influences college success through the engagement behaviors of student effort and interactions with agents of socialization. After handling missing values and performing reliability and validity analyses, the study found that three indicators significantly predict students' college success, defined as the average grade of the last semester. These three indicators are the Chinese language score on the college entrance exam, high school class rank, and the quality of student academic engagement. In addition, motivation significantly predicts the quality of student academic engagement and interactions with agents of socialization. However, the multi-group SEM analysis showed that there is no difference in predicting college success between students from the liberal arts and the sciences. Finally, this study provides some suggestions for universities and colleges to develop multiple admission criteria, based on this empirical research on Taiwanese higher education students.
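
A simplified stand-in for the structural paths described above (ordinary regressions rather than a full SEM, and synthetic data rather than the Taiwan Higher Education Database); all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in data; the real study used 12,747 records
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "chinese_score": rng.normal(0, 1, n),
    "class_rank": rng.normal(0, 1, n),
    "motivation": rng.normal(0, 1, n),
})
df["engagement"] = 0.5 * df["motivation"] + rng.normal(0, 1, n)
df["gpa"] = (0.3 * df["chinese_score"] + 0.2 * df["class_rank"]
             + 0.4 * df["engagement"] + rng.normal(0, 1, n))

# path 1: motivation -> quality of academic engagement
m1 = smf.ols("engagement ~ motivation", data=df).fit()
# path 2: admission criteria and engagement -> college success (last-semester GPA)
m2 = smf.ols("gpa ~ chinese_score + class_rank + engagement", data=df).fit()
print(m1.params, m2.params, sep="\n")
```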

Keywords: college admission, admission criteria, structural equation modeling, higher education, education policy

Procedia PDF Downloads 179
1472 Spirometric Reference Values in 236,606 Healthy, Non-Smoking Chinese Aged 4–90 Years

Authors: Jiashu Shen

Abstract:

Objectives: Spirometry is a basic reference for health evaluation that is widely used in clinical practice. Previous spirometric reference values are no longer applicable because of drastic changes in social and natural circumstances in China, so new reference values for the spirometry of the Chinese population are greatly needed. Method: Spirometric reference values were established using the statistical modeling method Generalized Additive Models for Location, Scale and Shape (GAMLSS) for forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), FEV1/FVC, and maximal mid-expiratory flow (MMEF). Results: Data from 236,606 healthy non-smokers aged 4–90 years were collected from the MJ Health Check database. Spirometry equations for FEV1, FVC, MMEF, and FEV1/FVC were established, including the predicted values and lower limits of normal (LLNs) by sex. The predictive equations developed for the spirometric results elaborate the relationship between spirometry and age and eliminate the effects of height as a variable. Most previous predictive equations for Chinese spirometry significantly overestimated (with mean differences of 22.21% in FEV1 and 31.39% in FVC for males, along with differences of 26.93% in FEV1 and 35.76% in FVC for females) or underestimated (with mean differences of -5.81% in MMEF and -14.56% in FEV1/FVC for males, along with a difference of -14.54% in FEV1/FVC for females) the lung function measured in this study. Through cross-validation, our equations were shown to have good fit, and the means of the measured and estimated values were compared with good results. Conclusions: Our study updates the spirometric reference equations for Chinese people of all ages and provides comprehensive values for both physical examination and clinical diagnosis.
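
A minimal sketch of how GAMLSS-type reference equations are typically used (assumed form, not the study's fitted model): the age-specific L (skewness), M (median) and S (coefficient of variation) curves give the predicted value as M and the lower limit of normal (LLN) as the 5th centile. The LMS values below are hypothetical.

```python
import math

def lln_from_lms(L, M, S, centile_z=-1.645):
    """5th-centile LLN from LMS parameters (Box-Cox back-transformation)."""
    if abs(L) > 1e-6:
        return M * (1.0 + L * S * centile_z) ** (1.0 / L)
    return M * math.exp(S * centile_z)       # limiting case as L -> 0

# hypothetical LMS values for the FEV1 of a 40-year-old male
L, M, S = 0.9, 3.95, 0.13
print(f"predicted FEV1 = {M:.2f} L, LLN = {lln_from_lms(L, M, S):.2f} L")
```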

Keywords: Chinese, GAMLSS model, reference values, spirometry

Procedia PDF Downloads 136
1471 Incidental Findings in the Maxillofacial Region Detected on Cone Beam Computed Tomography

Authors: Zeena Dcosta, Junaid Ahmed, Ceena Denny, Nandita Shenoy

Abstract:

In the field of dentistry, there are many conditions which warrant three-dimensional imaging that can aid in diagnosis and therapeutic management. Cone beam computed tomography (CBCT) is considered highly accurate in producing a three-dimensional image of an object and provides a complete insight into the various findings in the captured volume. However, most clinicians focus primarily on the teeth and jaws, and numerous unanticipated, clinically significant incidental findings may be missed. The rapid integration of CBCT into the practice of dentistry has led to the detection of various incidental findings. However, the prevalence of these incidental findings is still unknown. Thus, the study aimed to discern the reason for referral and to identify incidental findings on the referred CBCT scans. Patients' demographic data such as age and gender were noted. CBCT scans of multiple fields of view (FOV) were considered. The referrals for CBCT scans were broadly classified into two major categories: diagnostic scans and treatment planning scans. Any finding in the CBCT volumes other than the area of concern was recorded as an incidental finding, which was noted under airway, developmental, pathological, endodontic, TMJ, bone, soft tissue calcifications, and others. Incidental findings noted under airway included deviated nasal septum, nasal turbinate hypertrophy, mucosal thickening, and pneumatization of the sinus. Developmental incidental findings included dilaceration, impaction, pulp stone, and gubernacular canal. Resorption of teeth and periapical pathologies were noted under pathological incidental findings. Root fracture along with over- and under-obturation was noted under endodontics. Incidental findings under TMJ were flattening, erosion, and bifid condyle. Enostosis and exostosis were noted under bone lesions. Tonsillolith, sialolith, and calcified stylohyoid ligament were noted under soft tissue calcifications. Incidental findings under others included foreign body, fused C1–C2 vertebrae, nutrient canals, and pneumatocyst. Maxillofacial radiologists should be aware of possible incidental findings and should be vigilant about comprehensively evaluating the entire captured volume, which can help in early diagnosis of potential pathologies that may otherwise go undetected. Interpretation of CBCT is truly an art, and with experience, we can unravel the secrets hidden in the grey shades of the radiographic image.

Keywords: cone beam computed tomography, incidental findings, maxillofacial region, radiologist

Procedia PDF Downloads 210
1470 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements of this approach are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid-state systems, a clear understanding of the basic microscopic mechanisms that trigger them is still a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that points to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions, and general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics) and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations. The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
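
A minimal illustrative sketch (not the authors' model): a generic memristive element in which a slow internal state x, relaxing with a characteristic time tau toward a voltage-dependent equilibrium, modulates the conductance and so produces memory under a periodic drive. All parameters are arbitrary.

```python
import numpy as np

tau, G0, dG = 0.5, 1.0, 2.0          # relaxation time [s], base and state-dependent conductance
f = 1.0                              # drive frequency [Hz], comparable to 1/tau
t = np.linspace(0.0, 3.0 / f, 3001)
dt = t[1] - t[0]
v = np.sin(2 * np.pi * f * t)        # periodic voltage drive

x = 0.0
conductance = np.zeros_like(t)
for k, vk in enumerate(v):
    x_eq = 1.0 / (1.0 + np.exp(-4.0 * vk))   # voltage-dependent equilibrium occupation
    x += dt * (x_eq - x) / tau               # slow relaxation toward equilibrium
    conductance[k] = G0 + dG * x             # conductance modulated by the internal state

current = conductance * v
# because x lags the drive, i(v) traces a pinched hysteresis loop:
# the conductance keeps swinging with the drive instead of settling to one value
last = t >= 2.0 / f
print(f"conductance swing over the last cycle: "
      f"{conductance[last].min():.2f} .. {conductance[last].max():.2f}")
```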

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 99
1469 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods

Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar

Abstract:

Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid, and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% at an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration, and inoculum size. The optimized medium increased the removal efficiency to 97% at an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused for six successive cycles over the studied time intervals. Metal removal efficiency was also not affected by changes in flow rate. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, and show that bioremediation can be done in batch cultures and semicontinuous cultures using column technology.
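
A minimal sketch of the Box-Behnken layout referred to above (assumed design generation, not the authors' worksheet): the standard 15-run design for three factors in coded −1/0/+1 levels, with three centre points; the mapping to actual concentrations is not reproduced here.

```python
from itertools import combinations

factors = ["yeast_extract", "casamino_acid", "inoculum_size"]

def box_behnken_3(n_center=3):
    """15-run Box-Behnken design for three factors in coded levels."""
    runs = []
    for i, j in combinations(range(3), 2):   # each pair of factors at +/-1
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0, 0, 0]
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0, 0, 0]] * n_center)      # centre points
    return runs

for run in box_behnken_3():
    print(dict(zip(factors, run)))
```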

Keywords: bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman

Procedia PDF Downloads 198
1468 Teachers' Design and Implementation of Collaborative Learning Tasks in Higher Education

Authors: Bing Xu, Kerry Lee, Jason M. Stephen

Abstract:

Collaborative learning (CL) has been regarded as a way to help students gain knowledge and improve social skills. In China, lecturers in higher education institutions have commonly adopted CL in their daily practice. However, such a strategy may not be effective when it is designed and applied in an inappropriate way. Previous research has hardly focused on how CL is applied in Chinese universities. The present study aims to gain a deep understanding of how Chinese lecturers design and implement CL tasks. The researchers interviewed ten lecturers from different faculties in various universities in China and used the Group Learning Activity Instructional Design (GLAID) framework to analyse the data. We found that not all lecturers pay enough attention to the eight essential components proposed by GLAID when they design CL tasks, especially the components of Structure and Guidance. Meanwhile, only a small proportion of lecturers used formative assessment to help students improve their learning. We also discuss the strengths and limitations of CL design and further provide suggestions to lecturers who intend to use CL in class. Research Objectives: The aims of the present research are threefold: 1) to gain a deep understanding of how Chinese lecturers design and implement collaborative learning (CL) tasks, 2) to find strengths and limitations of CL design in higher education, and 3) to give suggestions about how to improve the design and implementation. Research Methods: This research adopted qualitative methods. We applied the semi-structured interview method to interview ten Chinese lecturers about how they designed and implemented CL tasks in their courses. There were 9 questions in the interview protocol, focusing on the eight components of GLAID. Then, underpinned by the GLAID framework, we utilized the coding reliability thematic analysis method to analyse the research data. The coding was done by two PhD students whose research field is CL, and Cohen's kappa was 0.772, showing that the inter-coder reliability was good. Contribution: Though CL has been commonly adopted in China, few studies have paid attention to the details of how lecturers design and implement CL tasks in practice. This research addressed this gap and found that not all lecturers were aware of how to design CL, and that they found it difficult to structure the tasks, guide the students' collaboration, and further ensure student engagement in CL. In summary, this research advocates for teacher training; otherwise, students may not attain the expected learning outcomes.
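
A minimal illustrative sketch (not the authors' analysis script) of the inter-coder reliability check mentioned above, computed as Cohen's kappa with scikit-learn; the code labels below are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# hypothetical labels assigned by the two PhD-student coders to six interview segments
coder_1 = ["structure", "guidance", "assessment", "structure", "interaction", "guidance"]
coder_2 = ["structure", "guidance", "guidance",   "structure", "interaction", "guidance"]

print(f"Cohen's kappa = {cohen_kappa_score(coder_1, coder_2):.3f}")
```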

Keywords: collaborative learning, higher education, task design, GLAID framework

Procedia PDF Downloads 100
1467 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method

Authors: Arwa Alzughaibi

Abstract:

Human motion detection is a challenging task due to a number of factors including variable appearance, posture, and a wide range of illumination conditions and backgrounds. So, the first need of such a model is a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence even under difficult conditions. With richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models that are able to detect human motions accurately under varying illumination levels and backgrounds. Different sets of features are tried and tested, including Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Locally Decorrelated Channel Features (LDCF), and Aggregate Channel Features (ACF). We propose an efficient and reliable human motion detection approach by combining Histogram of Oriented Gradients (HOG) and Local Phase Quantization (LPQ) as the feature set, and implementing a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show the effectiveness of combining the local phase quantization descriptor and the histogram of oriented gradients, which perform well over a large range of illumination conditions and backgrounds compared with state-of-the-art human detectors. The Area under the ROC Curve (AUC) of the proposed method reached 0.781 for the UCF dataset and 0.826 for the CDW dataset, which indicates that it performs better than the HOG, DPM, LDCF, and ACF methods.
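
A minimal sketch of the HOG half of the feature set (assumed pipeline, not the authors' implementation) using scikit-image; the LPQ descriptor and the optical-flow pruning stage are not reproduced here, and the detection window is random placeholder data.

```python
import numpy as np
from skimage.feature import hog

window = np.random.rand(128, 64)                 # hypothetical 128x64 grayscale window
descriptor = hog(window,
                 orientations=9,
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2),
                 block_norm="L2-Hys")
print(descriptor.shape)                          # feature vector fed to the classifier
```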

Keywords: human motion detection, histograms of oriented gradients, local phase quantization

Procedia PDF Downloads 259
1466 Novel Point of Care Test for Rapid Diagnosis of COVID-19 Using Recombinant Nanobodies against SARS-CoV-2 Spike1 (S1) Protein

Authors: Manal Kamel, Sara Maher, Hanan El Baz, Faten Salah, Omar Sayyouh, Zeinab Demerdash

Abstract:

In the recent COVID-19 pandemic, public health experts have emphasized testing, tracking infected people, and tracing their contacts as an effective strategy to reduce the spread of the virus. The development of rapid and sensitive diagnostic assays to replace reverse transcription polymerase chain reaction (RT-PCR) is mandatory. Our innovative test strip relies on the application of nanoparticles conjugated to recombinant nanobodies against the SARS-CoV-2 spike protein (S1) and angiotensin-converting enzyme 2 (which is responsible for virus entry into host cells) for rapid detection of the SARS-CoV-2 spike protein (S1) in saliva or sputum specimens. Comparative tests with RT-PCR will be carried out to estimate the effect of using COVID-19 nanobodies, for the first time, in the development of a lateral flow test strip. SARS-CoV-2 S1 (3 ng of recombinant protein) was detected by our developed LFIA in saliva specimens of COVID-19 patients. No cross-reaction was detected with Middle East respiratory syndrome coronavirus (MERS-CoV) or SARS-CoV antigens. Our developed system showed 96% sensitivity and 100% specificity for saliva samples, compared to 89% sensitivity and 100% specificity for nasopharyngeal swabs, providing a reliable alternative to the painful and uncomfortable nasopharyngeal swab procedure and the complex, time-consuming PCR test. An increase in testing compliance is to be expected.
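
A minimal illustrative sketch of how the sensitivity and specificity figures above are obtained from a 2x2 contingency table against RT-PCR as the reference; the counts below are hypothetical, chosen only to reproduce the reported saliva figures.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=48, fn=2, tn=30, fp=0)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```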

Keywords: COVID 19, diagnosis, LFIA, nanobodies, ACE2

Procedia PDF Downloads 137
1465 Large Eddy Simulation with Energy-Conserving Schemes: Understanding Wind Farm Aerodynamics

Authors: Dhruv Mehta, Alexander van Zuijlen, Hester Bijl

Abstract:

Large Eddy Simulation (LES) numerically resolves the large energy-containing eddies of a turbulent flow, while modelling the small dissipative eddies. On a wind farm, these large scales carry the energy that wind turbines extract and are also responsible for transporting the turbines' wakes, which may interact with downstream turbines and certainly with the atmospheric boundary layer (ABL). In this situation, it is important to conserve the energy that these wakes carry, which could otherwise be altered artificially through numerical dissipation brought about by the schemes used for spatial discretisation and temporal integration. Numerical dissipation has been reported to cause the premature recovery of turbine wakes, leading to an over-prediction of the power produced by wind farms. An energy-conserving scheme is free from numerical dissipation and ensures that the energy of the wakes is increased or decreased only by the action of molecular viscosity or the action of wind turbines (body forces). The aim is to create an LES package with energy-conserving schemes to simulate wind turbine wakes correctly and to gain insight into power production, wake meandering, etc. Such knowledge will be useful in designing more efficient wind farms with minimal wake interaction, which, if unchecked, could lead to major losses in energy production per unit area of the wind farm. For their research, the authors intend to use the Energy-Conserving Navier-Stokes code developed by the Energy Research Centre of the Netherlands.
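
A minimal sketch of the underlying principle (illustrative only, not the Energy-Conserving Navier-Stokes code): on a periodic 1-D grid, the central-difference convection operator is skew-symmetric, so it neither creates nor destroys resolved kinetic energy, which is the discrete property an energy-conserving LES scheme relies on.

```python
import numpy as np

n, dx = 64, 1.0
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0 / (2 * dx)      # periodic central difference
    D[i, (i - 1) % n] = -1.0 / (2 * dx)

u = np.random.rand(n)
print("skew-symmetric:", np.allclose(D + D.T, 0.0))
print("energy production u^T D u:", u @ (D @ u))   # ~0 up to round-off
```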

Keywords: energy-conserving schemes, modelling turbulence, Large Eddy Simulation, atmospheric boundary layer

Procedia PDF Downloads 466
1464 Aerobic Exercise Increases Circulating Hematopoietic Stem Cells and Endothelial Progenitor Cells

Authors: Khaled A. shady, Fagr B. Bazeed, Nashwa K. Abousamra, Ihab H. Elberawe, Ashraf E. shaalan, Mohamed A. Sobh

Abstract:

Physical activity activates a variety of adult stem cells, which might be released into the circulation or might be activated in their organ-resident state. A variety of stimuli, such as metabolic, mechanical, and hormonal stimuli, might be responsible for this mobilization. This study was carried out to determine the changes in hematopoietic stem cells and endothelial progenitor cells in athletes in the 24 hours following 30 min of aerobic exercise. Methods: Ten healthy male athletes (age 20.7 ± 0.61 y) performed 30 min of moderate running at 80% of the IAT velocity. Blood samples taken pre-exercise and immediately, 30 min, 2 h, 6 h, and 24 h post-exercise were analyzed for hematopoietic stem cells (HSCs), endothelial progenitor cells (EPCs), vascular endothelial growth factor (VEGF), nitric oxide (NO), lactic acid (LA), and white blood cells (WBCs). HSCs and EPCs were quantified by flow cytometry. Results: After 30 min of aerobic exercise, significant increases in HSCs, EPCs, VEGF, NO, LA, and WBCs were observed (p < 0.05). The rate of increase differed according to the timing of the blood sample following the 30 min of aerobic exercise. HSCs, EPCs, NO, and WBCs showed their maximum rate of increase 2 h post-exercise. In addition, VEGF showed its maximum rate of increase immediately post-exercise, and LA concentration was not affected after exercise. Conclusion: These data suggest that HSCs and EPCs increase after aerobic exercise due to an increase in VEGF, which plays an important role in the mobilization of stem cells and promotes an increase in NO, which contributes to the increase in EPCs.

Keywords: physical activity, hematopoietic stem cells, mobilization, athletes

Procedia PDF Downloads 119
1463 Corrosion Risk Assessment/Risk Based Inspection (RBI)

Authors: Lutfi Abosrra, Alseddeq Alabaoub, Nuri Elhaloudi

Abstract:

Corrosion processes in the Oil & Gas industry can lead to failures that are usually costly to repair, costly in terms of loss of contaminated product and environmental damage, and possibly costly in terms of human safety. This article describes the results of the corrosion review and criticality assessment carried out at Mellitah Gas (SRU unit) for pressure equipment and piping systems. The information gathered through the review was intended for developing a qualitative RBI study. The corrosion criticality assessment was carried out by applying company procedures and industry recommended practices such as API 571, API 580/581, and ASME PCC-3, which provide guidelines for establishing corrosion integrity assessment. The corrosion review is intimately related to the probability of failure (POF). During the corrosion study, the process units were reviewed by following the applicable process flow diagrams (PFDs) in the presence of Mellitah's personnel from process engineering and inspection, together with corrosion/materials and reliability engineers. The expected corrosion damage mechanisms (internal and external) were identified, and the corrosion rate was estimated for every piece of equipment and corrosion loop in the process units. A combination of the consequence and the likelihood of failure was used to determine the corrosion risk. A qualitative consequence of failure (COF) was assigned to each individual item based on the characteristics of the fluid (flammability, toxicity, and pollution potential) at three levels (High, Medium, and Low). A qualitative probability of failure (POF) was applied to evaluate the internal and external degradation mechanisms, using a high-level point-based scale (0 to 10) for risk prioritization into Low, Medium, and High.
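
A minimal sketch of how such a qualitative risk rank can be combined from the two ratings described above (assumed mapping, not Mellitah's procedure); the equipment items and scores are hypothetical.

```python
def pof_level(score):
    """Map the 0-10 probability-of-failure score to Low/Medium/High."""
    if score <= 3:
        return "Low"
    return "Medium" if score <= 6 else "High"

def risk_rank(pof_score, cof_level):
    order = ["Low", "Medium", "High"]
    # simple qualitative matrix: risk is driven by the worse of the two ratings
    return order[max(order.index(pof_level(pof_score)), order.index(cof_level))]

for item, pof, cof in [("amine absorber", 8, "High"),
                       ("sour-water line", 4, "Medium"),
                       ("utility header", 2, "Low")]:
    print(item, "->", risk_rank(pof, cof))
```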

Keywords: corrosion, criticality assessment, RBI, POF, COF

Procedia PDF Downloads 82
1462 A Single Stage Rocket Using Solid Fuels in Conventional Propulsion Systems

Authors: John R Evans, Sook-Ying Ho, Rey Chin

Abstract:

This paper describes research investigations oriented toward starting and propelling a solid-fuel rocket engine that operates as a combined-cycle propulsion system using three thrust pulses. The vehicle has been designed to minimise the cost of launching a small number of nano/cube satellites into low earth orbit (LEO). One technology described in this paper is a ground-based launch propulsion system which starts the rocket's vertical motion immediately, causing air flow to enter the ramjet's intake. Current technology predicts that ramjet operation can start at a high subsonic speed of 280 m/s using a liquid-fuel ramjet (LFRJ). The combined-cycle engine configuration is in many ways fundamentally different from the LFRJ. A much lower subsonic start speed is highly desirable, since when a mortar is used to bring the rocket to this speed, a lower speed means a shorter launcher length can be utilized. This paper examines the means of achieving this and presents performance calculations, including computational fluid dynamics analysis of the air intake at suitable operational conditions and 3-DOF point-mass trajectory analysis of the multi-pulse propulsion system (where pulse ignition time and thrust magnitude can be controlled), for using a combined-cycle rocket engine in a single-stage vehicle.
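
A minimal sketch of the point-mass, multi-pulse trajectory idea mentioned above (a simplified planar version, not the authors' 3-DOF code): gravity, drag, and a programmable pulse schedule acting on a point mass, integrated with a simple explicit scheme. All vehicle numbers are hypothetical.

```python
import math

# (t_on [s], t_off [s], thrust [N]) for three pulses -- hypothetical schedule
pulses = [(0.0, 5.0, 12e3), (10.0, 18.0, 8e3), (25.0, 35.0, 8e3)]

def thrust(t):
    return next((f for t0, t1, f in pulses if t0 <= t < t1), 0.0)

m, mdot = 300.0, 1.5                        # initial mass [kg], propellant flow while burning [kg/s]
cd_a, rho, g = 0.015, 1.2, 9.81             # Cd*A [m^2], air density (held constant), gravity
gamma = math.radians(85.0)                  # near-vertical initial flight-path angle
vx, vz = math.cos(gamma), math.sin(gamma)   # 1 m/s initial speed along gamma
x = z = 0.0
t, dt = 0.0, 0.01

while z >= 0.0 and t < 300.0:
    v = math.hypot(vx, vz)
    f = thrust(t)
    drag = 0.5 * rho * cd_a * v * v
    ax = (f - drag) * vx / (v * m)          # thrust and drag act along the velocity vector
    az = (f - drag) * vz / (v * m) - g
    vx += ax * dt
    vz += az * dt
    x += vx * dt
    z += vz * dt
    if f > 0.0:
        m -= mdot * dt                      # mass decreases while a pulse is burning
    t += dt

print(f"flight time {t:.1f} s, downrange {x/1000:.2f} km, burnout mass {m:.1f} kg")
```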

Keywords: combined cycle propulsion system, low earth orbit launch vehicle, computational fluid dynamics analysis, 3-DOF trajectory analysis

Procedia PDF Downloads 191
1461 Stem Cell Augmentation Therapy for Cardiovascular Risk in Ankylosing Spondylitis: STATIN-as Study

Authors: Ashit Syngle, Nidhi Garg, Pawan Krishan

Abstract:

Objective: Bone marrow derived stem cells, endothelial progenitor cells (EPCs), protect against atherosclerotic vascular damage. However, EPCs are depleted in AS and contribute to the enhanced cardiovascular risk. Statins have a protective effect in CAD and diabetes by enhancing the proliferation, migration, and survival of EPCs. The therapeutic potential of augmenting EPCs to treat the heightened cardiovascular risk of AS has not yet been exploited. We aimed to investigate the effect of rosuvastatin on the EPC population and inflammation in AS. Methods: 30 AS patients were randomized to receive 6 months of treatment with rosuvastatin (10 mg/day, n=15) or placebo (n=15) as an adjunct to existing stable anti-rheumatic drugs. EPCs (CD34+/CD133+) were quantified by flow cytometry. Inflammatory measures (BASDAI, BASFI, CRP and ESR), pro-inflammatory cytokines (TNF-α, IL-6 and IL-1), and lipids were measured at baseline and after treatment. Results: At baseline, inflammatory measures and pro-inflammatory cytokines were elevated and EPCs were depleted in both groups. EPCs increased significantly (p < 0.01) after treatment with rosuvastatin. At 6 months, BASDAI, BASFI, ESR, CRP, TNF-α, and IL-6 improved significantly in the rosuvastatin group. A significant negative correlation was observed between EPCs and BASDAI, CRP, and IL-6 after rosuvastatin treatment. Conclusion: This is the first study to show that rosuvastatin augments the EPC population in AS. It defines a novel mechanism of rosuvastatin treatment in AS: the augmentation of EPCs with improvement in pro-inflammatory cytokines and inflammatory disease activity. The augmentation of EPCs by rosuvastatin may provide a novel strategy to prevent cardiovascular events in AS.

Keywords: ankylosing spondylitis, Endothelial Progenitor Cells, inflammation, pro-inflammatory cytokines, rosuvastatin

Procedia PDF Downloads 354
1460 Development of Technologies for the Treatment of Nutritional Problems in Primary Care

Authors: Marta Fernández Batalla, José María Santamaría García, Maria Lourdes Jiménez Rodríguez, Roberto Barchino Plata, Adriana Cercas Duque, Enrique Monsalvo San Macario

Abstract:

Background: Primary care nursing is taking on more autonomy in clinical decisions. One of the most frequent problems to be treated is related to maintaining a sufficient intake of food. Nursing diagnoses related to feeding are addressed by the family and community nurse as the professional primarily responsible. Objectives and interventions are set according to each patient. To improve goal setting and the treatment of these care problems, a technological tool was developed to help nurses. Objective: To evaluate the computational tool developed to support clinical decisions in feeding problems. Material and methods: A cross-sectional descriptive study was carried out at the Meco Health Center, Madrid, Spain. The study population consisted of four specialist nurses in primary care. These nurses tested the tool on 30 people with a 'need for nutritional therapy'. Subsequently, the usability of the tool and the satisfaction of the professionals were assessed. Results: A simple and convenient computational tool was designed. It has three main input fields: age, height, and sex. The tool returns the following information: BMI (Body Mass Index) and the calories consumed by the person. The next step is the calculation of caloric needs depending on activity. It is possible to propose a target BMI or weight to achieve; with this, the amount of calories to be consumed is proposed. After using the tool, it was determined that the tool calculated the BMI and calories correctly in 100% of clinical cases. Satisfaction with the nutritional assessment was 'satisfactory' or 'very satisfactory', linked to the speed of the calculations. As a point for improvement, 'stress factor' options linked to weekly physical activity were suggested. Conclusion: Based on the results, it is clear that computational decision-support tools are useful in the clinic. Nurses are not only consumers of computational tools but can develop their own tools. These technological solutions improve the effectiveness of nutrition assessment and intervention. We are currently working on improvements such as the calculation of protein percentages as a function of stress parameters.
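
A minimal sketch of the kind of calculation the tool performs (assumed formulas, not the tool's source code): BMI, plus a daily energy estimate using the Mifflin-St Jeor equation and an activity factor. Weight and the activity factor are additional assumed inputs not listed in the abstract; the patient values are hypothetical.

```python
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def daily_energy_kcal(weight_kg, height_cm, age_y, sex, activity_factor=1.4):
    """Mifflin-St Jeor resting energy expenditure scaled by an activity factor."""
    ree = 10 * weight_kg + 6.25 * height_cm - 5 * age_y + (5 if sex == "male" else -161)
    return ree * activity_factor

w, h, age, sex = 70.0, 1.70, 45, "female"        # hypothetical patient
print(f"BMI = {bmi(w, h):.1f} kg/m2")
print(f"estimated requirement = {daily_energy_kcal(w, h * 100, age, sex):.0f} kcal/day")
```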

Keywords: feeding behavior health, nutrition therapy, primary care nursing, technology assessment

Procedia PDF Downloads 228
1459 Molecular Engineering of High-Performance Nanofiltration Membranes from Intrinsically Microporous Poly (Ether-Ether-Ketone)

Authors: Mahmoud A. Abdulhamid

Abstract:

Poly(ether-ether-ketone) (PEEK) has received increased attention due to its outstanding performance in different membrane applications, including gas and liquid separation. However, it suffers from a semi-crystalline morphology, poor solubility, and low porosity. To fabricate membranes from PEEK, the use of harsh acids such as sulfuric acid is essential, regardless of their hazardous properties. In this work, we report the molecular design of poly(ether-ether-ketones) with intrinsic microporosity (iPEEKs), obtained by incorporating kinked units such as spirobisindane, Tröger's base, and triptycene into the PEEK backbone. The porous polymers were used to fabricate stable membranes for organic solvent nanofiltration applications. To better understand the mechanism, we conducted molecular dynamics simulations to evaluate the possible interactions between the polymers and the solvents. A notable enhancement in separation performance was observed, confirming the importance of molecular engineering of high-performance polymers. The iPEEKs demonstrated good solubility in polar aprotic solvents, a high surface area of 205–250 m² g⁻¹, and excellent thermal stability. Mechanically flexible nanofiltration membranes were prepared from N-methyl-2-pyrrolidone dope solutions at iPEEK concentrations of 19–35 wt%. The molecular weight cutoff of the membranes was fine-tuned in the range of 450–845 g mol⁻¹, displaying 2–6-fold higher permeance (3.57–11.09 L m⁻² h⁻¹ bar⁻¹) than previous reports. Long-term stability was demonstrated by 7 days of continuous cross-flow filtration.

Keywords: molecular engineering, polymer synthesis, membrane fabrication, liquid separation

Procedia PDF Downloads 96
1458 Development and Validation of High-Performance Liquid Chromatography Method for the Determination and Pharmacokinetic Study of Linagliptin in Rat Plasma

Authors: Hoda Mahgoub, Abeer Hanafy

Abstract:

Linagliptin (LNG) belongs to the dipeptidyl peptidase-4 (DPP-4) inhibitor class. DPP-4 inhibitors represent a new therapeutic approach for the treatment of type 2 diabetes in adults. The aim of this work was to develop and validate an accurate and reproducible HPLC method for the determination of LNG with high sensitivity in rat plasma. The method involved separation of both LNG and pindolol (internal standard) at ambient temperature on a Zorbax Eclipse XDB C18 column with a mobile phase composed of 75% methanol : 25% formic acid 0.1% (pH 4.1) at a flow rate of 1.0 mL.min-1. UV detection was performed at 254 nm. The method was validated in compliance with ICH guidelines and found to be linear in the range of 5–1000 ng.mL-1. The limit of quantification (LOQ) was found to be 5 ng.mL-1 based on 100 µL of plasma. The variations for intra- and inter-assay precision were less than 10%, and the accuracy values ranged between 93.3% and 102.5%. The extraction recovery (R%) was more than 83%. The method involved a single extraction step from a very small plasma volume (100 µL). The assay was successfully applied to an in-vivo pharmacokinetic study of LNG in rats that were administered a single oral dose of 10 mg.kg-1 LNG. The maximum concentration (Cmax) was found to be 927.5 ± 23.9 ng.mL-1. The area under the plasma concentration-time curve (AUC0-72) was 18285.02 ± 605.76 h.ng.mL-1. In conclusion, the good accuracy and low LOQ of the bioanalytical HPLC method make it suitable for monitoring the full pharmacokinetic profile of LNG in rats. The main advantages of the method are its sensitivity, small sample volume, single-step extraction procedure, and short time of analysis.
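
A minimal illustrative sketch (not the study's pharmacokinetic software) of how Cmax and AUC(0-72) are obtained from a concentration-time profile with the linear trapezoidal rule; the concentrations below are hypothetical.

```python
import numpy as np

t = np.array([0, 0.5, 1, 2, 4, 8, 24, 48, 72], dtype=float)           # h
c = np.array([0, 610, 927.5, 880, 760, 590, 310, 120, 45], dtype=float)  # ng/mL

cmax = c.max()
tmax = t[c.argmax()]
auc_0_72 = np.trapz(c, t)                                              # h.ng/mL

print(f"Cmax = {cmax:.1f} ng/mL at Tmax = {tmax:.1f} h, "
      f"AUC(0-72) = {auc_0_72:.0f} h.ng/mL")
```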

Keywords: HPLC, linagliptin, pharmacokinetic study, rat plasma

Procedia PDF Downloads 241
1457 A Three-Dimensional (3D) Numerical Study of Roofs Shape Impact on Air Quality in Urban Street Canyons with Tree Planting

Authors: Bouabdellah Abed, Mohamed Bouzit, Lakhdar Bouarbi

Abstract:

The objective of this study is to investigate numerically the effect of roof shape on wind flow and pollutant dispersion in a street canyon with one row of trees of pore volume Pvol = 96%. A three-dimensional computational fluid dynamics (CFD) model is used to evaluate air flow and pollutant dispersion within an urban street canyon, using the Reynolds-averaged Navier–Stokes (RANS) equations and the k-epsilon EARSM turbulence model as closure of the equation system. The numerical model is implemented with the ANSYS-CFX code. Vehicle emissions were simulated as double line sources along the street. The numerical model was validated against a wind tunnel experiment. Having established this, the wind flow and pollutant dispersion in urban street canyons with six roof shapes are simulated. The numerical simulation agrees reasonably with the wind tunnel data. The results obtained in this work indicate that the flow in the 3D domain is more complicated; this complexity is increased by the presence of trees and the variability of the roof shapes. The results also indicate that the largest pollutant concentration level on the two walls (leeward and windward) is observed with the upwind wedge-shaped roof, whereas the smallest pollutant concentration level is observed with the dome-shaped roof. The results further indicate that the corner eddies provide additional ventilation and lead to lower traffic pollutant concentrations at the street canyon ends.

Keywords: street canyon, pollutant dispersion, trees, building configuration, numerical simulation, k-Epsilon EARSM

Procedia PDF Downloads 366
1456 Performance Evaluation of a Small Microturbine Cogeneration Functional Model

Authors: Jeni A. Popescu, Sorin G. Tomescu, Valeriu A. Vilag

Abstract:

The paper focuses on potential methods of increasing the performance of a microturbine by combining additional elements available for utilization in a cogeneration plant. The activity is carried out within the framework of a project aiming to develop, manufacture, and test a microturbine functional model with high potential for utilization in the energy industry. The main goal of the analysis is to determine the parameters of the fluid flow passing through each section of the turbine, based on limited data available in the literature for the targeted output power range or provided by experimental studies, starting from a reference cycle and considering different cycle options, including simple, intercooled, and recuperated options, in order to optimize the operation of a small cogeneration plant. The studied configurations operate under the same initial thermodynamic conditions and are based on a series of assumptions in terms of individual component performance, pressure/velocity losses, compression ratios, and efficiencies. The thermodynamic analysis evaluates the expected performance of the microturbine cycle, while providing a series of input data and limitations to be included in the development of the experimental plan. To simplify the calculations and to allow a clear estimation of the effect of heat transfer between fluids, the working fluid for all the thermodynamic evolutions is, initially, air, with the combustion modelled by simple heat addition to the system. The theoretical results, along with preliminary experimental results, are presented, aiming for a correlation in terms of microturbine performance.
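
A minimal sketch of the kind of reference-cycle comparison described above (simplified ideal-gas assumptions with heat addition in place of combustion, not the project's model): the thermal efficiency of a simple versus a recuperated micro gas turbine cycle with air as the working fluid. All component efficiencies and temperatures are assumed values.

```python
cp, k = 1005.0, 1.4                    # J/(kg K), specific-heat ratio for air
T1, T3 = 288.0, 1200.0                 # compressor inlet and turbine inlet temperature [K]
pr = 4.0                               # pressure ratio
eta_c, eta_t, eps = 0.78, 0.82, 0.85   # compressor/turbine isentropic eff., recuperator effectiveness

T2s = T1 * pr ** ((k - 1) / k)
T2 = T1 + (T2s - T1) / eta_c           # temperature after compression
T4s = T3 / pr ** ((k - 1) / k)
T4 = T3 - eta_t * (T3 - T4s)           # temperature after expansion

w_net = cp * ((T3 - T4) - (T2 - T1))   # specific net work [J/kg]

q_simple = cp * (T3 - T2)              # heat added without a recuperator
T2r = T2 + eps * (T4 - T2)             # compressor exit air preheated by the exhaust
q_recup = cp * (T3 - T2r)

print(f"net work {w_net / 1000:.1f} kJ/kg")
print(f"efficiency: simple {w_net / q_simple:.1%}, recuperated {w_net / q_recup:.1%}")
```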

Keywords: cogeneration, microturbine, performance, thermodynamic analysis

Procedia PDF Downloads 171
1455 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable to the numerical optimization of real systems, since such techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications and under strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
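
A minimal sketch of the idea, under the assumption of a truncated series expansion fitted by linear least squares (the abstract does not disclose the exact formulation): the basis contains the single variables as well as their products, so that correlations such as x1*x2 can be identified directly from the data. All names and values below are illustrative.

import numpy as np
from itertools import combinations_with_replacement

def expand(X, degree=2):
    """Build a [1, x_i, x_i*x_j, ...] feature matrix up to the given degree."""
    n_samples, n_vars = X.shape
    cols = [np.ones(n_samples)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_vars), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))  # synthetic "sensor" inputs (illustrative)
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.standard_normal(500)

Phi = expand(X, degree=2)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # identified model coefficients
print(np.round(coef, 3))  # large entries reveal the x0 term and the x1*x2 product term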

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
1454 The Role of the Linguistic Mediator in Relation to Culturally Oriented Crimes

Authors: Andreas Aceranti, Simonetta Vernocchi, Elisabetta Aldrovandi, Marco Colorato, Carolina Ascrizzi

Abstract:

Nowadays, especially due to an increasing flow of migration and uncontrolled globalisation, linguistic, cultural and religious differences can be a major obstacle for people belonging to different ethnic groups. Each group has its own traditional background, which, in addition to its positive aspects, also includes extremely unpleasant and dramatic situations: culture-related crimes. We analysed several cases belonging to this category of crime, which is becoming more and more present in Europe and which creates a strong social rift, driven not only by misunderstanding between migrants and host populations but also by the isolation and ghettoisation of people classified as 'different'. Such social rejection represents a great source of stress and frustration for those who seek to be part of the community and can generate phenomena of rebellion that result in violent acts. Similar situations must be addressed by the cultural-linguistic mediator who, thanks to his or her multidisciplinary knowledge, assumes the role of a 'bridge', thus helping the process of awareness and understanding within the social group through the use of various tools, including awareness-raising campaigns and interventions in both the school and social-health sectors. Analysing, first, how the notions of culture and offence have evolved throughout history until they merged into a single principle and, second, how the linguistic mediator plays a fundamental role in the resolution of conflicts related to cultural diversity has helped us define the basis for new protocols for dealing with such crimes. In particular, it has helped us define the directions of the further investigations that we will carry out in the coming months.

Keywords: cultural crimes, hatred crimes, immigration, cultural mediation

Procedia PDF Downloads 79
1453 Simulation of Cure Kinetics and Process-Induced Stresses in Carbon Fibre Composite Laminate Manufactured by a Liquid Composite Molding Technique

Authors: Jayaraman Muniyappan, Bachchan Kr Mishra, Gautam Salkar, Swetha Manian Sridhar

Abstract:

Vacuum Assisted Resin Transfer Molding (VARTM), a cost-effective method of Liquid Composite Molding (LCM), is a single-step process in which the resin, at atmospheric pressure, is infused through a preform that is maintained under vacuum. This hydrodynamic pressure gradient is responsible for the flow of resin through the dry fabric preform. The current study uses a slight variation on traditional VARTM, wherein the resin infuses through fabric placed on a heated mold, which reduces the resin viscosity. The saturated preform is then subjected to a cure cycle during which the resin cures and hardens. During this cycle, an uneven temperature distribution through the thickness of the composite and the excess exothermic heat released due to differing cure rates result in non-uniform curing. Additionally, there is a difference in thermal expansion coefficient between fiber and resin in a given plane and between adjacent plies. All these effects, coupled with the orthotropic coefficient of thermal expansion of the composite, give rise to process-induced stresses in the laminate. Such stresses lead to part deformation when the laminate tries to relieve them as the part is released from the mold. The current study simulates resin infusion, cure kinetics and the structural response of the composite laminate subjected to process-induced stresses.
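
The abstract does not name the cure kinetics law used; a common choice in the literature is an autocatalytic (Kamal-type) model, dα/dt = (k1 + k2 α^m)(1 - α)^n with Arrhenius rate constants. The sketch below integrates such a model over an isothermal hold purely for illustration, with made-up resin parameters rather than the material system of the study.

import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def cure_rate(alpha, T, A1=1.0e5, E1=7.0e4, A2=5.0e5, E2=6.0e4, m=0.5, n=1.5):
    # Autocatalytic (Kamal-type) rate with Arrhenius constants; parameters illustrative.
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

def integrate_cure(T_hold, t_end=7200.0, dt=1.0):
    """Explicit Euler integration of the degree of cure at constant temperature."""
    alpha = 1.0e-3  # small seed so the autocatalytic term can start
    for _ in np.arange(0.0, t_end, dt):
        alpha = min(alpha + cure_rate(alpha, T_hold) * dt, 1.0)
    return alpha

for T_c in (80.0, 100.0, 120.0):
    print(f"degree of cure after 2 h at {T_c:.0f} C: {integrate_cure(T_c + 273.15):.2f}")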

Keywords: cure kinetics, process-induced stresses, thermal expansion coefficient, vacuum assisted resin transfer molding

Procedia PDF Downloads 240
1452 Temporal and Spatial Distribution Prediction of Patinopecten yessoensis Larvae in Northern China Yellow Sea

Authors: RuiJin Zhang, HengJiang Cai, JinSong Gui

Abstract:

It takes Patinopecten yessoensis larvae more than 20 days to go from spawning to settlement. Due to natural environmental factors such as currents, the larvae are transported over distances of hundreds of kilometers, leading to a highly unstable spatial and temporal distribution and great difficulties in natural spat collection. Predicting the distribution is therefore of great significance for improving the operating efficiency of the collection. A hydrodynamic model of the northern China Yellow Sea was established from the equations of motion of physical oceanography and verified against tidal harmonic constants and measured current velocities in Dalian Bay. Based on the passive-drift characteristics of the larvae, the hydrodynamic model was combined with a particle-tracking model to build a spatial and temporal distribution prediction model, and the distribution of the larvae under the influence of flow and wind was simulated. The model results support the following conclusions: ocean currents have the greatest impact on the passive drift path and diffusion of Patinopecten yessoensis larvae; the impact of wind is also important, as it changes the direction and speed of the drift. The larvae were generated in the sea along Zhangzi Island and Guanglu-Dachangshan Island, but after two months, under the influence of wind and currents, they appeared to the west of Dalian and to the south of Lvshun, and even in Bohai Bay. The model results are qualitatively consistent with the relevant literature, and this conclusion explains, from the perspective of numerical simulation, where the larvae come from.
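
A minimal sketch of a passive particle-tracking step of the kind described above, assuming advection by the current, a wind-drift term and a random walk for horizontal diffusion; the velocity fields, drift factor and diffusivity are illustrative placeholders, not output of the study's hydrodynamic model.

import numpy as np

def track(x0, y0, current, wind, days=60, dt=3600.0,
          wind_factor=0.02, Kh=10.0, seed=0):
    """Advance one larva for `days` days; positions in metres.
    current(x, y, t) and wind(t) return (u, v) velocities in m/s."""
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    for step in range(int(days * 86400 / dt)):
        t = step * dt
        uc, vc = current(x, y, t)
        uw, vw = wind(t)
        # advection + wind drift + random-walk step for horizontal diffusion
        x += (uc + wind_factor * uw) * dt + rng.normal(0.0, np.sqrt(2 * Kh * dt))
        y += (vc + wind_factor * vw) * dt + rng.normal(0.0, np.sqrt(2 * Kh * dt))
    return x, y

# Illustrative steady fields: a weak south-westward residual current and a
# northerly wind; a real run would interpolate the hydrodynamic model output.
x_f, y_f = track(0.0, 0.0,
                 current=lambda x, y, t: (-0.03, -0.02),
                 wind=lambda t: (0.0, -3.0))
print(f"eastward drift: {x_f / 1000:.0f} km, northward drift: {y_f / 1000:.0f} km")
# negative values indicate westward / southward displacement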

Keywords: numerical simulation, Patinopecten yessoensis larvae, predicting model, spatial and temporal distribution

Procedia PDF Downloads 305