Search results for: experimental correlation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10798

1768 Application of Response Surface Methodology to Assess the Impact of Aqueous and Particulate Phosphorus on Diazotrophic and Non-Diazotrophic Cyanobacteria Associated with Harmful Algal Blooms

Authors: Elizabeth Crafton, Donald Ott, Teresa Cutright

Abstract:

Harmful algal blooms (HABs), more notably cyanobacteria-dominated HABs, compromise water quality, jeopardize access to drinking water and are a risk to public health and safety. HABs are representative of ecosystem imbalance largely caused by environmental changes, such as eutrophication, that are associated with the globally expanding human population. Cyanobacteria-dominated HABs are anticipated to increase in frequency and magnitude, and are predicted to plague a larger geographical area as a result of climate change. Weather patterns are important, as storm-driven pulse inputs of nutrients have been correlated with cyanobacteria-dominated HABs. The mobilization of aqueous and particulate nutrients and the response of the phytoplankton community is an important relationship in this complex phenomenon. This relationship is most apparent in high-impact areas of adequate sunlight, temperatures > 20 °C, excessive nutrients, and quiescent water, conditions that correspond to ideal growth of HABs. Typically, the impact of particulate phosphorus is dismissed as an insignificant contribution, which holds for areas that are not considered high-impact. The objective of this study was to assess the impact of a simulated storm-driven pulse input of reactive phosphorus on the response of three different cyanobacteria assemblages (~5,000 cells/mL). The aqueous and particulate sources of phosphorus and changes in the HAB were tracked weekly for 4 weeks. The first cyanobacteria composition consisted of Planktothrix sp., Microcystis sp., Aphanizomenon sp., and Anabaena sp., with 70% of the total population being non-diazotrophic and 30% being diazotrophic. The second comprised Anabaena sp., Planktothrix sp., and Microcystis sp., with 87% diazotrophic and 13% non-diazotrophic. The third composition has yet to be determined as these experiments are ongoing. Preliminary results suggest that both aqueous and particulate sources are contributors of total reactive phosphorus in high-impact areas.
The results further highlight shifts in the cyanobacteria assemblage after the simulated pulse input. In the controls, the reactors dosed with aqueous reactive phosphorus maintained a constant concentration for the duration of the experiment, whereas the reactors that were dosed with aqueous reactive phosphorus and contained soil decreased from 1.73 mg/L to 0.25 mg/L of reactive phosphorus between time zero and 7 days; this was still higher than the blank (0.11 mg/L). This suggests binding of aqueous reactive phosphorus to sediment, which is further supported by the positive correlation observed between total reactive phosphorus concentration and turbidity. The experiments are nearly complete, and a full statistical analysis of the results will be performed prior to the conference.
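The positive correlation reported between total reactive phosphorus and turbidity can be sketched with a plain Pearson coefficient. The weekly readings below are illustrative values chosen to mimic the decline described in the abstract, not the study's data.

```python
# Hypothetical weekly readings: total reactive phosphorus (mg/L) and
# turbidity (NTU). Values are illustrative only, not the study's data.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

phosphorus = [1.73, 1.10, 0.62, 0.30, 0.25]   # declining after the pulse input
turbidity  = [48.0, 35.0, 22.0, 12.0, 9.5]    # settles as particulates bind

r = pearson_r(phosphorus, turbidity)
print(round(r, 3))
```

A value of r close to +1, as here, is what "positive correlation" between the two series means quantitatively.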

Keywords: Anabaena, cyanobacteria, harmful algal blooms, Microcystis, phosphorus, response surface methodology

Procedia PDF Downloads 162
1767 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials

Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié

Abstract:

Mould growth in living environments has been widely reported for decades all throughout the world. A higher level of moisture in housing can lead to building degradation and chemical component emissions from construction materials, as well as enhancing mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. Bio-based materials, however, are also hygroscopic: when in contact with the wet air of a surrounding environment, their porous structures enable a better capture of water molecules, thus providing a more suitable background for mould growth. Many studies have been conducted to develop reliable models able to predict mould appearance, growth, and decay over many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of considering the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth. Several studies published in the literature, including from the agri-food and automotive sectors, were selected and analysed, as the methodologies they deploy appeared promising.

Keywords: bio-based materials, mould growth, numerical prediction, reliability approach

Procedia PDF Downloads 44
1766 Effect of Naphtha in Addition to a Cycle Steam Stimulation Process Reducing the Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents in cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, at which time this ratio increases significantly. The mobility of the improved crude oil increases due to the structural changes of its components, which are in turn reflected in decreases in density and viscosity. In the present work, the effect of variables such as temperature, time, and weight percentage of naphtha was evaluated using a two-level factorial design of experiments (2³). From the results of the analysis of variance (ANOVA) and a Pareto diagram, it was possible to identify the effect on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8 °API and 3500 cP. The conditions of temperature, reaction time, and percentage of naphtha were 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity decrease was on the order of 55 to 70%. On the other hand, simulated distillation results, according to ASTM D7169, revealed significant conversions of the 315 °C+ fraction. From nuclear magnetic resonance (NMR), Fourier-transform infrared (FTIR), and ultraviolet-visible (UV-Vis) spectroscopy, it was determined that the increase in the yield of the light fractions in the improved crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool in improved recovery processes.
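A 2³ two-level design of the kind described above codes each factor (temperature, time, naphtha) as -1/+1 and estimates a main effect as the difference between the mean response at the high and low levels. The sketch below uses made-up viscosity-reduction responses, not the paper's data.

```python
# Sketch of a 2^3 two-level factorial design and main-effect estimation,
# with invented viscosity-reduction responses (%); not the paper's data.
from itertools import product

factors = ["temperature", "time", "naphtha"]
design = list(product([-1, 1], repeat=3))   # the 8 coded runs
# Hypothetical viscosity reduction (%) for each run, in the design order above:
response = [55, 58, 57, 61, 60, 64, 63, 70]

def main_effect(col):
    """Average response at +1 minus average response at -1 for one factor."""
    hi = [y for run, y in zip(design, response) if run[col] == 1]
    lo = [y for run, y in zip(design, response) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(i) for i, f in enumerate(factors)}
print(effects)
```

Ranking the absolute effects is exactly what a Pareto diagram of a factorial experiment visualizes; here temperature would dominate.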

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 172
1765 A Comparison between Five Indices of Overweight and Their Association with Myocardial Infarction and Death, 28-Year Follow-Up of 1000 Middle-Aged Swedish Employed Men

Authors: Lennart Dimberg, Lala Joulha Ian

Abstract:

Introduction: Overweight (BMI 25-30) and obesity (BMI 30+) have consistently been associated with cardiovascular (CV) risk and death since the Framingham Heart Study in 1948, and BMI was included in the original Framingham risk score (FRS). Background: Myocardial infarction (MI) poses a serious threat to the patient's life. In addition to BMI, several other indices of overweight have been presented and argued to replace FRS as more relevant measures of CV risk. These indices include waist circumference (WC), waist/hip ratio (WHR), sagittal abdominal diameter (SAD), and sagittal abdominal diameter to height ratio (SADHtR). Specific research question: The research question of this study is to evaluate the interrelationship between the various body measurements, BMI, WC, WHR, SAD, and SADHtR, and which measurement is most strongly associated with MI and death. Methods: In 1993, 1,000 middle-aged Caucasian, randomly selected working men of the Swedish Volvo-Renault cohort were surveyed at a nurse-led health examination with a questionnaire, EKG, laboratory tests, blood pressure, height, weight, waist, and sagittal abdominal diameter measurements. Outcome data on myocardial infarction over 28 years come from SWEDEHEART (the Swedish national myocardial infarction registry) and the Swedish death registry. The Aalen–Johansen and Kaplan–Meier methods were used to estimate the cumulative incidences of MI and death. Multiple logistic regression analyses were conducted to compare BMI with the other four body measurements. The risk for the various measures of obesity was calculated, with outcomes of accumulated first-time myocardial infarction and death, as odds ratios (OR) in quartiles. The ORs between the 4th and the 1st quartile of each measure were calculated to estimate the association between the body measurement variables and the probability of cumulative incidence of myocardial infarction (MI) over time. Double-sided P values below 0.05 were considered statistically significant.
Unadjusted odds ratios were calculated for the obesity indicators, MI, and death. Adjustments for age, diabetes, SBP, total cholesterol/HDL-C ratio, and blue-/white-collar status were performed. Results: Out of 1,000 people, 959 subjects had full information on the five different body measurements. Of those, 90 participants had a first MI, and 194 persons died. The study showed that there was a high and significant correlation between the five different body measurements, and they were all associated with CVD risk factors. All body measurements were significantly associated with MI, with the highest OR (3.6) seen for SADHtR and WC. After adjustment, all but SADHtR remained significant, with weaker ORs. As for all-cause mortality, WHR (OR=1.7), SAD (OR=1.9), and SADHtR (OR=1.6) were significantly associated, but not WC and BMI. However, after adjustment, only WHR and SAD were significantly associated with death, with attenuated ORs.
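The quartile-based odds ratio used above (4th vs 1st quartile of a body measurement) can be sketched from a 2×2 table. The counts below are invented for illustration and are not the cohort's numbers.

```python
# Minimal sketch of a Q4-vs-Q1 odds ratio, the contrast used to relate each
# body measurement to MI; the counts are illustrative, not the cohort's.
def odds_ratio(events_q4, total_q4, events_q1, total_q1):
    """OR = odds of the event in Q4 divided by odds of the event in Q1."""
    odds_q4 = events_q4 / (total_q4 - events_q4)
    odds_q1 = events_q1 / (total_q1 - events_q1)
    return odds_q4 / odds_q1

# e.g. 36 first MIs among 240 men in the top quartile vs 12 among 240
# in the bottom quartile:
print(round(odds_ratio(36, 240, 12, 240), 2))
```

An OR around 3 to 4, as in this toy example, matches the magnitude reported for SADHtR and WC before adjustment.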

Keywords: BMI, death, epidemiology, myocardial infarction, risk factor, sagittal abdominal diameter, sagittal abdominal diameter to height, waist circumference, waist-hip ratio

Procedia PDF Downloads 90
1764 Design of Hybrid Auxetic Metamaterials for Enhanced Energy Absorption under Compression

Authors: Ercan Karadogan, Fatih Usta

Abstract:

Auxetic materials have a negative Poisson’s ratio (NPR), which is not often found in nature. They are metamaterials that have potential applications in many engineering fields. Mechanical metamaterials are synthetically designed structures with unusual mechanical properties. These mechanical properties are dependent on the properties of the matrix structure. They have the following special characteristics: improved shear modulus, increased energy absorption, and enhanced fracture toughness. Non-auxetic materials compress transversely when they are stretched, as the system is naturally inclined to keep its density constant: the transversal compression increases the density to balance the loss in the longitudinal direction. This study proposes to improve the crushing performance of hybrid auxetic materials. The re-entrant honeycomb structure has been combined with a star honeycomb, an S-shaped unit cell, a double arrowhead, and a structurally hexagonal re-entrant honeycomb in a 9 × 9 cell arrangement, i.e., 9 cells in the lateral direction and 9 in the vertical direction. Finite element (FE) and experimental methods have been used to determine the compression behavior of the developed hybrid auxetic structures. The FE models have been developed using Abaqus software. Specimens made of polymer plastic materials have been 3D printed and subjected to compression loading. The results are compared in terms of specific energy absorption and strength. This paper describes the quasi-static crushing behavior of two types of hybrid lattice structures (auxetic + auxetic and auxetic + non-auxetic). The results show that the developed hybrid structures can be useful for controlling collapse mechanisms and present larger energy absorption compared to conventional re-entrant auxetic structures.
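Specific energy absorption (SEA), the comparison metric named above, is conventionally the area under the force-displacement curve divided by specimen mass. The sketch below assumes an invented crushing curve and mass; it is not data from the paper.

```python
# Sketch: specific energy absorption (SEA) from a force-displacement curve,
# as used to compare lattice structures; curve and mass are invented.
def absorbed_energy(disp_mm, force_kN):
    """Trapezoidal integral of force over displacement -> energy in joules."""
    e = 0.0
    for i in range(1, len(disp_mm)):
        dx = (disp_mm[i] - disp_mm[i - 1]) / 1000.0               # mm -> m
        e += 0.5 * (force_kN[i] + force_kN[i - 1]) * 1000.0 * dx  # kN -> N
    return e

disp = [0, 2, 4, 6, 8, 10]              # mm
force = [0.0, 1.2, 1.5, 1.4, 1.6, 2.0]  # kN: rise, plateau, densification
mass_kg = 0.050
sea = absorbed_energy(disp, force) / mass_kg  # J/kg
print(round(sea, 1))
```

Comparing SEA rather than raw energy is what makes the auxetic + auxetic and auxetic + non-auxetic hybrids comparable at different specimen weights.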

Keywords: auxetic materials, compressive behavior, metamaterials, negative Poisson’s ratio

Procedia PDF Downloads 94
1763 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction process of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study, the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality when compared to the cork provided by other Quercus trees. This study aims to optimize alkaline-catalysed liquefaction conditions with regard to several parameters. To better comprehend the chemical characteristics of the bark of Quercus cerris, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacketed reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents and potassium hydroxide as a catalyst, and varying the temperature, liquefaction time, and granulometry. Due to the low liquefaction efficiency of the first experimental procedures, a study was made of different washing techniques after the filtration process, using methanol and methanol/water. The chemical analysis showed that the bark of Quercus cerris is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). At the liquefaction stage, the conditions that led to higher yields were: using a mixture of glycerol/ethylene glycol as solvent and a time and temperature of 120 minutes and 200 °C, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even if this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which shows that this procedure is effective at liquefying the suberin content and lignocellulosic fraction.
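Liquefaction yield is commonly computed from the insoluble residue remaining after filtration and washing. The one-liner below is a generic sketch of that calculation with illustrative masses, not the study's figures.

```python
# Sketch of the usual liquefaction-yield calculation: the fraction of the
# starting bark that did not remain as insoluble residue. Masses are
# illustrative, not the study's data.
def liquefaction_yield_pct(initial_g, residue_g):
    """Percentage of the initial material converted to the liquid phase."""
    return 100.0 * (1.0 - residue_g / initial_g)

# e.g. 10.0 g of bark leaving 2.7 g of washed, dried residue:
print(liquefaction_yield_pct(10.0, 2.7))
```

This is why the washing technique matters: residue still coated with liquefied material inflates `residue_g` and understates the yield.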

Keywords: liquefaction, Quercus cerris, polyalcohol liquefaction, temperature

Procedia PDF Downloads 330
1762 Quantifying Wave Attenuation over an Eroding Marsh through Numerical Modeling

Authors: Donald G. Danmeier, Gian Marco Pizzo, Matthew Brennan

Abstract:

Although wetlands have been proposed as a green alternative for managing coastal flood hazards because of their capacity to adapt to sea level rise and their provision of multiple ecological and social co-benefits, they are often overlooked due to challenges in quantifying the uncertainty and natural variability of these systems. The objective of this study was to quantify the wave attenuation provided by a natural marsh, surrounding a large oil refinery along the US Gulf Coast, that has experienced steady erosion along the shoreward edge. The vegetation module of SWAN was activated and coupled with a hydrodynamic model (DELFT3D) to capture two-way interactions between the changing water level and wave field over the course of a storm event. Since the marsh response to relative sea level rise is difficult to predict, a range of future marsh morphologies is explored. Numerical results were examined to determine the amount of wave attenuation as a function of marsh extent and the relative contributions from white-capping, depth-limited wave breaking, bottom friction, and flexing of vegetation. In addition to the coupled DELFT3D-SWAN modeling of a storm event, an uncoupled SWAN-VEG model was applied to a simplified bathymetry to explore a larger experimental design space. The wave modeling revealed that the rate of wave attenuation is reduced at higher surge levels but was still significant over a wide range of water levels and outboard wave heights. The results also provide insights into the minimum marsh extent required to fully realize the potential wave attenuation, so that the changing coastal hazards can be managed.
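A first-order picture of attenuation versus marsh extent, the relationship the modeling explores, is the common exponential decay of wave height through vegetation. The decay coefficient below is an assumed fit parameter for illustration, not an output of the SWAN/DELFT3D runs.

```python
# Sketch of extent-dependent wave attenuation over vegetation using the
# common exponential decay H(x) = H0 * exp(-kv * x). The coefficient kv is
# an assumed illustrative value, not a result from the study's models.
from math import exp

def attenuated_height(h0_m, kv_per_m, x_m):
    """Wave height (m) after propagating x metres through marsh vegetation."""
    return h0_m * exp(-kv_per_m * x_m)

h0 = 1.2    # outboard wave height (m)
kv = 0.005  # assumed decay coefficient (1/m); effectively smaller at high surge
for extent in (50, 100, 200):
    h = attenuated_height(h0, kv, extent)
    print(extent, round(100 * (1 - h / h0), 1))  # % attenuation vs marsh extent
```

The diminishing returns of this curve are one way to frame the "minimum marsh extent" question: beyond some width, extra marsh adds little further attenuation.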

Keywords: green infrastructure, wave attenuation, wave modeling, wetland

Procedia PDF Downloads 130
1761 Stabilization of Lateritic Soil Sample from Ijoko with Cement Kiln Dust and Lime

Authors: Akinbuluma Ayodeji Theophilus, Adewale Olutaiwo

Abstract:

When building roads and paved surfaces, a strong foundation is always essential. A durable material that can withstand years of traffic while staying trustworthy must be used to build the foundation. A frequent problem in the construction of roads and pavements is the lack of high-quality, long-lasting materials for the pavement structure (base, subbase, and subgrade). Hence, this study examined the stabilization of lateritic soil samples from Ijoko with cement kiln dust and lime. The study adopted an experimental design. Laboratory tests, including classification, swelling potential, compaction, California bearing ratio (CBR), and unconfined compressive strength, were conducted on the laterite sample treated with cement kiln dust (CKD) and lime in incremental steps of 2% up to 10% of the dry weight of the soft soil sample. The results of the tests showed that the studied soil could be classified as an A-7-6 and CL soil using the American Association of State Highway and Transportation Officials (AASHTO) classification and the Unified Soil Classification System (USCS), respectively. The plasticity index (PI) of the studied soil reduced from 30.5% to 29.9% on the application of CKD. The maximum dry density on the application of CKD reduced from 1.97 Mg/m³ to 1.86 Mg/m³, and lime application yielded a reduction from 1.97 Mg/m³ to 1.88 Mg/m³. The swell potential on CKD application was reduced from 0.05% to 0.039%. The study concluded that soil stabilization is an effective and economical way of improving road pavements for engineering benefit. The degree of effectiveness of stabilization in pavement construction was found to depend on the type of soil to be stabilized. The study therefore recommends that stabilized soil mixtures be used as subbase material for flexible pavements, for which they are suitable.

Keywords: lateritic soils, sand, cement, stabilization, road pavement

Procedia PDF Downloads 86
1760 The Surgical Trainee Perception of the Operating Room Educational Environment

Authors: Neal Rupani

Abstract:

Background: A surgical trainee has limited learning opportunities in the operating room in which to gain an ever-increasing standard of surgical skill, competency, and proficiency. These opportunities continue to decline due to numerous factors, such as the European Working Time Directive and the increasing requirement for service provision. It is therefore imperative to obtain the highest educational value from each educational opportunity. The Operating Room Educational Environment Measure (OREEM), a measure that has yet to be validated on surgical trainees in England, has been developed to identify and evaluate each component of the educational environment with a view to steering future change in optimising educational events in theatre. Aims: The aims of the study are to assess the reliability of the OREEM within England and to evaluate the surgical trainee’s perspective of the current operating room educational environment within one region of England. Methods: Using a quantitative study approach, data were collected over one month from surgical trainees within Health Education Thames Valley (Oxford) using an online questionnaire consisting of demographic data, the OREEM, and a global satisfaction score. Results: 140 surgical trainees were invited to the study, with an online response of 54 participants (response rate = 38.6%). The OREEM was shown to have good internal consistency (α = 0.906, variables = 40) and unidimensionality, along with all four of its subscales. The mean OREEM score was 79.16%. The areas highlighted for improvement predominantly focused on improving learning opportunities (average subscale score = 72.9%) and conducting pre- and post-operative teaching (average score = 70.4%). The trainee perception is most satisfactory for the level of supervision and workload (average subscale score = 82.87%).
There were no differences found by gender (U = 191.5, p = 0.535) or type of hospital (U = 258.0, p = 0.099), but the learning environment was rated more favourably by senior trainees (U = 223.5, p = 0.017). There was a strong correlation between the OREEM and the global satisfaction score (r = 0.755, p < 0.001). Conclusions: The OREEM was shown to be reliable in measuring the educational environment in the operating room. It can be used to identify potentially modifiable components for improvement and as an audit tool to ensure high standards are being met. The current perception of the educational environment in Health Education Thames Valley is satisfactory, and modifiable internal and external factors, such as reducing service provision requirements, empowering trainees to plan lists, creating a team-working ethic between all personnel, and using tools that maximise learning from each operation, have been identified to improve learning in the future. There is a favourable attitude towards the use of such improvement tools, especially among those currently dissatisfied.
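The internal-consistency statistic quoted above (α = 0.906) is Cronbach's alpha, computable from the item and total-score variances. The ratings below are fabricated 5-point responses for illustration, not OREEM data.

```python
# Sketch of Cronbach's alpha, the internal-consistency statistic reported
# for the OREEM; the item responses below are fabricated 5-point ratings.
def cronbach_alpha(items):
    """items: list of item columns, each a list of respondents' scores."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

items = [[4, 5, 3, 4, 5], [4, 4, 3, 5, 5], [5, 4, 2, 4, 5]]
print(round(cronbach_alpha(items), 3))
```

Values above roughly 0.9, as reported for the 40-item OREEM, are conventionally read as excellent internal consistency.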

Keywords: education environment, surgery, post-graduate education, OREEM

Procedia PDF Downloads 181
1759 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study

Authors: Yongrong Liu, Xin Tang

Abstract:

Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: The objective of the study is to determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the different groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The results of the one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. 
The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 11.11%; Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After performing 1:1 propensity score matching for the different groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that, compared to Group I, the hazard ratio for readmission in Group II was 1.372 (95% CI: 1.125-1.682, P<0.001), and in Group III it was 2.053 (95% CI: 1.006-5.437, P<0.001). Furthermore, binary logistic regression analysis, including variables such as digoxin, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease (COPD) as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (p<0.001). Conclusion: The heart rate control level of patients with persistent atrial fibrillation is positively correlated with the risk of heart failure rehospitalization; reasonable heart rate control may significantly reduce that risk.
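As a sanity check on the reported counts, the crude (unadjusted) Group III vs Group I contrast can be computed directly from the abstract's 31/432 and 90/317 readmissions. This is only the raw odds ratio, not the matched Cox or logistic estimates reported in the abstract.

```python
# Crude odds ratio for heart-failure readmission, Group III vs Group I,
# from the counts given in the abstract (31/432 and 90/317). This is the
# unadjusted contrast, distinct from the Cox/logistic results.
def crude_or(events_a, n_a, events_b, n_b):
    """Odds of the event in group A divided by odds in group B."""
    return (events_a / (n_a - events_a)) / (events_b / (n_b - events_b))

or_iii_vs_i = crude_or(90, 317, 31, 432)
print(round(or_iii_vs_i, 2))
```

The crude contrast is much larger than the matched hazard ratio of 2.053, which is the expected direction of change after propensity matching removes confounding.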

Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study

Procedia PDF Downloads 70
1758 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between the passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. 
Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the ‘feeling of time’ and ‘time estimation’ differently, supporting the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly more slowly on the Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow-feeling’ condition and overestimate a ‘fast-feeling’ condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
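The RATIO transform mentioned above (estimate divided by actual duration) is what lets the 8- and 58-minute conditions be compared on one scale. The sketch below applies it to the mean estimates reported in the abstract.

```python
# Sketch of the RATIO transform (estimate / actual duration) used to
# standardize estimates across task lengths; the means come from the
# abstract (10.7 min for the 8-min task, 42.5 min for the 58-min task).
def ratio(estimate_min, actual_min):
    """RATIO > 1 indicates overestimation; RATIO < 1 indicates underestimation."""
    return estimate_min / actual_min

print(round(ratio(10.7, 8), 2))    # eight-minute task: overestimated
print(round(ratio(42.5, 58), 2))   # 58-minute task: underestimated
```

The two ratios falling on opposite sides of 1 is the compensatory pattern the abstract describes.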

Keywords: duration estimates, long durations, passage of time judgements, task demands

Procedia PDF Downloads 128
1757 An In-Depth Experimental Study of Wax Deposition in Pipelines

Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oil, flow rates, pipe diameters, and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process to achieve a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solids detection system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no paraffin or additive content. Tests were undertaken at different temperatures of the circulating and cooling fluids under different flow conditions. Then, a solution of paraffin added to the diesel was considered, and tests varying flow rate and cooling rate were again run. Viscosity, density, WAT (wax appearance temperature) by DSC (differential scanning calorimetry), pour point, and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, temperature and pressure loop output signals were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of the wax deposition process in the loop.
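The basic flow-assurance screening implicit in the loop analysis is that wax can deposit wherever the pipe-wall temperature falls below the WAT. The linear wall-temperature profile below is an assumption for illustration, not measured loop data.

```python
# Sketch of the basic WAT screening applied along a loop: flag segments
# whose wall temperature is below the wax appearance temperature. The
# wall-temperature profile is an illustrative assumption, not loop data.
def deposition_zone(wall_temp_c, wat_c):
    """Indices of pipe segments at risk of wax deposition (T_wall < WAT)."""
    return [i for i, t in enumerate(wall_temp_c) if t < wat_c]

wall_profile = [38.0, 34.0, 30.0, 26.0, 22.0]  # degC along the cooled loop
print(deposition_zone(wall_profile, 28.0))
```

In practice, the loop's coupled momentum and heat-transfer models refine this picture by predicting where and how fast the deposit actually grows.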

Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop

Procedia PDF Downloads 103
1756 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform

Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail

Abstract:

The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of these data, as the driver is the one who dictates the trajectory of the vehicle. As with any trajectory, its specific parameters refer to position, speed, and acceleration. Constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it through the events reported by the DiaMOTO device over time will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure of the DiaMOTO device with the traffic data collection server, the infrastructure through which the database that will be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage, by installing DiaMOTO devices with unique codes on 50 vehicles, integrating ADAS and GPS functions, through which vehicle trajectories can be monitored 24 hours a day.
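A Codec-8-style AVL record packs GPS fields as fixed-width big-endian integers. The 24-byte layout sketched below (timestamp in ms, priority, longitude/latitude as 1e-7 degrees, altitude, angle, satellite count, speed) follows the common Codec-8 GPS-element shape, but it is an assumption here and should be checked against the actual device specification before use.

```python
# Hedged sketch of unpacking one GPS element from a Codec-8-style AVL frame.
# The 24-byte field layout is assumed, not taken from the paper, and must be
# verified against the real protocol document.
import struct

RECORD = struct.Struct(">QBiiHHBH")   # 24-byte fixed GPS element

def parse_gps(record_bytes):
    ts_ms, priority, lon, lat, alt, angle, sats, speed = RECORD.unpack(record_bytes)
    return {"timestamp_ms": ts_ms, "priority": priority,
            "lon": lon / 1e7, "lat": lat / 1e7,        # 1e-7 degree units
            "altitude_m": alt, "angle_deg": angle,
            "satellites": sats, "speed_kmh": speed}

# Round-trip a synthetic record to show the decoding:
raw = RECORD.pack(1700000000000, 1, 264000000, 448000000, 120, 90, 9, 54)
print(parse_gps(raw))
```

On the server side, a parser of this kind is the step that turns the raw data string into the trajectory rows stored for later AI analysis.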

Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring

Procedia PDF Downloads 75
1755 Research of Actuators of Common Rail Injection Systems with the Use of LabVIEW on a Specially Designed Test Bench

Authors: G. Baranski, A. Majczak, M. Wendeker

Abstract:

Currently, the most commonly used solution for fuel delivery in diesel engines is the common rail system. Compared to previous designs, and due to their relatively simple construction and electronic control systems, these systems achieve favourable engine operation parameters, with particular emphasis on low emission of toxic compounds into the atmosphere. In this system, the injected fuel dose depends strictly on the parameters of the electrical impulse sent to the injector by the power amplifier of the power supply system from the engine controller. The article presents the construction of a laboratory test bench to examine the course of the injection process and fuel delivery in accumulator (common rail) injection systems. The test bench enables testing of injection systems with electromagnetically controlled injectors using scientific engineering tools. The developed system is based on LabVIEW software and a CompactRIO family controller combining an FPGA with a real-time microcontroller. The results of experimental research on electromagnetic injectors of the common rail system, controlled by a dedicated National Instruments card, confirm the effectiveness of the presented approach. The research described in the article shows the influence of the basic parameters of the electric impulse opening the electromagnetic injector on the injected fuel dose. Acknowledgement: This work was realized in cooperation with the Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: fuel injector, combustion engine, fuel pressure, compression ignition engine, power supply system, controller, LabVIEW

Procedia PDF Downloads 128
1754 Nanomechanical Characterization of Healthy and Tumor Lung Tissues at Cell and Extracellular Matrix Level

Authors: Valeria Panzetta, Ida Musella, Sabato Fusco, Paolo Antonio Netti

Abstract:

The study of the biophysics of living cells has drawn attention to the pivotal role of the cytoskeleton in many cell functions, such as mechanics, adhesion, proliferation, migration, differentiation and neoplastic transformation. In particular, during the complex process of malignant transformation and invasion, the cell cytoskeleton devolves from a rigid, organized structure to a more compliant state, which confers on cancer cells a greater ability to migrate and adapt to the extracellular environment. To better understand the malignant transformation process from a mechanical point of view, it is necessary to evaluate the direct crosstalk between cells and their surrounding extracellular matrix (ECM) in a context close to in vivo conditions. In this study, human biopsy tissues of lung adenocarcinoma were analyzed to define their mechanical phenotype at the cell and ECM level using the particle tracking microrheology (PTM) technique. Polystyrene beads (500 nm) were introduced into the sample slice. The motion of the beads was obtained by tracking their displacements across cytoskeletal and ECM structures, and mean squared displacements (MSDs) were calculated from the bead trajectories. It has already been demonstrated that the amplitude of the MSD is inversely related to the mechanical properties of the intracellular and extracellular microenvironment. For this reason, the MSDs of particles introduced into the cytoplasm and ECM of healthy and tumor tissues were compared. PTM analyses showed that cancerous transformation compromises the mechanical integrity of cells and the extracellular matrix. In particular, the MSD amplitudes in adenocarcinoma cells were greater than in cells of normal tissues. The increased motion is probably associated with a less structured cytoskeleton and consequently an increase in cell deformability.
Furthermore, cancer transformation is accompanied by extracellular matrix stiffening, as confirmed by the decrease of matrix MSDs in tumor tissue, a process that promotes tumor proliferation and invasiveness by activating typical oncogenic signaling pathways. In addition, a clear correlation between cell MSDs and tumor grade was found: MSDs increase when tumor grade passes from 2 to 3, indicating that cells undergo a trans-differentiation process during tumor progression. ECM stiffening is not dependent on tumor grade, but tumor stage proved to be strongly correlated with both cell and ECM mechanical properties. In fact, a higher stage is assigned to tumors that have spread to regional lymph nodes, which are characterized by an up-regulation of different ECM proteins, such as collagen I fibers. These results indicate that PTM can be used to obtain nanomechanical characterization at different scale levels in an interpretative and diagnostic context.
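The MSD computation described in the abstract is a standard time-averaged estimator over bead trajectories; a minimal sketch of that calculation for a 2-D trajectory is shown below. The specific estimator, windowing, and lag range used by the authors are not stated, so this is an illustration of the technique, not their implementation.

```python
import numpy as np

def mean_squared_displacement(traj, max_lag=None):
    """Time-averaged MSD from a 2-D bead trajectory.

    traj: (N, 2) array of x, y positions, one frame per row.
    Returns msd[lag-1] = <|r(t+lag) - r(t)|^2>, averaged over all start times t.
    """
    traj = np.asarray(traj, dtype=float)
    n = len(traj)
    if max_lag is None:
        max_lag = n // 4  # common heuristic: long lags are averaged over few pairs
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]          # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd
```

For purely directed motion with step d per frame, this yields MSD(lag) = (lag * d)^2; larger MSD amplitudes at a given lag correspond to a softer local microenvironment, which is the comparison the study makes between tumor and healthy tissue.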

Keywords: cytoskeleton, extracellular matrix, mechanical properties, particle tracking microrheology, tumor

Procedia PDF Downloads 275
1753 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines’ economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, showing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The ENSO teleconnection, through its influence on temperature, wind pattern, cloud cover, and outgoing longwave radiation during different ENSO phases, had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phase of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a high positive trend of the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature change and anomalies; they are, however, more representative of regional temperature than a substitute for station-observed air temperature.
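The Mann-Kendall test applied above is a simple rank-based statistic; a minimal sketch is given below. For brevity this version omits the correction for tied values that a production implementation (and presumably the study's software) would include.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: statistic S, its variance, and the Z score.

    S sums the signs of all pairwise differences x[j] - x[i] for j > i;
    the variance formula assumes no ties in the series.
    """
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:                       # continuity correction
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, var_s, z
```

A |Z| above 1.96 indicates a trend significant at p < 0.05 (two-sided), the threshold the study reports for its 0.0105 °C/year warming trend.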

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 316
1752 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
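The distillation step at the core of FKD transfers the teacher's softened output distribution to each client's student model. The abstract does not give the exact loss used, so the sketch below shows the classic Hinton-style formulation (temperature-scaled KL term plus hard-label cross-entropy) as one plausible instance; the blend weight `alpha` and temperature `T` are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stabilized
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * T^2 * KL(teacher || student) + (1 - alpha) * cross-entropy."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)) * T * T
    ce = -np.mean(np.log(softmax(student_logits)[np.arange(len(labels)), labels]))
    return alpha * kl + (1 - alpha) * ce
```

In a federated round, only the student weights updated against this loss (never the raw client data) would travel back to the server, which is the privacy property the paper emphasizes.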

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 67
1751 Value Proposition and Value Creation in Network Environments: An Experimental Study of Academic Productivity via the Application of Bibliometrics

Authors: R. Oleko, A. Saraceni

Abstract:

The aim of this research is to provide a rigorous evaluation of existing academic productivity in relation to value proposition and value creation in networked environments. Bibliometrics is a rigorous approach to structuring existing literature in an objective and reliable manner; a thorough bibliometric analysis was therefore performed to assess the large volume of information encountered. A clear distinction between networks and service networks was considered indispensable in order to capture the effects of each network type’s properties on value creation processes. Via bibliometric parameters, this review captured the state of the art in value proposition and value creation in turn. The results provide a rigorous assessment of the annual scientific production, the most influential journals, and the leading corresponding-author countries. By means of citation analysis, the most frequently cited manuscripts and countries for each network type were identified. Moreover, by means of co-citation analysis, existing collaborative patterns were detected through the creation of reference co-citation networks and country collaboration networks. Co-word analysis was also performed to provide an overview of the conceptual structure of both networks and service networks. The acquired results provide a rigorous and systematic assessment of the existing scientific output in networked settings. As such, they contribute to a better understanding of the distinct impact of service networks on value proposition and value creation when compared to regular networks. The implications derived can serve as a guide for informed decision-making by practitioners during network formation and provide a structured evaluation that can stand as a basis for future research in the field.
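The reference co-citation networks mentioned above rest on a simple pairwise count: two references are co-cited once for every paper whose reference list contains both. A minimal sketch of that counting step, independent of whichever bibliometric tool the authors used, might be:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of references is cited together.

    reference_lists: iterable of per-paper reference collections.
    Returns a Counter mapping sorted (ref_a, ref_b) pairs to co-citation counts.
    """
    pairs = Counter()
    for refs in reference_lists:
        # sort so each unordered pair has one canonical key
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs
```

The resulting pair counts are the edge weights of the co-citation network; the same loop over author-country lists yields the country collaboration network.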

Keywords: bibliometrics, co-citation analysis, networks, service networks, value creation, value proposition

Procedia PDF Downloads 199
1750 An Experimental Investigation on the Fuel Characteristics of Nano-Aluminium Oxide and Nano-Cobalt Oxide Particles Blended in Diesel Fuel

Authors: S. Singh, P. Patel, D. Kachhadiya, Swapnil Dharaskar

Abstract:

The research objective is to integrate nanoparticles into fuels (i.e., diesel, biodiesel, biodiesel blended with diesel, plastic-derived fuels, etc.) to increase fuel efficiency. The metal oxide nanoparticles reduce carbon monoxide emissions by donating oxygen atoms from their lattices to catalyze the combustion reactions and aid complete combustion; as a result, the calorific value of the blend (fuel + metal nanoparticles) increases. Aluminium oxide and cobalt oxide nanoparticles were synthesized by the sol-gel method. Characterization was done by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS). The particle sizes determined by XRD were 28.6 nm and 28.06 nm for the aluminium oxide and cobalt oxide nanoparticles, respectively. Blends of different concentrations (50, 100, and 150 ppm) were prepared by adding the required weight of metal oxides to 1 liter of diesel and sonicating for 30 minutes at 500 W. The blend properties (calorific value, viscosity, and flash point) were determined by a bomb calorimeter, a Brookfield viscometer and a Pensky-Martens apparatus. For the aluminium-oxide-blended diesel, there was a maximum increase of 5.544% in the calorific value, but at the same time the flash point increased from 43°C to 58.5°C and the viscosity increased from 2.45 cP to 3.25 cP. On the other hand, for the cobalt-oxide-blended diesel there was a maximum increase of 2.012% in the calorific value, while the flash point increased from 43°C to 51.5°C and the viscosity increased from 2.45 cP to 2.94 cP. The calorific value, viscosity and flash point all increased linearly as the concentration of metal oxide nanoparticles in the blend was increased.
For the blend of 50 ppm Al₂O₃ and 50 ppm Co₃O₄, the increase in calorific value was 1.228%, the viscosity changed from 2.45 cP to 2.64 cP, and the flash point increased from 43°C to 50.5°C. Clearly, the aluminium oxide nanoparticles increase the calorific value, but at the cost of flash point and viscosity; thus it is better to use the diesel blended with 50 ppm aluminium oxide and 50 ppm cobalt oxide.

Keywords: aluminium oxide nanoparticles, cobalt oxide nanoparticles, fuel additives, fuel characteristics

Procedia PDF Downloads 316
1749 Machinability Analysis in Drilling Flax Fiber-Reinforced Polylactic Acid Bio-Composite Laminates

Authors: Amirhossein Lotfi, Huaizhong Li, Dzung Viet Dao

Abstract:

Interest in natural fiber-reinforced composites (NFRC) is growing progressively, both in academic research and in industrial applications, thanks to their many advantages, such as low cost, biodegradability, eco-friendly nature and relatively good mechanical properties. However, their widespread use is still considered challenging because of their non-homogeneous structure and the limited knowledge of their machinability characteristics and of the parameter settings needed to avoid defects associated with the machining process. The present work aims to investigate the effect of cutting tool geometry and material on the drilling-induced delamination, thrust force and hole quality produced when drilling a fully biodegradable flax/poly(lactic acid) composite laminate. Three drills with different geometries and materials were used at different drilling conditions to evaluate the machinability of the fabricated composites. The experimental results indicated that the choice of cutting tool, in terms of material and geometry, has a noticeable influence on the cutting thrust force and, subsequently, on drilling-induced damage. The lower thrust force and better hole quality were observed using the high-speed steel (HSS) drill, whereas the carbide drill (with a point angle of 130°) resulted in the highest thrust force. The carbide drill presented higher wear resistance and stability in the variation of thrust force with the number of holes drilled, while the HSS drill showed the lower thrust force during the drilling process. Finally, within the selected cutting range, the delamination damage increased noticeably with feed rate and moderately with spindle speed.

Keywords: natural fiber reinforced composites, delamination, thrust force, machinability

Procedia PDF Downloads 126
1748 The Use of Microbiological Methods to Reduce Aflatoxin M1 in Cheese

Authors: Bruna Goncalves, Jennifer Henck, Romulo Uliana, Eliana Kamimura, Carlos Oliveira, Carlos Corassin

Abstract:

Studies have shown evidence of human exposure to aflatoxin M1 due to the consumption of contaminated milk and dairy products (mainly cheeses). This poses a great risk to public health, since milk and milk products are frequently consumed by portions of the population considered immunosuppressed, such as children and the elderly. Knowledge of the negative health and economic impacts of aflatoxins has led to investigations of strategies to prevent their formation in food, as well as to eliminate, inactivate or reduce the bioavailability of these toxins in contaminated products. This study evaluated the effect of microbiological methods using lactic acid bacteria on aflatoxin M1 (AFM1) reduction in Minas Frescal cheese (a typical Brazilian product and among the most consumed cheeses in Brazil) spiked with 1 µg/L AFM1. Inactivated lactic acid bacteria (0.5% v/v of L. rhamnosus and L. lactis) were added during the cheese production process. Nine cheeses were produced, divided into three treatments: negative controls (without AFM1 or lactic acid bacteria), positive controls (AFM1 only), and lactic acid bacteria + AFM1. Samples of cheese were collected on days 2, 10, 20 and 30 after production and submitted to composition analyses and determination of AFM1 by high-performance liquid chromatography. The reductions of AFM1 in cheese by lactic acid bacteria at the end of the trial indicate a potential application of inactivated lactic acid bacteria in reducing the bioavailability of AFM1 in Minas Frescal cheese without physicochemical or microbiological modifications during the 30-day experimental period. The authors would like to thank the São Paulo Research Foundation (FAPESP; grants #2017/20081-6 and #2017/19683-1).

Keywords: aflatoxin, milk, minas frescal cheese, decontamination

Procedia PDF Downloads 191
1747 Effect of Diet Inulin Prebiotic on Growth, Reproductive Performance, Carcass Composition and Resistance to Environmental Stresses in Zebra Danio (Danio rerio)

Authors: Ehsan Ahmadifar

Abstract:

In this research, the effects of different levels of the prebiotic inulin (control (T0), and 1 (T1), 2 (T2) and 3 (T3) g inulin per kg diet) as a nutritional supplement for Danio rerio were investigated for 4 months. Fish were fed the experimental diets from the onset of larval feeding until adulthood (average weight: 67.1 g, length: 4.5 cm). The diets had no significant effect on survival rate (P > 0.05). The highest food conversion ratio (FCR) was observed in the control group and the lowest, i.e., the best, in T3 (P < 0.05). The specific growth rate increased with the dietary inulin level over the experiment. The highest body weight gain and condition factor were observed in T3 and the lowest in the control (P < 0.05). Adding 3 g inulin per kg of diet can thus improve the growth indices and final biomass of zebrafish, and this prebiotic can be considered a suitable supplement for cyprinid diets. At the first sampling stage, muscle fat and protein were significantly higher than at the second sampling stage (P < 0.05); given that the second-stage fish were at full sexual maturity, the amount of fat in muscle had decreased (P < 0.05). Moisture and ash levels were significantly higher (P < 0.05) at the second sampling stage than at the first. Overall, life stage affected muscle chemical composition. Reproductive performance in treatments T2 and T3 was significantly higher than in the other treatments (P < 0.05). According to the results, the prebiotic inulin does not have a significant impact on the sex ratio in zebrafish (P > 0.05). Based on gonad histology, the use of dietary inulin accelerates gonad development in zebrafish.

Keywords: inulin, zebrafish, reproduction, histology

Procedia PDF Downloads 303
1746 The Process of Irony Comprehension in Young Children: Evidence from Monolingual and Bilingual Preschoolers

Authors: Natalia Banasik

Abstract:

Comprehension of verbal irony is an example of pragmatic competence in understanding figurative language. Knowledge of how it develops may shed new light on the understanding of the social and communicative competence that is crucial for one’s effective functioning in society. Researchers agree it is a competence that develops late in a child’s development. One ability that seems crucial for irony comprehension is theory of mind (ToM), that is, the ability to understand that others may have beliefs, desires and intentions different from one’s own. Although both theory of mind and irony comprehension require the ability to see past a literally false description of reality, the exact relationship between them is still unknown. Also, even though irony comprehension in children has been studied for over thirty years, the results of the studies are inconsistent as to the age at which this competence is acquired. The present study aimed to answer questions about the developmental trajectories of irony comprehension and of ascribing a function to ironic utterances by preschool children. Specifically, we were interested in how it is related to the development of ToM and how comprehension of the function of irony changes with age. Data were collected from over 150 monolingual Polish-speaking children and (so far) thirty bilingual children speaking Polish and English who live in the US. Four-, five- and six-year-olds were presented with a story comprehension task in the form of audio and visual stimuli programmed in the E-Prime software (pre-recorded narrated stories, some of which included ironic utterances, and accompanying pictures displayed on a touch screen). Following the presentation, the children were asked to answer a series of questions. The questions checked the children’s understanding of the intended utterance meaning, their evaluation of the degree to which it was funny, and their evaluation of how nice the speaker was.
The children responded by touching the screen, which made it possible to measure reaction times. Additionally, the children were asked to explain why the speaker had uttered the ironic statement. Both quantitative and qualitative analyses were applied. The results of our study indicate that for irony recognition there is a significant difference among the three age groups; what is new is that children as young as four do understand the real meaning behind an ironic statement as long as the utterance is not grammatically or lexically complex. Also, there is a clear correlation between ToM and irony comprehension. Although both four-year-olds and six-year-olds understand the real meaning of the ironic utterance, it is not until the age of six that children start to explain the reason for using this marked form of expression. They talk about the speaker’s intention to tell a joke, be funny, or protect the listener’s emotions. There are also some metalinguistic references, such as “mommy sometimes says things that don’t make sense and this is called a metaphor”.

Keywords: child's pragmatics, figurative speech, irony comprehension in children, theory of mind and irony

Procedia PDF Downloads 310
1745 Nimbus Radiance Gate Project: Media Architecture in Sacred Space

Authors: Jorge Duarte de Sá

Abstract:

The project presented in this investigation is part of the multidisciplinary field of architecture and explores an experience in media architecture at the intersection of art, science and technology. The objective of this work is to create a visual experience encompassing architecture, media and art. It specifically explores sacred spaces that are losing their social, cultural or religious dynamics and inserts new media technologies to generate new momentum, testing tools, techniques and methods of implementation. Given an architectural project methodology, the location seemed an essential starting point for the development of this technological apparatus: the church of Santa Clara in Santarém, Portugal, emerged as an experimental space for the apparatus, presenting itself as both temple and museum. We also aim to address the concept of rehabilitation through media technologies, directed at interventions that may have an impact on energizing spaces. The idea emphasizes the rehabilitation of spaces that, one way or another, may gain new dynamics after a media intervention. Thus, we intend to engage the sensitive and spiritual character that sacred spaces inherently have, exploring this sensitive aspect of the subject and drawing up new ideas for meditation and spiritual reflection. The work is designed primarily as a visual experience that encompasses the space, the object and the subject. It is a media project supported by a dual structure of two transparent screens operating as a holographic screen, onto which two images are projected that complement each other through the translucent overlay film, thus merging the two projections. The digitally created content reacts to the presence of observers through strategically placed infrared cameras. The object revives the memory of the altarpiece as an architectural surface, promoting the expansion of messages through media technologies.

Keywords: architecture, media, sacred, technology

Procedia PDF Downloads 274
1744 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, the study demonstrates how computerized native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks. The linguistic focus is will, would, can, could, may, might, shall, should, must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL); all the essays under the “secondary school” section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was compared with the textbook corpus in terms of use (distributional features, semantic functions, and co-occurring constructions) in order to examine the degree of influence of the textbooks on learners’ use of modal verbs. Moreover, the learner corpus was analyzed for misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs to uncover potential difficulties that confront learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of frequency distributions, semantic functions, and co-occurring structures.
Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be offset by L1 interference. Besides, error analysis revealed that could, would, should and must are the most difficult for Chinese learners, due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with adjusting the treatment of modal verbs based on authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners’ difficulties in their use of modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance the authenticity and appropriateness of EFL textbook language for learners.
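The distributional comparison at the heart of this design reduces to counting modal frequencies in each corpus and normalizing them to a common base so corpora of different sizes can be compared. CQPweb and WordSmith Tools perform this internally; the sketch below is a simplified stand-in (the whitespace-and-apostrophe tokenizer is a deliberate simplification of real corpus tokenization).

```python
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may",
          "might", "shall", "should", "must"]

def modal_frequencies(text, per=1000):
    """Raw counts and per-N-token normalized frequencies of the nine modals."""
    tokens = re.findall(r"[a-z']+", text.lower())   # naive tokenizer
    counts = Counter(t for t in tokens if t in MODALS)
    total = len(tokens)
    norm = {m: counts[m] * per / total for m in MODALS} if total else {}
    return counts, norm
```

Running this over the textbook corpus and BNCS2014 sample and comparing the normalized figures per modal is the frequency half of the analysis; the semantic-function and construction comparisons require manual or POS-based coding of each concordance line.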

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 122
1743 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human’s aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in social status, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related problem: what image feature best describes clothing fashion? To address this issue, we have designed and evaluated various image features, ranging from traditional low-level hand-crafted features, to mid-level style-awareness features, to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple-layer combination (CNNs&MLC) and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning-based feature representations far outperform traditional hand-crafted feature representations. Additionally, among all deep learning-based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
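Whatever feature representation wins, the retrieval step itself typically ranks gallery images by similarity between feature vectors. The abstract does not name the similarity measure used, so the sketch below assumes cosine similarity, the common default for CNN embeddings, purely for illustration.

```python
import numpy as np

def retrieve(query_feat, gallery_feats, k=5):
    """Rank gallery images by cosine similarity to a query feature vector.

    query_feat: (D,) feature vector; gallery_feats: (N, D) matrix.
    Returns the indices of the top-k matches and their similarity scores.
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery item
    order = np.argsort(-sims)[:k]     # descending
    return order, sims[order]
```

Intra-domain versus inter-domain retrieval differ only in whether the query and gallery features come from the same dataset; the ranking machinery is identical.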

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 136
1742 Mental Imagery as an Auxiliary Tool to the Performance of Elite Competitive Swimmers of the University of the East Manila

Authors: Hillary Jo Muyalde

Abstract:

Introduction: Elite athletes train regularly to enhance their physical endurance, but sometimes training sessions are not enough: when competition comes, these athletes struggle to find focus. Mental imagery is a psychological technique that conditions the mind to focus and can thereby help improve performance. This study aims to help elite competitive swimmers of the University of the East improve their performance with mental imagery as an auxiliary tool. Methodology: The study used a quasi-experimental, within-subject design with a purposive sampling technique and a total of 41 participants. The participants were given the Sport Imagery Ability Questionnaire (SIAQ) to measure imagery ability, together with the Mental Imagery Program (MIP). A paired t-test was used for data analysis, comparing six weeks without mental imagery training to six weeks with the MIP. The researcher recorded each participant's personal best time in his or her specialty stroke. Results: The study yielded t-values of 17.804 for butterfly events, 9.922 for backstroke events, 7.787 for breaststroke events, and 17.440 for freestyle events, indicating that the MIP had a positive effect on participants' performance. The SIAQ results likewise showed large differences, with t-values of -10.443 for butterfly events, -5.363 for backstroke events, -7.244 for breaststroke events, and -10.727 for freestyle events, meaning the participants were able to image better than before the MIP. Conclusion: The findings of this study showed that there is indeed an improvement in the participants' performance after the application of the Mental Imagery Program. It is recommended that participants continue to use mental imagery as an auxiliary tool in their training regimen for continued positive results.
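
The paired t-test underlying the analysis above compares each swimmer's times before and after the intervention. A minimal sketch of the statistic; the times below are invented for illustration, not the study's data:

```python
import math
import statistics

def paired_t(before, after):
    """Paired-samples t statistic for before/after measurements."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample SD (n - 1 denominator)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical personal best times (seconds) before and after the imagery
# program; a positive t means times dropped, i.e. performance improved.
before = [61.2, 59.8, 63.5, 60.1, 62.0]
after  = [60.4, 59.1, 62.3, 59.6, 61.0]
t_stat = paired_t(before, after)
```

Large positive t-values like those reported in the abstract arise when nearly every participant improves by a consistent margin relative to the spread of the improvements.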

Keywords: mental imagery, personal best time, SIAQ, specialty stroke

Procedia PDF Downloads 77
1741 Agronomic Evaluation of Flax Cultivars (Linum usitatissimum L.) in Response to Irrigation Intervals

Authors: Emad Rashwan, M. Mousa, Ayman EL Sabagh, Celaleddin Barutcular

Abstract:

Flax is a potential winter crop for Egypt that can be grown for both seed and fiber. The study was conducted during the two successive winter seasons of 2013/2014 and 2014/2015 at the experimental farm of El-Gemmeiza Agricultural Research Station, Agricultural Research Centre, Egypt. The objective of this work was to evaluate the effect of irrigation intervals (25, 35 and 45 days) on the seed yield and quality of flax cultivars (Sakha1, Giza9 and Giza10). The results indicate highly significant differences among irrigation intervals for all studied traits, except oil percentage, which was not significant in either season. Irrigating flax plants every 35 days gave the maximum values for all characters; in contrast, irrigation every 45 days gave the minimum values for all characters studied. With respect to cultivars, significant differences were found in most yield and quality characters. The cultivar Sakha1 was superior in total plant height, main stem diameter, seed index, seed, oil, biological and straw yields/ha, as well as fiber length and fiber fineness, while Giza9 and Giza10 were superior in fiber yield/ha and fiber percentage, respectively. The interactions between irrigation intervals and flax cultivars were highly significant for total plant height, main stem diameter, and seed, oil, biological and straw yields/ha. Based on these results, all flax cultivars recorded their maximum values for the major traits when irrigated every 35 days.

Keywords: flax, fiber, irrigation intervals, oil, seed yield

Procedia PDF Downloads 250
1740 The Legal Nature of Grading Decisions and the Implications for Handling of Academic Complaints in or out of Court: A Comparative Legal Analysis of Academic Litigation in Europe

Authors: Kurt Willems

Abstract:

This research examines complaints against grading in higher education institutions in four different European regions: England and Wales, Flanders, the Netherlands, and France. The aim of the research is to examine the correlation between the applicable type of complaint handling on the one hand, and selected qualities of the higher education landscape and of public law on the other. All selected regions report a rising number of complaints against grading decisions, not only in internal complaint handling within the institution but also judicially if the dispute persists. Some regions deem their administrative court system appropriate for grading disputes (France) or have even established a specialized administrative court to facilitate access (Flanders, the Netherlands). At the same time, however, different types of (governmental) dispute resolution bodies have been established outside of the judicial court system (England and Wales, and to a lesser extent France and the Netherlands). These dispute procedures do not seem coincidental. Public law issues, such as the underlying legal nature of the education institution and, ultimately, of the grading decision itself, have an impact on the way academic complaint procedures are developed. Indeed, in most of the selected regions, contractual disputes enjoy different legal protection than administrative decisions, making the legal qualification of the relationship between student and higher education institution highly relevant. At the same time, the scope of governmental competence over different types of higher education institutions, whether direct or indirect (e.g., through financing and quality control), is also relevant to understanding why certain dispute-handling procedures have been established for students. To answer the above questions, the doctrinal and comparative legal method is used.
The normative framework is distilled from the relevant national legislative rules and their preparatory texts, the legal literature, the (published) case law on academic complaints, and the available governmental reports. The research is mainly theoretical in nature, examining different topics of public law (mainly administrative law) and procedural law in the context of grading decisions. The internal appeal procedure within the education institution is largely left out of the scope of the research, as are the various forms of cooperation between education institutions not imposed by government, given the public law angle of the research questions. The research results in a categorization of different academic complaint systems and an analysis of the possibility of introducing each of those systems in different countries, depending on their public law system and higher education system. In doing so, the research also adds to the debate on the public-private divide in higher education systems and its effect on academic complaints handling.

Keywords: higher education, legal qualification of education institution, legal qualification of grading decisions, legal protection of students, academic litigation

Procedia PDF Downloads 229
1739 Towards Designing of a Potential New HIV-1 Protease Inhibitor Using Quantitative Structure-Activity Relationship Study in Combination with Molecular Docking and Molecular Dynamics Simulations

Authors: Mouna Baassi, Mohamed Moussaoui, Hatim Soufi, Sanchaita Rajkhowa, Ashwani Sharma, Subrata Sinha, Said Belaaouad

Abstract:

Human Immunodeficiency Virus type 1 protease (HIV-1 PR) is one of the most challenging targets of the antiretroviral therapy used in the treatment of people with AIDS. The performance of protease inhibitors (PIs) is limited by the development of protease mutations that can promote resistance to the treatment. The current study was carried out using statistical and bioinformatics tools. A series of thirty-three compounds with known enzymatic inhibitory activities against HIV-1 protease was used in this paper to build a mathematical model relating structure to biological activity. These compounds were modeled in software, and their descriptors were computed using various tools, such as Gaussian, Chem3D, ChemSketch and MarvinSketch. Computational methods generated the best model based on its statistical parameters, and the model's applicability domain (AD) was elaborated. Furthermore, one compound is proposed as efficient against HIV-1 protease, with biological activity comparable to that of the existing inhibitors; this drug candidate was evaluated using ADMET properties and Lipinski's rule. Molecular docking performed on wild-type and mutant HIV-1 proteases allowed the investigation of the interaction types displayed between the proteases and the ligands, Darunavir (DRV) and the new drug (ND). Molecular dynamics simulation was also used to investigate the stability of the complexes, allowing a comparative study of the performance of both ligands (DRV & ND). Our study suggests that the new molecule shows results comparable to those of Darunavir and may be used for further experimental studies. Our study may also be used as a pipeline to search for and design new potential inhibitors of HIV-1 proteases.
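
Lipinski's rule, one of the drug-likeness filters mentioned above, can be sketched as a simple check over four molecular descriptors. The threshold logic below is the standard rule of five; the example descriptor values are hypothetical, not those of the proposed inhibitor:

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Count Lipinski rule-of-five violations; drug-like if at most one."""
    violations = sum([
        mol_weight > 500,     # molecular weight should be <= 500 Da
        logp > 5,             # octanol-water partition coefficient <= 5
        h_donors > 5,         # no more than 5 hydrogen-bond donors
        h_acceptors > 10,     # no more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical descriptor values for a candidate inhibitor
print(lipinski_pass(mol_weight=480.4, logp=3.2, h_donors=3, h_acceptors=8))  # True
```

In a QSAR pipeline like the one described, such a filter is applied before the more expensive ADMET prediction, docking, and molecular dynamics stages.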

Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 33