Search results for: arc measure
619 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions
Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani
Abstract:
Conventional methods of determining concrete strength under controlled laboratory conditions do not accurately represent the actual strength of concrete developed under site curing conditions. This difference in strength is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate with summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Test methods are therefore required to measure the in-place properties of concrete for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is an approach to quality control for the production of sustainable and durable concrete structures. The unique harsh environmental conditions in Kuwait make it impractical to adopt experiences and empirical equations developed from maturity methods in other countries. Concrete curing, especially at early ages, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of a 60 MPa high-strength concrete mix produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strengths of a concrete mix commonly used in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound added during mixing. It was observed that the maturity curve developed for the same mix depends on the type of curing conditions.
The curves can be used to predict concrete strength under different exposure and curing conditions. This study showed that the external spray curing method cannot be recommended, as it failed to help the concrete reach accepted strength values, especially for flexural strength. Using an internal curing compound led to acceptable strength levels when compared with water curing. The developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions, which will help in deciding the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.
Keywords: curing, durability, maturity, strength
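The maturity computation itself is not spelled out in the abstract; a minimal sketch, assuming the widely used Nurse-Saul temperature-time factor of ASTM C1074 (the datum temperature and the logged history below are illustrative, not the study's data), shows the idea:

```python
# Illustrative sketch, not the authors' calibration: the classic Nurse-Saul
# temperature-time factor from ASTM C1074, M(t) = sum((Ta - T0) * dt),
# where Ta is the average concrete temperature in each interval and T0 is
# the datum temperature (often taken as -10 or 0 degrees C). Strength is
# then read from a maturity curve calibrated for the specific mix and,
# as this paper shows, for the specific curing condition.

def nurse_saul_maturity(temps_c, interval_hours, datum_c=-10.0):
    """Temperature-time factor in degC-hours for a logged temperature history."""
    return sum(max(t - datum_c, 0.0) * interval_hours for t in temps_c)

# Hypothetical 48-hour history logged every 6 hours:
history = [32, 35, 38, 36, 33, 31, 30, 29]
print(nurse_saul_maturity(history, interval_hours=6))  # 2064.0 degC-hours
```

Two specimens with equal maturity values are then expected to have comparable strength, which is what lets the calibrated curve predict in-place strength from a site temperature log.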
Procedia PDF Downloads 300
618 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans
Authors: O. Ekrami, P. Claes, S. Van Dongen
Abstract:
Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or ‘genetic quality’, based on the assumption that perfect symmetry is the ideally expected outcome for a bilateral organism. Further studies have also investigated a possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, with the structure of interest usually represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we used high-resolution 3D scans of human faces and developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of the mapping step. The protocol's accuracy in measuring and localizing FA was assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional captured information on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. The algorithm can be of particular benefit in studies with large numbers of subjects owing to its automated and time-efficient nature. Additionally, the spatially dense approach provides information about the locality of FA, which is impossible to obtain using conventional methods.
It also enables the asymmetry of morphological structures to be analyzed in a multivariate manner; this can be achieved using methods such as Principal Components Analysis (PCA) or Factor Analysis, which can be a step towards understanding the underlying processes of asymmetry. The method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate and multivariate framework.
Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing
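The core of a spatially dense asymmetry measurement can be sketched in a few lines; the details below (reflection plane, Kabsch alignment, toy vertices) are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

# Minimal sketch of the idea (assumed details, not the authors' exact
# pipeline): once a symmetric template is mapped onto a face, every vertex
# i has a mirror partner pair[i]. Reflect the shape across the sagittal
# (x = 0) plane, relabel vertices by the pairing, rigidly align the
# reflection back onto the original (ordinary Procrustes / Kabsch), and
# take the per-vertex distances as a spatially dense asymmetry map.

def asymmetry_map(verts, pair):
    """verts: (n, 3) vertex array; pair: index of each vertex's mirror twin."""
    mirrored = verts * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0
    mirrored = mirrored[pair]                       # restore correspondence
    a = verts - verts.mean(axis=0)
    b = mirrored - mirrored.mean(axis=0)
    u, _, vt = np.linalg.svd(b.T @ a)               # optimal rotation (Kabsch)
    if np.linalg.det(u @ vt) < 0:                   # avoid an improper rotation
        u[:, -1] *= -1
    aligned = b @ (u @ vt) + verts.mean(axis=0)
    return np.linalg.norm(verts - aligned, axis=1)  # one FA value per vertex

# A perfectly symmetric toy shape yields an all-zero map:
verts = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
pair = np.array([1, 0, 2, 3])   # vertices 0 and 1 mirror each other
print(asymmetry_map(verts, pair))  # ~[0, 0, 0, 0]
```

Stacking such per-vertex maps across subjects gives the multivariate data matrix on which PCA or Factor Analysis, as mentioned above, can then be run.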
Procedia PDF Downloads 139
617 A Small-Scale Survey on Risk Factors of Musculoskeletal Disorders in Workers of Logistics Companies in Cyprus and on the Early Adoption of Industrial Exoskeletons as Mitigation Measure
Authors: Kyriacos Clerides, Panagiotis Herodotou, Constantina Polycarpou, Evagoras Xydas
Abstract:
Background: Musculoskeletal disorders (MSDs) in the workplace are a very common problem in Europe, caused by multiple risk factors. In recent years, wearable devices and exoskeletons for the workplace have sought to address the various risk factors associated with strenuous tasks. The logistics sector is a huge sector that includes warehousing, storage, and transportation; however, the tasks associated with logistics are not well studied in terms of MSD risk. This study looked into the MSDs affecting workers of logistics companies. It compares the prevalence of MSDs among workers and evaluates multiple risk factors that contribute to the development of MSDs. Moreover, the study seeks user feedback on the adoption of exoskeletons in such a work environment. Materials and Methods: The study was conducted among workers in logistics companies in Nicosia, Cyprus, from July to September 2022. A set of standardized questionnaires was used to collect different types of data. Results: A high proportion of logistics professionals reported MSDs in one or more body regions, the lower back being the most commonly affected area. Working in the same position for long periods, working in awkward postures, and handling excessive loads were the most commonly reported job risk factors contributing to the development of MSDs in this study. A significant number of participants considered the back region the most likely to benefit from a wearable exoskeleton device. Half of the participants would like at least a 50% reduction in their daily effort. The most important characteristics for the adoption of exoskeleton devices were found to be how comfortable the device is and its weight. Conclusion: Lower back and posture were the highest risk factors among all logistics professionals assessed in this study.
A larger-scale study using quantitative analytical tools may give a more accurate estimate of MSDs, which would pave the way for more precise recommendations to eliminate the risk factors and thereby prevent MSDs. A follow-up study using exoskeletons in the workplace should be conducted to assess whether they assist in MSD prevention.
Keywords: musculoskeletal disorders, occupational health, safety, occupational risk, logistic companies, workers, Cyprus, industrial exoskeletons, wearable devices
Procedia PDF Downloads 105
616 Recycling Waste Product for Metal Removal from Water
Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful
Abstract:
The research assessed the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. Adsorption of arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) from aqueous solution was investigated. The smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the slag. The slag was characterized by surface analytical techniques and contained different iron oxide and iron silicate bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions and slag dose, together with the distribution coefficient, were evaluated to determine the optimum adsorption conditions of the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, in batch experiments conducted at 5-10 mg/L initial metal concentration, 10 g/L slag dose, 10 hours of contact time, 170 rpm shaking speed and 25 °C. The maximum removal of As, Cu and Pb was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. The study also determined adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacities of nickel slag for the four test metals were in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices below 0 in all cases, suggesting that the metals at these pH values were undersaturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pH values.
The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the slag's potential as an adsorbent in the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or use as a stabilization material. It highlights recycled slags as a potential reactive adsorbent in the field of remediation engineering and explores the benefits of using renewable waste products for the water treatment industry.
Keywords: adsorption, industrial waste, recycling, slag, treatment
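Maximum adsorption capacities such as those reported above are commonly extracted by fitting batch equilibrium data to an isotherm model; a minimal sketch, assuming a Langmuir isotherm and synthetic data (neither is stated in the abstract), shows how:

```python
# Illustrative only: maximum capacities like ~4 mg/g for Pb are often
# obtained by fitting batch equilibrium data to the Langmuir isotherm,
#   qe = qmax * KL * Ce / (1 + KL * Ce),
# via its linearized form Ce/qe = Ce/qmax + 1/(qmax * KL).
# The data below are synthetic, not from the study.

def langmuir_fit(ce, qe):
    """Least-squares fit of Ce/qe vs Ce; returns (qmax, KL)."""
    x = ce
    y = [c / q for c, q in zip(ce, qe)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return 1.0 / slope, slope / intercept   # qmax, KL

# Synthetic equilibrium data generated from qmax = 4 mg/g, KL = 0.5 L/mg:
ce = [1.0, 2.0, 5.0, 10.0, 20.0]
qe = [4 * 0.5 * c / (1 + 0.5 * c) for c in ce]
print(langmuir_fit(ce, qe))  # recovers approximately (4.0, 0.5)
```

The fitted qmax is then the capacity figure quoted per metal, allowing the Pb > Cu > As > Cd ordering to be compared across adsorbents.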
Procedia PDF Downloads 144
615 Frustration Measure for Dipolar Spin Ice and Spin Glass
Authors: Konstantin Nefedev, Petr Andriushchenko
Abstract:
Frustrated magnets are usually understood as materials in which the interactions between localized magnetic moments or spins compete and cannot all be satisfied simultaneously. The best-known and simplest example of a frustrated system is the antiferromagnetic Ising model on the triangle. Physically, the existence of frustration means that one cannot make all three pairs of spins antiparallel in the basic triangular unit. In the physics of interacting particle systems, vector models constructed from pairwise interaction laws are used. Each pair interaction energy between one-component vectors can take two values of opposite sign (excluding the case of zero). Mathematically, the existence of frustration in a system means that it is impossible for all pair-interaction energies in the Hamiltonian to be negative, even in the ground state (the lowest-energy state). In effect, frustration is the excitation that remains in the system when thermodynamics no longer operates, i.e., at absolute zero temperature. The origin of frustration is the presence of at least one 'unsatisfied' pair of interacting spins (magnetic moments). The minimal relative number of these excitations (the relative number of frustrations in the ground state) can be used as a frustration parameter. If the ground-state energy is Egs, and the sum of all pair-interaction energies taken with positive sign is Emax, then the proposed frustration parameter pf takes values in the interval [0,1] and is defined as pf=(Egs+Emax)/2Emax. For the antiferromagnetic Ising model on the triangle, pf=1/3. We calculated the frustration parameters in the thermodynamic limit for different 2D periodic structures of Ising dipoles placed on the edges of the lattice and interacting via the long-range dipolar interaction. For the honeycomb lattice pf=0.3415, for the triangular lattice pf=0.2468, and for the kagome lattice pf=0.1644.
All dependences of the frustration parameter on 1/N follow a linear law. The proposed frustration parameter allows the thermodynamics of all magnetic systems to be considered from a unified point of view, and different lattice systems of interacting particles to be compared within the framework of vector models. This parameter can serve as a fundamental characteristic of frustrated systems. It does not depend on temperature or on the thermodynamic state in which the system is found, such as spin ice, spin glass, spin liquid or even spin snow. It gives the minimal relative number of excitations that can exist in the system at T=0.
Keywords: frustrations, parameter of order, statistical physics, magnetism
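The quoted value pf=1/3 for the antiferromagnetic triangle can be checked by brute-force enumeration; a minimal sketch (assuming unit antiferromagnetic couplings, small systems only):

```python
from itertools import product

# Sketch of the proposed parameter on the simplest cases. With pair energy
# J * s_i * s_j (J > 0, antiferromagnetic), Emax is the sum of all pair
# energies taken with positive sign and Egs the minimum energy over all
# spin configurations; then pf = (Egs + Emax) / (2 * Emax).

def frustration_parameter(bonds, n_spins, j=1.0):
    emax = j * len(bonds)
    egs = min(
        sum(j * s[i] * s[k] for i, k in bonds)
        for s in product((-1, 1), repeat=n_spins)
    )
    return (egs + emax) / (2 * emax)

triangle = [(0, 1), (1, 2), (2, 0)]
print(frustration_parameter(triangle, 3))  # 0.333... = 1/3, as in the abstract

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(frustration_parameter(square, 4))    # 0.0: the square is unfrustrated
```

The dipolar lattice values above require the long-range dipolar Hamiltonian and a thermodynamic-limit extrapolation in 1/N, which this toy enumeration does not attempt.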
Procedia PDF Downloads 169
614 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System
Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew
Abstract:
This study assesses the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by Lock Block Ltd. of Vancouver, Canada, as an extension of their currently used retaining wall system. The seismic behavior of these structures is characterized by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests and a program of shake table testing were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake table tests applying the ground motions in one direction only (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determined the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulses found to be the dominant factor. In all cases, the arches exhibited the typical four-hinge failure mechanism, which was also reproduced in the analytical model. Experimental testing was also performed on arches reinforced with a steel band placed over the structure and anchored at both ends of the arch. The models were tested with different pretension levels. The bands were instrumented with strain gauges to measure the forces produced by the shaking, and these forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using 3DEC software. The blocks were modeled as rigid blocks, with all properties assigned to the joints, including the contribution of the interlocking shear key between blocks.
The model is calibrated to the experimental static tests and validated against the results of the dynamic tests. The model can then be used to scale the results up to the full-scale structure and to extend them to different configurations and boundary conditions.
Keywords: arch, discrete element model, seismic assessment, shake-table testing
Procedia PDF Downloads 205
613 Proposals to Increase the Durability of Concrete Affected by Acid Mine Waters
Authors: Cristian Rodriguez, Jose M. Davila, Aguasanta M. Sarmiento, María L. de la Torre
Abstract:
There are many acidic environments that degrade structural concrete, such as those found in water treatment plants and sports facilities, but one of the most aggressive is undoubtedly water from acid mine drainage. This phenomenon occurs in all pyrite mining facilities and, to a lesser extent, in coal mines, and is characterised by very low pH values and high sulphate, metal and metalloid contents. It causes significant damage to the concrete, mainly attacking the binder, and the process is accentuated by the action of acidophilic bacteria, which accelerate the cracking of the concrete. Due to the damage that concrete experiences in acidic environments, the authors of this study aimed to enhance its performance in various respects. Two solutions have been proposed to improve concrete durability, acting both on the mass of the material itself, with the incorporation of fibres, and on its surface, through treatments with two different paints. The incorporation of polypropylene fibres into the concrete mass aims to improve the tensile strength of the concrete, this being the parameter most affected by this type of degradation. The protection of the concrete with surface paint is intended to improve the performance against abrasion while reducing the access of water to the interior of the material. Sulpho-resistant cement has been used in all the concrete mixtures prepared, in addition to complying with the requirements of the current Spanish standard, equivalent to the Eurocodes. For the polypropylene fibres, two dosages have been used, 1.7 and 3.4 kg/m³, while as surface treatment the use of two paints has been analysed, one based on polyurethane and the other an asphalt-type paint. The proposed treatments have been analysed by means of indirect tensile tests and pressure sandblasting, thus assessing the effects of abrasion.
The results confirm a slight increase in the tensile strength of the concrete from incorporating polypropylene fibres, slightly higher for a dosage of 3.4 kg/m³, with an improvement of a little more than 5% in tensile strength. More importantly, the use of fibres greatly reduces the loss of concrete mass due to abrasion. This improvement against abrasion is even more significant when paint is used as an external protection measure, with a much lower loss of mass for both paints. Acknowledgments: This work has been supported by MICIU/AEI/10.13039/501100011033/FEDER, UE, through the project PID2021-123130OB-I00.
Keywords: degradation, concrete, tensile strength, abrasion
Procedia PDF Downloads 126
612 Profile of Programmed Death Ligand-1 (PD-L1) Expression and PD-L1 Gene Amplification in Indonesian Colorectal Cancer Patients
Authors: Akterono Budiyati, Gita Kusumo, Teguh Putra, Fritzie Rexana, Antonius Kurniawan, Aru Sudoyo, Ahmad Utomo, Andi Utama
Abstract:
The presence of programmed death ligand-1 (PD-L1) has been used in multiple clinical trials and approved as a biomarker for selecting patients more likely to respond to immune checkpoint inhibitors. However, the expression of PD-L1 is regulated in different ways, which leads to a different significance of its presence. Positive PD-L1 within tumors may result from two mechanisms: induced PD-L1 expression due to T-cell presence, or a genetic mechanism that leads to constitutive PD-L1 expression. Amplification of the PD-L1 gene is one genetic mechanism that causes an increase in PD-L1 expression. In colorectal cancer (CRC), immune checkpoint inhibitor therapy has been recommended for patients with microsatellite instability (MSI). Although the correlation between PD-L1 expression and MSI status has been widely studied, the precise mechanism of PD-L1 gene activation in CRC patients, particularly in the MSI population, has yet to be clarified. In the present study, we profiled 61 archived formalin-fixed, paraffin-embedded CRC specimens from patients admitted to Medistra Hospital, Jakarta, in 2010-2016. Immunohistochemistry was performed to measure the expression of PD-L1 in tumor cells and to determine MSI status, using antibodies against PD-L1 and the MMR proteins (MLH1, MSH2, PMS2 and MSH6), respectively. PD-L1 expression was measured on tumor cells with a cutoff of 1%, whereas loss of nuclear MMR protein expression in tumor cells, but not in normal or stromal cells, indicated MSI. A subset of PD-L1 positive patients was then assessed for copy number variations (CNVs) using single-tube TaqMan Copy Number Assays (Gene CD247PD-L1). We also assessed KRAS mutation status to profile possible genetic mechanisms leading to the presence or absence of PD-L1 expression. Analysis of the 61 CRC patients revealed that 15 (24%) expressed PD-L1 on their tumor cell membranes.
The prevalence of surface membrane PD-L1 was significantly higher in patients with MSI (87%; 7/8) than in patients with microsatellite stable (MSS) tumors (15%; 8/53) (P=0.001). Although high-level amplification of the PD-L1 gene was not found among PD-L1 positive patients, low-level amplification was more commonly observed in MSS patients (75%; 6/8) than in MSI patients (43%; 3/7). Additionally, we found that 26% of CRC patients harbored KRAS mutations (16/61); the distribution of KRAS status did not correlate with PD-L1 expression. Our data suggest that a genetic mechanism through amplification of PD-L1 is not the mechanism underlying upregulation of PD-L1 expression in these CRC patients. However, further studies are warranted to confirm the results.
Keywords: colorectal cancer, gene amplification, microsatellite instable, programmed death ligand-1
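The reported MSI/MSS contrast (7/8 vs 8/53 PD-L1 positive, P=0.001) can be re-checked with a standard 2x2 significance test; the sketch below assumes a two-sided Fisher exact test (the abstract does not name its test) computed from the hypergeometric distribution:

```python
from math import comb

# Illustrative re-check of the reported contrast, not the authors' analysis:
# PD-L1 positivity in 7/8 MSI vs 8/53 MSS patients, as a 2x2 Fisher exact
# test. Two-sided p sums the probabilities of all tables no more probable
# than the observed one.

def fisher_exact_two_sided(a, b, c, d):
    """2x2 table [[a, b], [c, d]] with fixed margins."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability that cell 'a' equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(7, 1, 8, 45)  # MSI: 7+/1-, MSS: 8+/45-
print(p)  # well below 0.01, consistent with the reported significance
```

With cell counts this small, an exact test is the natural choice over a chi-square approximation, which is why it is used for the sketch.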
Procedia PDF Downloads 221
611 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond
Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu
Abstract:
Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) color centres in diamond make them outstanding sensors of magnetic fields, electric fields and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed at achieving the smallest volume and high sensitivity in NV-ensemble-based magnetometry, using micro-cavities, light-trapping diamond waveguides (LTDW) and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical System) techniques. Recently, frequency-modulated microwaves with continuous optical excitation have been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure static magnetic fields using the frequency-modulated microwave method under continuous illumination with green pump light at 532 nm, on a bulk diamond sample with a high density of NV centers (1 ppm). The fluorescence was collected through the confocal microscope objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed a microstrip antenna for uniform and efficient excitation, well coupled to the spin ensemble at 2.87 GHz, the zero-field splitting of the NV centers. The photodetector output was sent to a lock-in amplifier (LIA), with the modulated reference signal generated from the microwave source via an IQ mixer, realizing open-loop detection with the NV atomic magnetometer. ODMR spectra can be plotted under continuous-wave (CW) microwave excitation. Owing to the high sensitivity of the lock-in amplifier, the minimum detectable voltage can be measured, and the minimum detectable frequency shift can be obtained from this minimum voltage and the slope of the signal.
The magnetic field sensitivity can then be derived from η = δB·√T, corresponding to a 10 nT minimum detectable shift in the magnetic field. Further, frequency analysis of the noise in the system indicates a sensitivity of less than 10 nT/√Hz at 10 Hz.
Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density
Procedia PDF Downloads 437
610 The Influence of Production Hygiene Training on Farming Practices Employed by Rural Small-Scale Organic Farmers - South Africa
Authors: Mdluli Fezile, Schmidt Stefan, Thamaga-Chitja Joyce
Abstract:
In view of the frequently reported foodborne disease outbreaks caused by contaminated fresh produce, consumers prefer foods that meet requisite hygiene standards, reducing the risk of foodborne illness. Producing good-quality fresh produce is therefore critical for improving market access and food security, especially for small-scale farmers. Questions of hygiene, and the resulting microbiological quality, in the rural small-scale farming sector of South Africa are even more crucial given the policy drive to develop small-scale farming as a measure for reinforcing household food security and reducing poverty. Farming practices and methods throughout the fresh produce value chain influence the quality of the final product, which in turn determines its success in the market. The aim of this study was therefore to determine the extent to which training on organic farming methods, including modules such as the Importance of Production Hygiene, influenced the hygienic farming practices employed by eTholeni small-scale organic farmers in uMbumbulu, KwaZulu-Natal, South Africa. Questionnaires were administered to 73 uncertified organic farmers; analysis showed that 33 farmers were trained and supplied the local Agri-Hub, while 40 had not received training. The questionnaire probed respondents' attitudes, knowledge of hygiene and composting practices. Data analysis included descriptive statistics, the Chi-square test and a logistic regression model. Descriptive analysis indicated that the majority of the farmers (60%) were female, most of whom (73%) were above the age of 40. The logistic regression indicated that factors such as farmer training and prior experience in the farming sector had a significant influence on hygiene practices, both at the 5% significance level. These results emphasize the importance of training, education and farming experience in implementing good hygiene practices in small-scale farming.
It is therefore recommended that South African policies advocate for small-scale farmer training, not only for subsistence purposes but also with the aim of supplying produce markets with high-quality fresh produce.
Keywords: small-scale farmers, leafy salad vegetables, organic produce, food safety, hygienic practices, food security
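The kind of logistic model described above, relating a binary hygiene-practice outcome to training and experience, can be sketched on synthetic data (entirely hypothetical, not the survey data; plain gradient ascent rather than the statistical package the study presumably used):

```python
import math
import random

# Schematic illustration only: a logistic model of a binary outcome
# (good hygiene practice) on two predictors (received training 0/1,
# scaled farming experience), the kind of model fitted to the
# questionnaire responses. Synthetic data, hypothetical effect sizes.

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    w = [0.0] * (len(xs[0]) + 1)             # intercept + one weight per predictor
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = y - p                        # gradient of the log-likelihood
            w[0] += lr * g
            for i, xi in enumerate(x):
                w[i + 1] += lr * g * xi
    return w

random.seed(0)
# Synthetic farmers: trained farmers are generated to be more likely
# to follow good hygiene practices.
data = [(random.randint(0, 1), random.random()) for _ in range(200)]
labels = [1 if random.random() < (0.2 + 0.5 * t + 0.2 * e) else 0
          for t, e in data]
w = fit_logistic(data, labels)
print(w)  # w[1], the training coefficient, comes out positive on this data
```

A positive, significant coefficient on the training indicator is exactly the pattern the study reports for trained versus untrained farmers.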
Procedia PDF Downloads 423
609 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services
Authors: Carlos V. Gonzalez
Abstract:
Background: Providers of in-home and community-based services strive to eliminate preventable harm to the people under their care and to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community-based services. Methods: Over the course of 2 years, we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated, and to track patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the C.O.P.E. model resulted in: a reduction in incidents of harm; a reduction in the use of restraints and other physical interventions; an increase in direct support professionals' ability to detect and respond to health problems; improvement in employee alertness by decreasing sleeping on duty;
improvement in caring and positive interaction between direct support professionals and the person who is supported; and the development of a method to globally measure and assess the effectiveness of harm prevention plans. Future applications of the C.O.P.E. model for the reduction of harm to people who receive home and community-based services are discussed.
Keywords: harm, patients, resilience, safety, mental illness, disability
Procedia PDF Downloads 447
608 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System
Authors: Zhang Kehan, Du Luona
Abstract:
Wireless power transfer (WPT) systems have been widely investigated for advantages of convenience and safety compared to traditional plug-in charging systems. The research contents include impedance matching, circuit topology, transfer distance et al. for improving the efficiency of WPT system, which is a decisive factor in the practical application. What is more, coil structures such as spiral circular coil and helical coil with variable distance between two turns also have indispensable effects on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coil with variable distance between two turns, and experimental results show that efficiency of spiral circular coil with an optimum distance between two turns is the highest. According to efficiency formula of resonant WPT system with series-series topology, we introduce M²/R₋₁ to measure the efficiency of spiral circular coil and helical coil WPT system. If the distance between two turns s is too close, proximity effect theory shows that the induced current in the conductor, caused by a variable flux created by the current flows in the skin of vicinity conductor, is the opposite direction of source current and has assignable impart on coil resistance. Thus in two coil structures, s affects coil resistance. At the same time, when the distance between primary and secondary coils is not variable, s can also make the influence on M to some degrees. The aforementioned study proves that s plays an indispensable role in changing M²/R₋₁ and then can be adjusted to find the optimum value with which WPT system achieves the highest efficiency. In actual application situations of WPT systems especially in underwater vehicles, miniaturization is one vital issue in designing WPT system structures. Limited by system size, the largest external radius of spiral circular coil is 100 mm, and the largest height of helical coil is 40 mm. 
In other words, the number of coil turns N changes with s. In both the spiral circular and helical structures, the distance between turns in the secondary coil is set to a constant 1 mm so that R₂ does not vary. Based on the analysis above, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ as the distance between turns in the primary coil, sp, varies from 0 mm to 10 mm. In both models, the distance between the primary and secondary coils is 50 mm and the wire diameter is 1.5 mm. The number of secondary-coil turns is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s are 1 mm for the helical coil structure and 2 mm for the spiral circular coil structure, at which M²/R₁ is largest. The spiral circular coil is clearly the first choice for designing a WPT system, because its M²/R₁ is larger than that of the helical coil under the same conditions.
Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer
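As an illustration of the selection rule in this abstract (choose the turn spacing s that maximizes M²/R₁), the sketch below compares candidate spacings. The mutual-inductance and resistance values are illustrative placeholders, not measurements from the study.

```python
# Figure of merit for a series-series resonant WPT link: M^2 / R1,
# where M is the mutual inductance and R1 is the primary-coil AC
# resistance (the proximity effect makes R1 depend on the spacing s).

def figure_of_merit(mutual_inductance_h, primary_resistance_ohm):
    """Return M^2 / R1; a larger value implies a more efficient link."""
    return mutual_inductance_h ** 2 / primary_resistance_ohm

# Hypothetical sweep: spacing s in mm -> (M in henries, R1 in ohms)
candidates = {
    1.0: (12.0e-6, 0.30),  # tight spacing: larger M, but proximity effect raises R1
    2.0: (11.5e-6, 0.22),  # wider spacing: slightly smaller M, lower R1
    4.0: (10.0e-6, 0.20),  # too wide: M drops faster than R1
}

best_s = max(candidates, key=lambda s: figure_of_merit(*candidates[s]))
print(best_s)  # spacing with the largest M^2/R1 among these candidates
```

With these made-up numbers the 2 mm spacing wins, mirroring the trade-off the abstract describes: M and R₁ both fall as s grows, and the ratio M²/R₁ peaks at an intermediate spacing.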
Procedia PDF Downloads 343
607 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques
Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh
Abstract:
In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments on the surface and leaving the lake's coastal areas highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relationship to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from a depth of 0-5 cm across 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to five wind velocities (9.5, 11, 12.5, 14.1, and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship of WE with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm), and the percentages of the secondary particle size classes 2-4.75, 1.7-2, and 0.1-0.25 mm. Results showed that mean weight diameter, erodible fraction, and the percentage of the 0.1-0.25 mm size class had the strongest relationships with WE (coefficients of determination of 0.69, 0.67, and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network based on a genetic algorithm (MLP-GA), and an artificial neural network based on the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among 32 measured soil variables, the percentages of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles), and organic carbon were selected as model inputs by stepwise regression.
Findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11, and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP, and MLR; the differences between these methods were significant according to the MGN test. Based on these findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network
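The goodness-of-fit statistics reported above (NSE and RMSE) have standard definitions; the sketch below computes them for a toy set of observed and predicted erodibility values. The numbers are illustrative, not the study's data.

```python
import math

def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance about the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

def rmse(observed, predicted):
    """Root-mean-square error of the predictions."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# Toy data: observed vs. model-predicted wind erodibility values
obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 19.0, 29.0, 41.0]
print(round(nse(obs, pred), 3), round(rmse(obs, pred), 3))
```

An NSE of 1 means a perfect fit and values near 0 mean the model is no better than predicting the observed mean, which is why NSE=0.87 alongside R²=0.87 indicates a strong model in the abstract.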
Procedia PDF Downloads 69
606 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres in suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The software program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. 
The results of the study showed that ELSS could differentiate between different sizes of particles, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample, and this information can be used to identify cancer cells and determine the stage of the disease. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics
Procedia PDF Downloads 78
605 Syngas From Polypropylene Gasification in a Fluidized Bed
Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo
Abstract:
In recent years, the world population has enormously increased its use of plastic products, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry and in the construction of electronic equipment, clothing, and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of the latter quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made from mixtures of mutually incompatible polymers and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO, and CO₂ suitable for the production of chemicals, with consequent savings in fossil resources. A hydrogen-rich syngas can be obtained by a gasification process in a fluidized bed reactor, with steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and accommodates different plastics with different compositions and grain sizes. Furthermore, during the gasification process, the use of steam increases the gasification of the char produced by the initial pyrolysis/devolatilization of the plastic particles.
The bed inventory can be made of particles with catalytic properties, such as olivine, capable of catalysing the steam reforming of the heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gases produced. The plant consists of a fluidized bed reactor made of AISI 310 steel with an internal diameter of 0.1 m, containing 3 kg of olivine particles as the bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot product gases exiting the reactor are cooled and then quantified with a mass flow meter. Gas analyzers instantly measure the volumetric composition of H₂, CO, CO₂, CH₄, and NH₃. At the conference, results from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at 840-860 °C will be presented.
Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene
Procedia PDF Downloads 26
604 The Impact of Dog-Assisted Wellbeing Intervention on Student Motivation and Affective Engagement in the Primary and Secondary School Setting
Authors: Yvonne Howard
Abstract:
This project, currently under development, is grounded in ongoing learning processes, including a thorough literature review and practical experience gained as a deputy head in a school. Daily experience with students engaging in animal-assisted interventions and with the school therapy dog forms a strong base for this research. The primary objective is to comprehensively explore the impact of dog-assisted well-being interventions on student motivation and affective engagement within primary and secondary school settings. The educational domain currently faces a significant challenge due to the lack of substantial research in this area: although the perceived positive outcomes of such interventions are acknowledged and shared across various settings, the evidence supporting their effectiveness in an educational context remains limited. This study aims to bridge that gap and shed light on the potential benefits of dog-assisted well-being interventions in promoting student motivation and affective engagement. The significance of this topic lies in the recognition that education is not confined to academic achievement but encompasses the overall well-being and emotional development of students. Over recent years, there has been growing interest in animal-assisted interventions, particularly in healthcare settings, and this interest has extended to the educational context. While the effectiveness of these interventions has been explored in other fields, the educational sector lacks comprehensive research in this regard. Through a systematic and thorough research methodology, this study seeks to contribute valuable empirical data to the field, providing evidence to support informed decision-making regarding the implementation of dog-assisted well-being interventions in schools. The research will use a mixed-methods design, combining qualitative and quantitative measures to address the research objectives.
The quantitative phase will include surveys and standardized scales to measure student motivation and affective engagement, while the qualitative phase will involve interviews and observations to gain in-depth insights from students, teachers, and other stakeholders. The findings will contribute evidence-based insights, best practices, and practical guidelines for schools seeking to incorporate dog-assisted interventions, ultimately enhancing student well-being and improving educational outcomes.
Keywords: therapy dog, wellbeing, engagement, motivation, AAI, intervention, school
Procedia PDF Downloads 76
603 Effects of Lipoic Acid Supplementation on Activities of Cyclooxygenases and Levels of Prostaglandins E2 and F2 Alpha Metabolites in the Offspring of Rats with Streptozocin-Induced Diabetes
Authors: H. Y. Al-Matubsi, G. A. Oriquat, M. Abu-Samak, O. A. Al Hanbali, M. Salim
Abstract:
Background: Uncontrolled diabetes mellitus (DM) is an etiological factor for recurrent pregnancy loss and major congenital malformations in the offspring. Antioxidant therapy has been advocated to overcome the oxidant-antioxidant disequilibrium inherent in diabetes. The aims of this study were to evaluate the protective effect of lipoic acid (LA) on fetal outcome and to elucidate changes that may be involved in the mechanism(s) underlying diabetic fetopathy. Methods: Female rats were rendered hyperglycemic using streptozocin and then mated with normal male rats. Pregnant non-diabetic (group 1, n=9; group 2, n=7) or pregnant diabetic (group 3, n=10; group 4, n=8) rats were treated daily with either lipoic acid (30 mg/kg body weight; groups 2 and 4) or vehicle (groups 1 and 3) between gestational days 0 and 15. On day 15 of gestation, the rats were sacrificed, and the fetuses, placentas, and membranes were dissected out of the uterine horns. Following morphological examination, the fetuses, placentas, and membranes were homogenized and used to measure cyclooxygenase (COX) activities and the levels of the metabolites of prostaglandin (PG) E2 (PGEM) and PGF2α (PGFM). Maternal liver and plasma total glutathione levels were also determined. Results: Supplementation of diabetic rats with LA significantly (P<0.05) reduced resorption rates and increased mean fetal weight compared with the untreated diabetic group. Treatment of diabetic rats with LA also led to a significant (P<0.05) increase in liver and plasma total glutathione compared with untreated diabetic rats. Decreased levels of PGEM and elevated levels of PGFM in the fetuses, placentas, and membranes were characteristic of experimental diabetic gestation associated with malformation. LA treatment of diabetic mothers failed to normalize PGEM levels to those of the non-diabetic control rats.
However, the levels of PGEM in malformed fetuses from LA-treated diabetic mothers were significantly (P < 0.05) higher than those in malformed fetuses from untreated diabetic rats. Conclusions: We conclude that LA can reduce congenital malformations in the offspring of diabetic rats at day 15 of gestation. However, LA treatment did not completely prevent the occurrence of malformations; other factors, such as arachidonic acid deficiency and altered prostaglandin metabolism, may be involved in the pathogenesis of diabetes-induced congenital malformations.
Keywords: diabetes, lipoic acid, pregnancy, prostaglandins
Procedia PDF Downloads 260
602 Infusion of Skills for Undergraduate Scholarship into Teacher Education: Two Case Studies in New York and Florida
Authors: Tunde Szecsi, Janka Szilagyi
Abstract:
Students majoring in education are underrepresented in undergraduate scholarship. To enable and encourage teacher candidates to engage in scholarly activities, it is essential to infuse skills such as problem-solving, critical thinking, oral and written communication, collaboration and the utilization of information literacy, into courses in teacher preparation programs. In this empirical study, we examined two teacher education programs – one in New York State and one in Florida – in terms of the approaches of the course-based infusion of skills for undergraduate research, and the effectiveness of this infusion. First, course-related documents such as syllabi, assignment descriptions, and course activities were reviewed and analyzed. The goal of the document analysis was to identify and describe the targeted skills, and the pedagogical approaches and strategies for promoting research skills in teacher candidates. Next, a selection of teacher candidates’ scholarly products from the institution in Florida was used as a data set to examine teacher candidates’ skill development in the context of the identified assignments. This dataset was analyzed both quantitatively and qualitatively to describe the changes that occurred in teacher candidates’ critical thinking, communication, and information literacy skills, and to uncover patterns in the skill development at the two institutions. Descriptive statistics were calculated to explore the changes in these skills of teacher candidates over a period of three years. The findings based on data from the teacher education program in Florida indicated a steady gain in written communication and critical thinking and a modest increase in informational literacy. At the institution in New York, candidates’ submission and success rates on the edTPA, a New York State Teacher Certification exam, was used as a measure of scholarly skills. 
Overall, although different approaches were used for infusing the development of scholarly skills into the courses, the results suggest that a holistic and well-orchestrated infusion of these skills into most courses in a teacher education program can result in steadily developing scholarly skills. These results offer essential implications for teacher education programs in terms of further improving teacher candidates' skills for engaging in undergraduate research and scholarship. In this presentation, our purpose is to showcase the approaches developed by two teacher education programs and to demonstrate how diverse approaches to promoting undergraduate scholarship are responsive to the context of each teacher preparation program.
Keywords: critical thinking, pedagogical strategies, teacher education, undergraduate student research
Procedia PDF Downloads 161
601 Pre-Operative Psychological Factors Significantly Add to the Predictability of Chronic Narcotic Use: A Two Year Prospective Study
Authors: Dana El-Mughayyar, Neil Manson, Erin Bigney, Eden Richardson, Dean Tripp, Edward Abraham
Abstract:
Use of narcotics to treat pain has increased over the past two decades and is a contributing factor to the current public health crisis. Understanding the pre-operative risks of chronic narcotic use may be aided by investigating psychological measures. The objective of the reported study is to determine predictors of narcotic use two years post-surgery in a thoracolumbar spine surgery population, including an array of psychological factors. A prospective observational study of 191 consecutively enrolled adult patients who underwent thoracolumbar spine surgery is presented. Baseline measures of interest included the Pain Catastrophizing Scale (PCS), the Tampa Scale for Kinesiophobia, the Multidimensional Scale of Perceived Social Support (MSPSS), the Chronic Pain Acceptance Questionnaire (CPAQ-8), the Oswestry Disability Index (ODI), Numeric Rating Scales for back and leg pain (NRS-B/L), the SF-12 Mental Component Summary (MCS), narcotic use, and demographic variables. The post-operative measure of interest is narcotic use at 2-year follow-up, collapsed into binary categories of use and no use. Descriptive statistics were run, with chi-square analysis for categorical variables and ANOVA for continuous variables. Significant variables were entered into a hierarchical logistic regression to determine predictors of post-operative narcotic use, with significance set at α = 0.05. Results: A total of 27.23% of the sample were using narcotics two years after surgery. The regression model included ODI, NRS-Leg, time with condition, chief complaint, pre-operative drug use, gender, MCS, the PCS helplessness subscale, and the CPAQ pain willingness subscale, and was significant, χ²(13, N=191) = 54.99, p < .001. The model accounted for 39.6% of the variance in narcotic use and predicted correctly in 79.7% of cases. Psychological variables accounted for 9.6% of the variance over and above the other predictors.
Conclusions: Managing chronic narcotic usage is central to the patient's overall health and quality of life. Psychological factors in the preoperative period are significant predictors of narcotic use 2 years post-operatively. These psychological variables are malleable, potentially allowing surgeons to direct their patients to preventative resources prior to surgery.
Keywords: narcotics, psychological factors, quality of life, spine surgery
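The prediction step in a model of this kind can be illustrated with a minimal logistic regression fitted by gradient descent. The data and the two predictor names below are synthetic stand-ins, not the study's dataset or model.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression.
    Returns the weight vector with the bias term first."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    """Classify as narcotic use (1) or no use (0) at the 0.5 threshold."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if sigmoid(z) >= 0.5 else 0

# Synthetic data: [disability score, catastrophizing score] -> use (0/1)
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] + x[1] > 1.0 else 0 for x in X]  # simple separable rule

w = fit_logistic(X, y)
accuracy = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(accuracy)
```

The study's hierarchical variant adds predictor blocks in stages and reports the extra variance each block explains; the sketch shows only the core fit-and-classify mechanics.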
Procedia PDF Downloads 143
600 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology, and many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions; for example, it is required by psychological, medical, and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality measurement tools, so it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, the critical review in [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-form testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increases time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools.
However, as in any rapidly developing field, psychometrics does not yet offer ready-made, straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight "points of growth" for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test measuring the intellectual development of school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
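The adaptive alternative to the linear test structure criticized above can be sketched in a few lines. This is a deliberately crude illustration of the idea under a Rasch (1PL) model with a fixed-step ability update; the item bank and responses are hypothetical, and operational CAT systems use maximum-likelihood or Bayesian scoring instead.

```python
import math

def rasch_prob(ability, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(ability, remaining):
    """Pick the unadministered item whose difficulty is closest to the
    current ability estimate (this maximizes information in the 1PL model)."""
    return min(remaining, key=lambda d: abs(d - ability))

def update_ability(ability, correct, step=0.5):
    """Crude fixed-step update: move up after a correct answer, down otherwise."""
    return ability + step if correct else ability - step

# Hypothetical item bank: difficulties on the logit scale
bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
ability = 0.0
responses = [True, True, False]  # simulated examinee answers

for correct in responses:
    item = next_item(ability, bank)
    bank.remove(item)
    ability = update_ability(ability, correct)
print(ability)
```

Unlike a linear test, each examinee here sees a different, shorter sequence of items matched to their running ability estimate, which is the efficiency argument for computerized adaptive and multistage testing.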
Procedia PDF Downloads 79
599 Investigation and Monitoring Method of Vector Density in Kaohsiung City
Authors: Chiu-Wen Chang, I-Yun Chang, Wei-Ting Chen, Hui-Ping Ho, Chao-Ying Pan, Joh-Jong Huang
Abstract:
Dengue is a 'community disease' or 'environmental disease': as long as the environment contains containers (natural or artificial) suitable for mosquito breeding, an invading virus can trigger a dengue epidemic. Surveillance of vector density is critical to effective infectious disease control and plays an important role in monitoring mosquito dynamics in the community, including mosquito species, density, and distribution area. The objectives of this study were to examine the vector density survey indices (Breteau index, adult index, house index, container index, and larvae index) from 2014 to 2016 in Kaohsiung City and to evaluate the effect of introducing the Breeding Elimination and Appraisal Team (hereinafter referred to as BEAT) as an intervention for eliminating dengue vector breeding sites, starting from May 2016. When a person was suspected of having contracted dengue fever, a surrounding area measuring 50 meters by 50 meters was demarcated as the emergency prevention and treatment zone. BEAT performed weekly vector mosquito inspections, as well as inspections in regions with a high Gravitrap index, and assigned a risk assessment index to each region. These indices, together with the prevention and treatment results, were reported immediately to epidemic-prevention units every week. The results indicated that the vector indices from 2014 to 2016 showed no statistically significant differences in the Breteau index, adult index, or house index (p > 0.05), but, after execution of the integrated elimination work, statistically significant differences in the container index and larvae index (p < 0.05).
A post hoc test indicated that the container index of 2014 (M = 12.793) was significantly higher than that of 2016 (M = 7.631), and that the larvae index of 2015 (M = 34.065) was significantly lower than that of 2014 (M = 66.867). The results revealed that effective vector density surveillance can highlight key breeding sites so that immediate control action (BEAT) can be implemented, which successfully decreased the vector density and the risk of a dengue epidemic.
Keywords: Breteau index, dengue control, monitoring method, vector density
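The year-to-year comparison of index means described above rests on a one-way ANOVA, whose F statistic has a standard form. The sketch below computes it for toy container-index readings; the numbers are illustrative, not the Kaohsiung data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA:
    between-group mean square divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy container-index readings for three survey years (illustrative only)
y2014 = [12.0, 13.5, 12.5]
y2015 = [9.0, 10.0, 9.5]
y2016 = [7.5, 8.0, 7.0]
f = one_way_anova_f([y2014, y2015, y2016])
print(f)
```

A large F relative to the F distribution's critical value is what licenses the follow-up post hoc comparisons between individual years reported in the abstract.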
Procedia PDF Downloads 196
598 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones
Authors: Lucas Caldas, Pablo Paulse, Karla Hora
Abstract:
Shipping containers are present in different Brazilian cities, used first for transportation purposes but becoming waste material and an environmental burden at the end of their life cycle. In the last decade, buildings made partly or totally from shipping containers have started to appear in Brazil, most of them for commercial and office uses. Although reusing a container for a building seems a sustainable solution, it is very important to evaluate the thermal and energy aspects of such use. In this context, this study aims to evaluate the thermal and energy performance of an office building made entirely from a 12-meter-long High Cube 40' shipping container in different Brazilian bioclimatic zones. Four constructive solutions commonly used in Brazil were chosen: (1) a container without any covering; (2) with internally insulated drywall; (3) with external fiber cement boards; and (4) with both drywall and fiber cement boards. DesignBuilder with EnergyPlus was used for a computational simulation over 8,760 hours, considering EnergyPlus Weather File (EPW) data for six Brazilian capital cities: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina, and Rio de Janeiro. A split-type air conditioner was adopted for the conditioned area, with the cooling setpoint fixed at 25 °C and a coefficient of performance (CoP) of 3.3. Three solar absorptance values for the exterior layer were tested: 0.3, 0.6, and 0.9. The building in Teresina presented the highest energy consumption, while the one in Curitiba presented the lowest, with a wide range of differences in the results. The constructive option with both external fiber cement boards and drywall gave the best results, although the differences were not significant compared with the solution using drywall alone.
The choice of absorptance had a great impact on energy consumption, especially for containers without any covering and for use in the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. The main contribution of this study is a discussion of constructive aspects to inform design guidelines for more energy-efficient container buildings that account for local climate differences, helping to disseminate this cleaner constructive practice in the Brazilian building sector.
Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance
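The role of the coefficient of performance in the simulations above comes down to a simple ratio: the electricity a split unit draws is the cooling load it must remove divided by its CoP. Only the CoP of 3.3 comes from the abstract; the annual load figure below is a made-up example.

```python
def cooling_electricity_kwh(cooling_load_kwh, cop=3.3):
    """Electric energy required to deliver a given cooling load,
    for a unit with the stated coefficient of performance."""
    return cooling_load_kwh / cop

# Hypothetical annual sensible cooling load for the container office
annual_load = 3300.0  # kWh of cooling, illustrative only
print(round(cooling_electricity_kwh(annual_load), 1))
```

This is why envelope choices (covering type and absorptance) that cut the simulated cooling load translate directly into proportional electricity savings in the reported results.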
Procedia PDF Downloads 171
597 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence
Authors: Meghan E. Murray, Carri R. Tolmie
Abstract:
This study aims to identify potential factors and widely applicable best practices that can improve corporate social responsibility (CSR) and corporate performance by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility means that companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole, and execute their business practices accordingly. The prevalence of CSR has continuously strengthened over the past few years, and it is now a common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparing a benefit corporation and a non-benefit corporation, examining the role of SQ and EQ as moderators of the relationship between the transformational leadership of employees within each company and the perception of each firm's CSR and corporate performance. The methodology includes creating and administering a survey comprising multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data.
Study 2 will examine spiritual intelligence itself in more depth by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that organizations can utilize to enhance their corporate social responsibility. If high spiritual and emotional intelligence are found to positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.
Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership
Procedia PDF Downloads 126
596 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution
Authors: Dayane de Almeida
Abstract:
This work presents a study that demonstrates the usability of categories of analysis from Discourse Semiotics, also known as Greimassian Semiotics, in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts differing in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: -The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes. -Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and that presents different levels of abstraction. This ‘grammar’ would be a style marker. -Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. Then, how can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? -The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, each one’s options have discriminatory power. -Size is another issue for various attribution methods.
Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. -The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis; hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
Keywords: authorship attribution, content plane, forensic linguistics, Greimassian semiotics, intraspeaker variation, style
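The Jaccard comparison step can be sketched in a few lines: each text is reduced to the set of semiotic tags produced by the annotation tool, and pairs of texts are scored by set overlap. The tag names below are invented stand-ins for the grammatical categories the software would generate.

```python
def jaccard(a, b):
    """Jaccard coefficient: |intersection| / |union| of two tag sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical tag sets for three texts (illustrative labels, not real output).
text1_tags = {"isotopy:passion", "narrative:sanction", "actant:subject"}
text2_tags = {"isotopy:passion", "narrative:sanction", "actant:opponent"}
text3_tags = {"isotopy:cognition", "narrative:manipulation", "actant:opponent"}

same_author = jaccard(text1_tags, text2_tags)       # 2 shared / 4 total = 0.5
different_author = jaccard(text1_tags, text3_tags)  # 0 shared / 6 total = 0.0
```

Under the study's hypothesis, within-author pairs should score consistently higher than between-author pairs, as in this toy comparison.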
Procedia PDF Downloads 240
595 Choice Analysis of Ground Access to São Paulo/Guarulhos International Airport Using Adaptive Choice-Based Conjoint Analysis (ACBC)
Authors: Carolina Silva Ansélmo
Abstract:
Airports are demand-generating poles that affect the flow of traffic around them. The airport access system must be fast, convenient, and adequately planned, considering its potential users. An airport with good ground access conditions can provide the user with a more satisfactory access experience. When several transport options are available, service providers must understand users' preferences and the expected quality of service. The present study compares bus, private vehicle, subway, taxi, and urban mobility transport applications as access modes to São Paulo/Guarulhos International Airport. The objectives are (i) to identify the factors that influence the choice, (ii) to measure Willingness to Pay (WTP), and (iii) to estimate the market share of each mode. The applied method was the Adaptive Choice-Based Conjoint Analysis (ACBC) technique, using Sawtooth Software. Conjoint analysis, rooted in Utility Theory, is a survey technique that quantifies the customer's perceived utility when choosing among alternatives. Assessing user preferences provides insights into their priorities for product or service attributes. An additional advantage of conjoint analysis is that it requires a smaller sample size than other methods. Furthermore, ACBC provides valuable insights into consumers' preferences, willingness to pay, and market dynamics, aiding strategic decision-making on customer experience, pricing, and market segmentation. In the present research, the ACBC questionnaire had the following variables: (i) access time to the boarding point, (ii) comfort in the vehicle, (iii) number of travelers together, (iv) price, (v) supply power, and (vi) type of vehicle. The questionnaire reached 213 valid responses for the scenario of access from the São Paulo city center to São Paulo/Guarulhos International Airport.
As a result, price and the number of travelers are the most relevant attributes for the sample when choosing airport access. The estimated market share is led by urban mobility transport applications, followed by buses, private vehicles, taxis, and subways.
Keywords: adaptive choice-based conjoint analysis, ground access to airport, market share, willingness to pay
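A common way to turn conjoint utilities into market shares is a multinomial-logit share simulation, where each mode's share is proportional to the exponential of its total utility. The part-worth utilities below are invented for illustration only; they are not the study's estimates, though their ordering mirrors the reported share ranking.

```python
import math

# Hypothetical total utilities per access mode (assumed values).
utilities = {
    "mobility app": 1.2,
    "bus": 0.8,
    "private vehicle": 0.5,
    "taxi": 0.2,
    "subway": 0.1,
}

# Logit share rule: share_i = exp(u_i) / sum_j exp(u_j).
exp_u = {mode: math.exp(u) for mode, u in utilities.items()}
total = sum(exp_u.values())
shares = {mode: exp_u[mode] / total for mode in exp_u}
```

WTP for an attribute level can then be read off as the utility gain of that level divided by the (negative) price coefficient from the same model.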
Procedia PDF Downloads 77
594 Search of Compounds with Antimicrobial and Antifungal Activity in the Series of 1-(2-(1H-Tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas
Authors: O. Antypenko, I. Vasilieva, S. Kovalenko
Abstract:
The search for new, effective, and less toxic antimicrobial agents is always up-to-date. Tetrazole derivatives are quite interesting objects both for synthesis and for pharmacological screening. Thus, some derivatives of tetrazole have demonstrated antimicrobial activity; namely, 5-phenyl-tetrazolo[1,5-c]quinazoline was effective against Staphylococcus aureus and Enterococcus faecalis (MIC = 250 mg/L). Besides, investigation of the antimicrobial activity of 9-bromo(chloro)-5-morpholin(piperidine)-4-yl-tetrazolo[1,5-c]quinazolines against Escherichia coli, Enterococcus faecalis, Pseudomonas aeruginosa, and Staphylococcus aureus revealed that the sensitivity of Gram-positive bacteria to the compounds was higher than that of Gram-negative bacteria. Therefore, 31 of our previously synthesized derivatives of 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas were tested for their in vitro antibacterial activity against Gram-positive bacteria (Staphylococcus aureus ATCC 25923, Enterobacter aerogenes, Enterococcus faecalis ATCC 29212) and Gram-negative bacteria (Pseudomonas aeruginosa ATCC 9027, Escherichia coli ATCC 25922, Klebsiella pneumoniae 68), and for antifungal properties against Candida albicans ATCC 885653. The agar-diffusion method was used for determination of the preliminary activity compared to well-known reference antimicrobials. All the compounds were dissolved in DMSO at a concentration of 100 μg/disk, using inhibition zone diameter (IZD, mm) as a measure of antimicrobial activity. The most active turned out to be 3 structures that inhibited several bacterial strains: 1-ethyl-3-(5-fluoro-2-(1H-tetrazol-5-yl)phenyl)urea (1), 1-(4-bromo-2-(1H-tetrazol-5-yl)-phenyl)-3-(4-(trifluoromethyl)phenyl)urea (2), and 1-(4-chloro-2-(1H-tetrazol-5-yl)phenyl)-3-(3-(trifluoromethyl)phenyl)urea (3).
IZD (mm) was 40 (Escherichia coli) and 25 (Klebsiella pneumoniae) for compound 1; 12 (Pseudomonas aeruginosa), 15 (Staphylococcus aureus), and 10 (Enterococcus faecalis) for compound 2; and 25 (Staphylococcus aureus) and 15 (Enterococcus faecalis) for compound 3. The most sensitive to the activity of the substances was the Gram-negative bacterium Pseudomonas aeruginosa, while none of the compounds affected Candida albicans. As for the reference drugs, Amikacin (30 µg/disk) showed 27 mm and Ceftazidime (30 µg/disk) 25 mm against Pseudomonas aeruginosa, which is, unfortunately, higher than the studied 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas. The obtained results will be used for further purposeful optimization of the lead compounds into more effective antimicrobials, given the ever-mounting problem of microorganism resistance.
Keywords: antimicrobial, antifungal, compounds, 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas
Procedia PDF Downloads 357
593 Evaluating Radiation Dose for Interventional Radiologists Performing Spine Procedures
Authors: Kholood A. Baron
Abstract:
While the number of radiologists specializing in spine interventional procedures in Kuwait is limited, the number of patients demanding these procedures is increasing rapidly. Due to this high demand, the workload of radiologists is increasing, which might represent a radiation exposure concern. During these procedures, the doctor’s hands are in very close proximity to the main radiation beam, if not within it. The aim of this study is to measure the radiation dose received by radiologists during several interventional procedures for the spine. Methods: Two doctors carrying different workloads were included. DR1 was performing procedures in the morning and afternoon shifts, while DR2 was performing procedures in the morning shift only. Comparing the radiation exposure that each doctor's hand receives will assess radiation safety and help to set up workload regulations for radiologists carrying a heavy schedule of such procedures. Entrance Skin Dose (ESD) was measured via TLD (thermoluminescent dosimetry) chips placed at the right wrist of the radiologists. DR1 was covering the morning shift in one hospital (Mubarak Al-Kabeer Hospital) and the afternoon shift in another hospital (Dar Alshifa Hospital). The TLD chip was placed in his gloves during the 2 shifts for a whole week. Since DR2 was covering the morning shift only, in Al Razi Hospital, he wore the TLD during the morning shift for a week. It is worth mentioning that DR1 was performing 4-5 spine procedures/day in the morning and the same number in the afternoon, and DR2 was performing 5-7 procedures/day. This procedure was repeated for 4 consecutive weeks in order to calculate the ESD value that a hand receives in a month. Results: In general, the radiation doses that the hand received in a week ranged from 0.12 to 1.12 mSv.
The ESD values for DR1 for the four consecutive weeks were 1.12, 0.32, 0.83, and 0.22 mSv; thus, for a month (4 weeks), this equals 2.49 mSv, calculated to be 27.39 mSv per year (11 months, since each radiologist has 45 days of leave per year). For DR2, the weekly ESD values were 0.43, 0.74, 0.12, and 0.61 mSv; thus, for a month, this equals 1.9 mSv, and for a year, 20.9 mSv. These values are below the standard level and well below the maximum limit of 500 mSv per year set by the ICRP (International Commission on Radiological Protection). However, it is worth mentioning that DR1 was a senior consultant and hence needed less fluoroscopy time during each procedure. This is evident from the low ESD values of the second week (0.32 mSv) and the fourth week (0.22 mSv), even though he was performing nearly 10-12 procedures a day, 5 days a week. These values were lower than or in the same range as those for DR2, who was a junior consultant. This highlights the importance of increasing radiologists' skills and awareness of the effect of fluoroscopy time. In conclusion, the radiation dose that radiologists received during spine interventional radiology in our setting was below standard dose limits.
Keywords: radiation protection, interventional radiology dosimetry, ESD measurements, radiologist radiation exposure
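The dose arithmetic above can be reproduced directly; the weekly ESD values are taken from the text, and the 11-month working year reflects the 45 days of annual leave:

```python
# Weekly ESD readings (mSv) from the TLD at the right wrist, four weeks each.
weekly_esd_dr1 = [1.12, 0.32, 0.83, 0.22]
weekly_esd_dr2 = [0.43, 0.74, 0.12, 0.61]

monthly_dr1 = sum(weekly_esd_dr1)   # 2.49 mSv per 4-week month
monthly_dr2 = sum(weekly_esd_dr2)   # 1.90 mSv per 4-week month
annual_dr1 = monthly_dr1 * 11       # about 27.39 mSv/year (11 working months)
annual_dr2 = monthly_dr2 * 11       # about 20.9 mSv/year

# ICRP annual equivalent-dose limit for the extremities of occupationally
# exposed workers, used as the reference ceiling in the text.
ICRP_EXTREMITY_LIMIT = 500.0  # mSv/year
```

Both annual estimates sit far below the 500 mSv/year extremity limit, matching the abstract's conclusion.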
Procedia PDF Downloads 56
592 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested Neural Networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e.
the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results motivate further study with higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
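To make the sensing problem concrete, a toy forward model shows how a segment piston error shapes the focal-plane PSF that such a network would receive: the PSF is the squared modulus of the Fourier transform of the pupil field. The two-segment circular pupil below is an assumed illustrative geometry, not the paper's deployable-telescope design.

```python
import numpy as np

# Circular pupil on an N x N grid, split into two half-aperture "segments".
N = 128
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (x**2 + y**2 < (N // 4)**2).astype(float)

def psf_of(phase_rad):
    """Focal-plane PSF (normalized to unit flux) for a given pupil phase map."""
    field = pupil * np.exp(1j * phase_rad)
    img = np.abs(np.fft.fft2(field))**2
    return img / img.sum()

piston = np.where(x >= 0, 0.5, 0.0) * pupil  # 0.5 rad piston on one segment
psf_ref = psf_of(np.zeros_like(pupil))       # perfectly phased reference PSF
psf_ab = psf_of(piston)                      # aberrated PSF the network would see
strehl = psf_ab.max() / psf_ref.max()        # < 1 whenever the piston is nonzero
```

A VGG-style network would then be trained to regress the piston values (here a single scalar) from images like `psf_ab`, turning the science detector into the wavefront sensor.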
Procedia PDF Downloads 103
591 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello
Abstract:
Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous reporting systems. Among them, the system of mimicking facial movements is worthy of attention. Basic emotion theorists have identified the existence of specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that would distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of having the lower half of the face covered and, therefore, not investigable because of the masks. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles: see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of electrical "discharges" of contracting muscles (facial electromyography; EMG). However, the measuring system invented by Ekman and Friesen (2002), the "Facial Action Coding System" (FACS), is the most comprehensive, complete, and versatile.
Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject wears a mask or not. We were able to identify specific alterations to the subjects’ hand movement patterns and their upper-face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between how body language changes when our facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.
Keywords: facial action coding system, COVID-19, masks, facial analysis
Procedia PDF Downloads 75
590 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways to define haplotypes need to be found. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: haplotypes were defined based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20, or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented for testing the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were not many differences in reliability between the different haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performances. This suggests that defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
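The fixed-SNP-count definition (method 2 above) can be sketched as follows: consecutive SNPs are grouped into blocks of a fixed size, and each phased chromosome then contributes one haplotype allele per block. The genotype row below is invented for illustration; real use would operate on the phased 770K data, and the length- and LD-based definitions differ only in how block boundaries are chosen.

```python
def blocks_by_snp_count(n_snps, snps_per_block):
    """Partition SNP indices 0..n_snps-1 into consecutive fixed-size blocks."""
    return [list(range(i, min(i + snps_per_block, n_snps)))
            for i in range(0, n_snps, snps_per_block)]

def haplotype_alleles(phased_row, blocks):
    """Collapse one phased haplotype (0/1 per SNP) into an allele string per block."""
    return ["".join(str(phased_row[j]) for j in block) for block in blocks]

blocks = blocks_by_snp_count(12, 5)        # [[0..4], [5..9], [10, 11]]
row = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
alleles = haplotype_alleles(row, blocks)   # ['01101', '00111', '00']
```

The distinct allele strings observed per block across the population become the predictor variables of the modified GBLUP model, replacing individual SNP genotypes.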
Procedia PDF Downloads 139