Search results for: linear density of reinforcement
573 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime
Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda
Abstract:
Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising alternative to processes such as flame spraying, owing to the formation of a diffusion layer which provides mechanical integrity, as well as to its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are utilized. Currently, the lifetime of steel components in many applications using either of these techniques individually is constrained by their respective limitations: a high COF for nitrocarburized surfaces and low wear resistance for dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface by impregnating a nitrocarburized surface with a dry lubricant. The mechanical strength and hardness of Gerster SA’s nitrocarburized surfaces, combined with the impregnation of the porous outermost layer with a solid lubricant, create a hybrid surface possessing outstanding wear resistance, a low friction coefficient, and high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through their expertise in the field, the nitrocarburizing process parameters (atmosphere, temperature, dwelling time) were optimized to obtain samples that have a distinct porous structure (in terms of size, shape, and density), as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface).
In parallel, metallic samples without nitrocarburisation were also impregnated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and the nitrocarburized surfaces, with and without the dry lubricant, were tested for their tribological behavior by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate 5x lower than the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under the application of high loads (500 mN, or ~800 MPa), the COF for the reference surface increased from ~0.1 to > 0.3 within 120 m, while the hybrid surface retained a COF < 0.2 for over 400 m of sliding. In addition, while the steel ball sliding against the reference surface showed heavy wear, the corresponding ball sliding against the hybrid surface showed very limited wear. Observations of the sliding tracks in the hybrid surface using electron microscopy show the presence of the nitrocarburized nodules as well as the lubricant, whereas no traces of the lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with the impregnation of a dry lubricant to form a hybrid surface has been demonstrated.
Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels
Procedia PDF Downloads 122
572 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (the exponential and logistic tumor growth models, and the Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model, referred to as CoMPaS, and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows different growth periods of the primary tumor and the secondary distant metastases to be calculated: 1) the ‘non-visible period’ of the primary tumor; 2) the ‘non-visible period’ of the secondary distant metastases; 3) the ‘visible period’ of the secondary distant metastases. CoMPaS is validated against clinical data on 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes its forecast using only current patient data, whereas the others rely on additional statistical data.
The CoMPaS model and predictive software: a) fit clinical trial data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have a higher average prediction accuracy than the other tools; e) can improve survival forecasts for breast cancer and facilitate the optimization of diagnostic tests. CoMPaS calculates: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (in days) for those same periods. CoMPaS enables, for the first time, the prediction of the ‘whole natural history’ of primary tumor and secondary distant metastasis growth at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor sizes. Summarizing: a) CoMPaS correctly describes primary tumor growth at the IA, IIA, IIB, and IIIB (T1-4N0M0) stages without metastases in the lymph nodes (N0); b) it facilitates understanding of the appearance period and inception of the secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
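The quantities named above, the number of doublings and the tumor volume doubling time, follow directly from the exponential growth model. Below is a minimal sketch of that arithmetic; the diameters and elapsed time are illustrative values, not the paper's clinical calibration:

```python
import math

def doublings(d0_mm: float, d1_mm: float) -> float:
    """Number of volume doublings to grow from diameter d0 to d1
    (volume scales with the cube of the diameter)."""
    return math.log2((d1_mm / d0_mm) ** 3)

def doubling_time_days(d0_mm: float, d1_mm: float, elapsed_days: float) -> float:
    """Tumor volume doubling time implied by exponential growth."""
    return elapsed_days / doublings(d0_mm, d1_mm)

# A tumor growing from 5 mm to 20 mm undergoes log2(4**3) = 6 doublings;
# if that growth took 600 days, the implied doubling time is 100 days.
print(doublings(5, 20))                 # 6.0
print(doubling_time_days(5, 20, 600))   # 100.0
```

The same relation, run in reverse from a measured doubling time, is what lets a model of this kind estimate the 'non-visible period' before a tumor reaches detectable size.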
Procedia PDF Downloads 341
571 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall
Authors: Sylvain Perrin, Thomas Saillour
Abstract:
The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA’s hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring over time at the front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and the water and sand overtopping discharges over the wall. The beach was constituted of fine sand and was approximately 50 m wide above mean sea level (MSL). Seabed slopes were in the range of 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure with an elevated curved crown wall will replace the existing concrete seawall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo / piezo-resistive pressure sensors were placed on the wall. The light-weight sediment physical model and the pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1,400 kg/m³ and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod and a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive spikes of up to 127 kilonewtons were measured.
A maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is attributable to increased wave reflection (reflection coefficient of 25.7% to 30.4% vs. 23.4% to 28.6%). This paper presents a methodology for the setup and operation of a physical model in order to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of this methodology versus others, notably regarding structural peculiarities and model effects.
Keywords: beach, impacts, scour, seawall, waves
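Measurements on a 1:18 model of this kind are converted to prototype scale via Froude similitude. The sketch below shows only the standard scale factors (lengths scale with λ, times with √λ, forces with λ³ for equal fluid densities) and ignores the extra similitude criteria that the light-weight sediment imposes:

```python
import math

LAMBDA = 18.0  # geometric scale factor (prototype / model)

def to_prototype(length_m=None, time_s=None, force_n=None):
    """Convert model-scale measurements to prototype scale under
    Froude similitude, assuming the same fluid density in model
    and prototype."""
    out = {}
    if length_m is not None:
        out["length_m"] = length_m * LAMBDA            # scales with lambda
    if time_s is not None:
        out["time_s"] = time_s * math.sqrt(LAMBDA)     # scales with sqrt(lambda)
    if force_n is not None:
        out["force_n"] = force_n * LAMBDA ** 3         # scales with lambda^3
    return out

# A 1 s model wave period corresponds to ~4.24 s at prototype scale,
# and 1 N measured on the model wall corresponds to 5832 N on the prototype.
print(to_prototype(length_m=1.0, time_s=1.0, force_n=1.0))
```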
Procedia PDF Downloads 153
570 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger
Authors: Mounir Baccar, Imen Jmal
Abstract:
Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage not only reduces the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the most attractive storage techniques is Latent Heat Thermal Energy Storage (LHTES) in Phase Change Materials (PCM), owing to its high energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air conditioning systems, taking into account the presence of natural convection. The continuity, momentum, and thermal energy equations are solved by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of the number of fins on heat transfer enhancement. It also aims to investigate the temporal evolution of PCM solidification, as well as the longitudinal temperature profiles of the HTF circulating in the duct. The present research treats two cases: the first deals with the solidification of PCM in a PCM-air heat exchanger without fins, while the second focuses on the solidification of PCM in a heat exchanger of the same type with the addition of fins (3 fins, 5 fins, and 9 fins). Without fins, stratification of the PCM from colder to hotter during the heat transfer process was noted. This behavior prevents the formation of thermo-convective cells in the PCM area and thus renders heat transfer almost purely conductive. In the presence of fins, energy extraction from the PCM to the airflow occurs at a faster rate, which contributes to a shorter discharging time and a higher outlet air (HTF) temperature.
However, for a large number of fins (9 fins), the enhancement of the solidification process is not significant, because the confinement of the PCM liquid spaces hinders the development of thermo-convective flow. Hence, it can be concluded that the effect of natural convection is not very significant for a high number of fins. In the optimum case, using 3 fins, the temperature rise of the HTF exceeds approximately 10°C during the first 30 minutes. As solidification progresses from the surfaces of the PCM container and propagates toward the central liquid phase, an insulating layer is created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between the PCM and the air. As the solid PCM layer gets thicker, the flow field in the liquid phase progressively regresses, thus inhibiting the heat extraction process. After about 2 hours, 68% of the PCM had become solid, and heat transfer was almost entirely dominated by conduction.
Keywords: heat transfer enhancement, front solidification, PCM, natural convection
Procedia PDF Downloads 188
569 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section.
The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification, with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
Procedia PDF Downloads 102
568 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed to solve such problems. Simple geometry problems could easily be solved by traditional hand analyses incorporating plasticity theories. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited, since they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses that model plastic behaviour, by contrast, give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited to solving everyday engineering problems. In recent years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are used to govern the redistribution of moments among the remaining non-yielded elements. The proposed technique obeys Nielsen’s yield criterion.
The outcome of this analysis is a simple, accurate, and fast tool for predicting the lower-bound solution of the collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
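The clip-and-redistribute idea behind the pseudo-lower-bound procedure can be sketched in a few lines. This is only an illustration of the concept: moments exceeding yield are capped and the excess is spread, here uniformly, over the elements still below yield, whereas the actual procedure governs the redistribution through Nielsen's yield criterion and equilibrium:

```python
def redistribute(moments, m_yield, tol=1e-9, max_iter=100):
    """Illustrative moment redistribution: cap over-yielded moments at
    the yield value and spread the excess uniformly over the elements
    still below yield, iterating until no moment violates the yield
    criterion. The total moment is conserved at every step."""
    m = list(moments)
    for _ in range(max_iter):
        excess = sum(x - m_yield for x in m if x > m_yield)
        if excess < tol:
            break
        under = [i for i, x in enumerate(m) if x < m_yield]
        if not under:
            break  # every element has yielded: a collapse mechanism forms
        m = [min(x, m_yield) for x in m]
        for i in under:
            m[i] += excess / len(under)
    return m

print(redistribute([12.0, 6.0, 4.0], m_yield=10.0))  # [10.0, 7.0, 5.0]
```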
Procedia PDF Downloads 179
567 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. The grid synchronization technique provides a reference phase signal synchronized with the grid voltage, bringing the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature, and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1, and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase, and frequency parameters as well as improving power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC-link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed, and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, such as PV output power, PV voltage, PV current, DC-link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and the power supplied by the voltage source converter. The results obtained from the proposed system are satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
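The three-zone logic of the variable-step P&O algorithm described above can be sketched as a single update step. The zone thresholds and step sizes below are illustrative placeholders, not the tuning used in the paper:

```python
def perturb_and_observe(v, p, v_prev, p_prev,
                        small=0.1, large=1.0, z1=2.0, z2=10.0):
    """One iteration of a variable-step P&O sketch. The |dP/dV| curve
    is split into three zones: a fine step near the MPP (zone 0, where
    the slope is flat) and larger steps far from it (zones 1 and 2).
    Returns the next reference voltage."""
    dv = v - v_prev
    dp = p - p_prev
    slope = dp / dv if dv != 0 else 0.0
    mag = abs(slope)
    if mag < z1:          # zone 0: close to the MPP, fine step
        step = small
    elif mag < z2:        # zone 1: intermediate step
        step = large / 2
    else:                 # zone 2: far from the MPP, large step
        step = large
    # climb the P-V curve: keep moving in the direction that raised power
    direction = 1.0 if slope > 0 else -1.0
    return v + direction * step

# Far from the MPP (steep slope), the reference voltage moves in 1 V steps:
print(perturb_and_observe(v=20.0, p=200.0, v_prev=19.0, p_prev=180.0))  # 21.0
```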
Procedia PDF Downloads 594
566 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience of physical space, with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed by computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflects the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. This paper therefore emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
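The weighting step can be illustrated on a toy visibility graph. A plain VGA connectivity measure counts the vertices visible from each vertex; the saliency-weighted variant, a sketch of the approach described above, sums their normalized saliency values instead:

```python
def weighted_connectivity(visibility, saliency):
    """Sketch of saliency-weighted visibility analysis. 'visibility[i]'
    lists the vertices visible from vertex i, and 'saliency[j]' is the
    normalized saliency weight of vertex j (here invented values; in
    the paper they come from saliency maps of spherical panoramas).
    Plain VGA connectivity counts visible vertices; the weighted
    variant emphasizes what is likely to be seen over what can be seen."""
    plain = {i: len(vs) for i, vs in visibility.items()}
    weighted = {i: sum(saliency[j] for j in vs) for i, vs in visibility.items()}
    return plain, weighted

vis = {0: [1, 2], 1: [0, 2], 2: [0, 1]}      # who sees whom
sal = {0: 0.9, 1: 0.1, 2: 0.5}               # hypothetical saliency weights
plain, weighted = weighted_connectivity(vis, sal)
# Vertex 0 sees two vertices, but they carry little combined saliency:
print(plain[0], weighted[0])  # 2 0.6
```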
Procedia PDF Downloads 77
565 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions
Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin
Abstract:
In most cases, typical cysts are easily recognized at ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, all of the following features are required to diagnose a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. We therefore tried to develop an automatic method for the differentiation of cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gradations of gray (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first, the region of interest (the lesion contour) was located and selected. Selection of this region was carried out using a sigmoid filter whose threshold was calculated from the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points with the highest brightness gradient. In the second step, the selected region was assigned to one of the lesion groups based on the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, the average brightness gradient, etc. To determine the decision criterion for membership in one of the lesion groups (cystic or solid), a training set of these brightness-distribution characteristics was compiled separately for benign and malignant lesions.
To test our approach, we used a set of 217 ultrasonic images of 107 cystic lesions (including 53 atypical ones, difficult to differentiate by eye) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all 107 (100%) typical cysts, 107 of 110 (97.3%) solid lesions, and 50 of 53 (94.3%) atypical cysts. By contrast, with the naked eye it was possible to identify correctly all 107 (100%) typical cysts, 96 of 110 (87.3%) solid lesions, and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and for hypoechoic solid lesions with a clear margin. These data may have clinical significance.
Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography
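The first-step segmentation described above can be sketched as follows. The quantile choice and slope parameter are illustrative assumptions, and the sketch omits the paper's additional threshold correction based on the highest-gradient points:

```python
import math

def empirical_quantile(pixels, q):
    """Brightness value below which a fraction q of the pixels fall
    (a point on the empirical distribution function)."""
    s = sorted(pixels)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def sigmoid_filter(pixels, q=0.5, beta=0.1):
    """Sigmoid contrast filter whose threshold is taken from the
    empirical distribution function of image brightness. q and beta
    are illustrative parameters, not the paper's values."""
    t = empirical_quantile(pixels, q)
    return [1.0 / (1.0 + math.exp(-beta * (p - t))) for p in pixels]

pixels = [10, 20, 30, 200, 220, 240]   # toy brightness values
out = sigmoid_filter(pixels)
# Values well below the threshold map near 0; well above, near 1.
print(out[0] < 0.1 < 0.9 < out[-1])  # True
```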
Procedia PDF Downloads 272
564 Provisional Settlements and Urban Resilience: The Transformation of Refugee Camps into Cities
Authors: Hind Alshoubaki
Abstract:
The world is now confronting a widespread urban phenomenon: refugee camps, which have mostly been established in a rush, aiming to afford refugees temporary settlements that provide minimum levels of safety, security, and protection from harsh weather conditions within a very short time period. In fact, these emergency settlements are turning into permanent ones, since time is a decisive factor in terms of construction and a camp's age. Time plays an essential role in transforming their temporary character into a permanent one, generating deep modifications to the city's territorial structure, shaping a new identity, and creating a contentious change in the city's form and history. To achieve a better understanding of the transformation of refugee camps, this study is based on a mixed-methods approach: the qualitative approach explores different refugee camps and analyzes their transformation process in terms of population density and the changes to the city's territorial structure and urban features. The quantitative approach employs a statistical regression analysis as a reliable predictor of refugees' satisfaction within the Zaatari camp, in order to predict its future transformation. Refugees' perceptions of their current conditions affect their satisfaction, which in turn plays an essential role in transforming emergency settlements into permanent cities over time. The analysis covers five main themes: the access and readiness of schools, the dispersion of clinics and shopping centers, the camp infrastructure, the construction materials, and the street networks. The statistical analysis showed that Syrian refugees were not satisfied with their current conditions inside the Zaatari refugee camp and that they had started implementing changes according to their needs, desires, and aspirations, because they are conscious of the fact that their stay in this settlement will be prolonged.
The case study analyses also showed that neglecting the fact that construction takes time leads to settlements being created below minimum standards; these deteriorate into ‘slums,’ with increased crime rates, suicide, drug use, and disease, and deeply affect cities' urban tissue. For this reason, recognizing the ‘temporary-eternal’ character of these settlements is fundamental: refugee camps should be considered from the beginning as permanent cities. This is the key to minimizing the trauma of displacement for both the refugees and the hosting countries, since providing emergency settlements within a short time period does not mean using temporary materials, adopting a provisional character, or creating ‘makeshift cities.’
Keywords: refugee, refugee camp, temporary, Zaatari
Procedia PDF Downloads 134
563 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River
Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán
Abstract:
Human dependence on plastic products has led to global pollution by plastic particles ranging in size from 0.001 to 5 millimeters, called microplastics (hereafter, MPs). The abundance of microplastics is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river. That is, the abundance of microplastics will be underestimated if the sediments analyzed come from places where the river flows with high energy, and overestimated if the sediment comes from places where the river flows with low energy. This bias can produce an error greater than 300% of the MPs value reported for the same river, and it should increase when comparisons are made between two rivers with different characteristics. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is under-represented when the sediment analyzed is sand, and over-represented if the sediment analyzed is silt or clay. The present investigation establishes a protocol that incorporates sample granulometry to calibrate MP quantification and eliminate this over- or under-representation bias (hereafter, granulometric bias). A total of 30 samples were collected, five within each of six work zones. The slope at the sampling points was less than 8 degrees, classified as low-slope areas according to the Van Zuidam slope classification. During sampling, blanks were used to estimate possible contamination by MPs. The samples were dried at 60 degrees Celsius for three days.
A flotation technique was employed to isolate the MPs using sodium metatungstate solution with a density of 2 g/mL. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with Rose Bengal at a concentration of 200 mg/L and subsequently dried in an oven at 60 degrees Celsius for 1 hour, then identified and photographed under a stereomicroscope with the following settings: eyepiece magnification 10x, zoom magnification (zoom knob) 4x, objective lens magnification 0.35x, for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size of the 30 samples was determined using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm, and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present. At the same time, the weight of sediment in each fraction was measured, revealing a clear pattern: as the amount of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed.
Keywords: microplastics, pollution, sediments, Tena River
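One simple way to express the granulometric standardization described above is to normalize MP counts per gram of sediment within each grain-size fraction rather than over the bulk sample. The function and the fraction labels below are illustrative assumptions, not the study's exact calibration:

```python
def abundance_per_fraction(mp_counts, sediment_g):
    """Microplastic abundance (particles per gram of sediment) within
    each grain-size fraction. Normalizing per fraction, rather than per
    bulk sample, avoids under-representing sandy (high-energy) sites
    and over-representing silty (low-energy) ones. Illustrative sketch
    only; the study's protocol may calibrate differently."""
    return {frac: mp_counts[frac] / sediment_g[frac]
            for frac in mp_counts if sediment_g[frac] > 0}

# Hypothetical counts and fraction weights for one sandy sample:
counts = {">2mm": 1, "125-250um": 8, "<63um": 40}
weights_g = {">2mm": 50.0, "125-250um": 20.0, "<63um": 10.0}
print(abundance_per_fraction(counts, weights_g))
# {'>2mm': 0.02, '125-250um': 0.4, '<63um': 4.0}
```

Comparing two rivers on a per-fraction basis (for instance, the < 63 µm fraction alone) removes the dependence on how much sand each sampling site happened to accumulate.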
Procedia PDF Downloads 73

562 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation.
The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution improves on the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
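The core idea of one-class detection on frequency features can be illustrated with a toy sketch: a coarse boundary is learned from normal-condition data only, and points outside it are flagged. The per-feature mean ± 3·std box below is a stand-in assumption for the OCC/OCCNN2 coarse-plus-fine pipeline, and the nominal frequencies are invented for illustration:

```python
import random, statistics

# Minimal sketch of one-class anomaly detection: a coarse boundary is
# estimated from normal-condition training points only (a per-feature
# mean +/- 3*std box), and test points outside the box are anomalies.
# This toy boundary stands in for the coarse step of OCCNN2.

random.seed(0)
NOMINAL = [4.0, 5.2, 9.8, 10.5]            # four "fundamental frequencies", Hz
train = [[random.gauss(f, 0.05) for f in NOMINAL] for _ in range(200)]

mu = [statistics.mean(col) for col in zip(*train)]
sigma = [statistics.stdev(col) for col in zip(*train)]

def is_anomaly(x, k=3.0):
    return any(abs(xi - m) > k * s for xi, m, s in zip(x, mu, sigma))

print(is_anomaly(NOMINAL))                  # healthy frequencies -> False
print(is_anomaly([3.5, 5.2, 9.8, 10.5]))    # shifted first mode -> True
```

In the paper's fine step, a neural network would refine this crude box into a tighter boundary of the normal-operating region.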
Procedia PDF Downloads 124

561 Composition, Velocity, and Mass of Projectiles Generated from a Chain Shot Event
Authors: Eric Shannon, Mark J. McGuire, John P. Parmigiani
Abstract:
A hazard associated with the use of timber harvesters is chain shot. Harvester saw chain is subjected to large dynamic mechanical stresses, which can cause it to fracture. The resulting open loop of saw chain can fracture a second time and create a projectile consisting of several saw-chain links, referred to as a chain shot. Its high kinetic energy enables it to penetrate operator enclosures, making it a significant hazard. Accurate data on projectile composition, mass, and speed are needed for the design of both operator enclosures resistant to projectile penetration and saw chain resistant to fracture. The work presented here contributes to providing this data through the use of a test machine designed and built at Oregon State University. The machine's enclosure is a standard shipping container. To safely contain any anticipated chain shot, the container was lined with both 9.5 mm AR500 steel plates and 50 mm high-density polyethylene (HDPE). During normal operation, projectiles are captured virtually undamaged in the HDPE, enabling subsequent analysis. Standard harvester components are used for bar mounting and chain tensioning. Standard guide bars and saw chains are used. An electric motor with a flywheel drives the system. Testing procedures follow ISO Standard 11837. Chain speed at break was approximately 45.5 m/s. Data was collected using both a 75 cm solid bar (Oregon 752HSFB149) and a 90 cm solid bar (Oregon 902HSFB149). Saw chains used were 89-drive-link .404”-18HX loops made from factory spools. Standard 16-tooth sprockets were used. Projectile speed was measured using both a high-speed camera and a chronograph. Both rotational and translational kinetic energy are calculated. For this study, 50 chain shot events were executed. Results showed that projectiles consisted of a variety of combinations of drive links, tie straps, and cutter links.
Most common (occurring in 60% of the events) was a drive-link / tie-strap / drive-link combination with a mass of approximately 10.33 g. Projectile mass varied from a minimum of 2.99 g, corresponding to a drive link only, to a maximum of 18.91 g, corresponding to a drive-link / tie-strap / drive-link / cutter-link / drive-link combination. Projectile translational speed was measured to be approximately 270 m/s and rotational speed approximately 14000 r/s. The calculated translational and rotational kinetic energy magnitudes each average over 600 J. This study provides useful information for both timber harvester manufacturers and saw chain manufacturers to design products that reduce the hazards associated with timber harvesting.
Keywords: chain shot, timber harvesters, safety, testing
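As a back-of-the-envelope check using the figures reported above, the translational kinetic energy of the most common projectile can be computed directly; the rotational term needs the (unstated) moment of inertia, so only the translational part is sketched here, and the reported 600 J averages also reflect heavier projectiles and varying speeds:

```python
# Translational kinetic energy of the most common chain-shot projectile,
# using the abstract's reported figures. KE = 1/2 * m * v^2.

mass_kg = 10.33e-3   # drive-link / tie-strap / drive-link combination
v = 270.0            # measured translational speed, m/s

ke_trans = 0.5 * mass_kg * v**2
print(round(ke_trans, 1))  # ~376.5 J for this particular projectile
```

Even this single term is far above the roughly 80 J often cited as a lethal impact energy threshold in ballistics contexts, underscoring the hazard.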
Procedia PDF Downloads 147

560 Digital Survey to Detect Factors That Determine Successful Implementation of Cooperative Learning in Physical Education
Authors: Carolin Schulze
Abstract:
Characterized by a positive interdependence of learners, cooperative learning (CL) is one way of successfully dealing with the increasing heterogeneity of students. Various positive effects of CL on the mental, physical, and social health of students have already been documented. However, this structure is still rarely used in physical education (PE). Moreover, there is a lack of information about the factors that determine the successful implementation of CL in PE. Therefore, the objective of the current study was to identify factors that determine the successful implementation of CL in PE using a digital questionnaire conducted from November to December 2022. In addition to socio-demographic data (age, gender, teaching experience, and education level), the frequency of using CL, implementation strategies (theory-led, student-centred), and positive and negative effects of CL were measured. Furthermore, teachers were asked to rate the success of implementation on a 6-point rating scale (1 = very successful to 6 = not successful at all). For statistical analysis, multiple linear regression was performed, with the success of implementation as the dependent variable. A total of 224 teachers (mean age 44.81±10.60 years; 58% male) took part in the current study. Overall, 39% of participants stated that they never use CL in their PE classes. The main reasons cited against implementing CL in PE were lack of time for preparation (74%) or implementation (61%) and the high heterogeneity of students (55%). When using CL, most of the reported difficulties were related to uncertainty about the correct procedure (54%) and the heterogeneous performance of students (54%). The most frequently mentioned positive effect was increased motivation of students (42%), followed by an improvement in psychological attributes (e.g., self-esteem, self-concept; 36%) and improved class cohesion (31%).
Reported negative effects were unpredictability (29%), restlessness (24%), confusion (24%), and conflicts between students (17%). The successful use of CL is related to theory-based preparation (e.g., heterogeneous formation of groups, use of rules and rituals) and a flexible implementation tailored to the needs and conditions of students (e.g., the possibility of individual work, omission of CL phases). Compared to teachers who implemented CL solely theory-led or solely student-adapted, teachers who switched from theory-led preparation to student-centred implementation reported more successful implementation (t=5.312; p<.001). Neither the frequency of using CL in PE nor the teacher's gender, age, teaching experience, or education level showed a significant association with the successful use of CL. Based on these results, it is advisable for teachers to acquire sufficient knowledge about CL during their training and to adapt the learning structure to the diversity of their students. To analyse teachers' implementation strategies in more depth, qualitative methods and guided interviews with teachers are needed.
Keywords: diversity, educational technology, physical education, teaching styles
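The study's regression design, success rating regressed on implementation strategy, can be sketched with a tiny synthetic example. The data, the binary coding of strategy, and the pure-Python least-squares fit below are all illustrative assumptions, not the survey data:

```python
# Illustrative sketch of the analysis design: regressing implementation
# success (1 = very successful ... 6 = not successful at all) on an
# implementation-strategy predictor. Synthetic data only.

def fit_simple_ols(x, y):
    """Ordinary least squares for a single predictor: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical coding: 0 = single-strategy teacher, 1 = teacher who
# switched from theory-led preparation to student-centred implementation.
strategy = [0, 0, 0, 0, 1, 1, 1, 1]
success  = [4, 5, 4, 5, 2, 1, 2, 1]   # lower rating = more successful
a, b = fit_simple_ols(strategy, success)
print(a, b)  # intercept 4.5, slope -3.0: switchers rate implementation better
```

With a binary predictor, the slope is simply the difference in group means, which mirrors the t-test comparison reported above.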
Procedia PDF Downloads 81

559 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator
Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase
Abstract:
In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. Therefore, it is reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using is-TPG consisted of two non-linear crystals for the emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into the LiNbO₃ crystal, satisfying the noncollinear phase-matching condition, in order to generate a high-power THz wave. The emitted THz wave irradiated the sample, which was raster-scanned by an x-z stage while the frequency was varied, and we obtained multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam based on nonlinear optical parametric effects, wherein the detection beam intensity was measured using an infrared pyroelectric detector. It was initially difficult to identify reagents in a cardboard box because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification, so that the intensity of the near-infrared detection beam is converted correctly to the intensity of the THz wave. A Gaussian spatial filter is also introduced for a clearer THz image.
Through these improvements to the analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides and may also be applied to cases where illicit drugs are hidden in a box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging can be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging
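The Gaussian spatial filtering step used to clean the raster-scanned THz image can be sketched as a small separable blur over a 2-D intensity map. This is a generic illustration (in practice one would use, e.g., scipy.ndimage.gaussian_filter), not the authors' code:

```python
import math

# Sketch of a separable Gaussian spatial filter applied to a 2-D THz
# intensity map to suppress pixel noise. Pure-Python illustration.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalized 1-D kernel

def blur_1d(row, kernel, radius):
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the edges
            acc += w * row[idx]
        out.append(acc)
    return out

def gaussian_blur(image, sigma=1.0, radius=2):
    kernel = gaussian_kernel(sigma, radius)
    rows = [blur_1d(r, kernel, radius) for r in image]        # horizontal pass
    cols = [blur_1d(list(c), kernel, radius) for c in zip(*rows)]  # vertical pass
    return [list(r) for r in zip(*cols)]

noisy = [[0.0] * 5 for _ in range(5)]
noisy[2][2] = 1.0                 # a single bright "pixel" (noise spike)
smooth = gaussian_blur(noisy)
print(smooth[2][2] < 1.0)         # the spike is spread out -> True
```

Isolated noise spikes are thus suppressed relative to extended absorption features, which is what makes the reagent fingerprints easier to read in the multispectral images.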
Procedia PDF Downloads 178

558 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid urbanization of China, its environmental protection is severely tested. Analyzing and optimizing the landscape pattern is therefore an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively, mainly through conversion into construction land. At the landscape level, the landscape indices all showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the NP, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the distribution statistics of core area (CORE_AM) decreased. For farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI decreased, yet patch fragmentation and CORE_AM increased.
At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline. The three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution. The ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is proposed for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
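Among the landscape-level metrics above, Shannon's diversity index (SHDI) has a simple closed form, SHDI = -Σ pᵢ ln pᵢ over the proportions pᵢ of landscape area in each patch type. The proportions below are illustrative, not the Wuhan data:

```python
import math

# Shannon's diversity index (SHDI), one of the landscape-level metrics
# computed by Fragstats: SHDI = -sum(p_i * ln(p_i)). Example proportions
# are illustrative only.

def shdi(proportions):
    assert abs(sum(proportions) - 1.0) < 1e-9  # must partition the landscape
    return -sum(p * math.log(p) for p in proportions if p > 0)

before = [0.25, 0.20, 0.20, 0.15, 0.10, 0.10]   # more even land-cover mosaic
after  = [0.70, 0.10, 0.08, 0.06, 0.04, 0.02]   # construction-dominated

print(round(shdi(before), 3))   # higher diversity
print(round(shdi(after), 3))    # lower diversity, as after urbanization
```

A landscape increasingly dominated by one class (here, construction land) drives SHDI down, matching the declining trend reported above.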
Procedia PDF Downloads 167

557 Trends in Preoperative Self-Disclosure of Cannabis Use in Adult and Adolescent Orthopedic Surgical Patients: An Institutional Retrospective Study
Authors: Spencer Liu, William Chan, Marlena Komatz, Tommy Ramos, Mark Trentalange, Faye Rim, Dae Kim, Mary Kelly, Samuel Schuessler, Roberta Stack, Justas Lauzadis, Kathryn DelPizzo, Seth Waldman, Alexandra Sideris
Abstract:
Background & Significance: The increasing prevalence of cannabis use in the United States has important safety implications in the perioperative setting, as chronic or heavy preoperative cannabis use may increase the risk of intraoperative complications, postoperative nausea and vomiting (PONV), increased postoperative pain levels, and acute side effects associated with cannabis use cessation. In this retrospective chart review study, we sought to determine the prevalence of self-reported cannabis use over the past five years at a single institution in New York City. We hypothesized that the prevalence of preoperative self-reported cannabis use among adult and adolescent patients undergoing orthopedic surgery is increasing. Methods: After IRB approval for this retrospective study, surgical cases performed on patients 12 years of age and older at the hospital's main campus and two ambulatory surgery centers between January 1st, 2018, and December 31st, 2023, with preoperatively self-disclosed cannabis use entered in the social history intake form were identified using the SlicerDicer tool in Epic. Case and patient characteristics were extracted, and trends in utilization over time were assessed by the Cochran-Armitage trend test. Results: Overall, the prevalence of self-reported cannabis use increased from 6.6% in 2018 to 10.6% in 2023. By age group, the prevalence of self-reported cannabis use among adolescents remained consistently low (2018: 2.6%, 2023: 2.6%) but increased, with significant evidence for a linear trend (p < 0.05), within every adult age group. Among adults, patients who were 18-24 years old (2018: 18%, 2023: 20.5%) and 25-34 years old (2018: 15.9%, 2023: 24.2%) had the highest prevalences of disclosure, whereas patients who were 75 years of age or older had the lowest prevalence of disclosure (2018: 1.9%, 2023: 4.6%).
Patients who were 25-34 years old had the highest absolute difference in disclosure rates, 8.3 percentage points, corresponding to a 52.2% relative increase from 2018 to 2023. The adult age group with the highest relative change was patients 75 years of age or older, with a difference of 2.7 percentage points, corresponding to a 142.1% increase from 2018 to 2023. Conclusions: These trends in preoperative self-reported cannabis use among patients undergoing orthopedic surgery have important implications for perioperative care and clinical outcomes. Efforts are underway to refine and standardize cannabis use data capture at our institution.
Keywords: orthopedic surgery, cannabis, postoperative pain, postoperative nausea
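The Cochran-Armitage trend test used above checks for a linear trend in proportions across ordered categories (here, years). A minimal sketch of the standard statistic, applied to hypothetical yearly counts rather than the study data:

```python
import math

# Cochran-Armitage test for linear trend in proportions. Returns the Z
# statistic (no continuity correction); |Z| > 1.96 ~ p < 0.05, two-sided.

def cochran_armitage_z(successes, totals, scores=None):
    """successes[i] of totals[i] at ordered score[i] (default 0, 1, 2, ...)."""
    k = len(successes)
    if scores is None:
        scores = list(range(k))
    N = sum(totals)
    p = sum(successes) / N                     # pooled proportion
    t_bar = sum(n * t for n, t in zip(totals, scores)) / N
    num = sum(t * (r - n * p) for t, r, n in zip(scores, successes, totals))
    var = p * (1 - p) * sum(n * (t - t_bar) ** 2 for n, t in zip(totals, scores))
    return num / math.sqrt(var)

# Hypothetical yearly disclosure counts, 2018-2023 (not the study data):
disclosed = [66, 75, 82, 90, 98, 106]
seen = [1000] * 6
z = cochran_armitage_z(disclosed, seen)
print(round(z, 2))  # clearly positive: significant increasing trend
```

Unlike a plain chi-square test of independence, the test spends its one degree of freedom on the ordered trend, giving more power to detect the steady year-over-year rise reported above.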
Procedia PDF Downloads 47

556 Oxidative Damage to Lipids, Proteins, and DNA during Differentiation of Mesenchymal Stem Cells Derived from Umbilical Cord into Biologically Active Hepatocytes
Authors: Abdolamir Allameh, Shahnaz Esmaeili, Mina Allameh, Safoura Khajeniazi
Abstract:
Stem cells with therapeutic applications can be isolated from human placenta/umbilical cord blood (UCB) as well as from the cord tissue (UC). Stem cells in culture are vulnerable to oxidative stress, particularly when subjected to a differentiation process. The aim of this study was to examine the changes in the rate of oxidation of cellular macromolecules during hepatic differentiation of mesenchymal stem cells (MSCs). In addition, the impact of the hepatic differentiation process on the cellular and biological activity of the MSCs was assessed. For this purpose, mononuclear cells (MNCs) were first isolated from human UCB obtained from a healthy full-term infant. The cells were cultured at a density of 3×10⁵ cells/cm² in DMEM low-glucose culture medium supplemented with 20% FBS, 2 mM L-glutamine, 100 μg/ml streptomycin, and 100 U/ml penicillin. Cell cultures were then incubated at 37°C in a humidified 5% CO₂ incubator. After removing non-adherent cells by replacing the culture medium, fibroblast-like adherent cells were resuspended in 0.25% trypsin-EDTA and plated in 25 cm² flasks (1×10⁴/ml). Characterization of the MSCs was routinely done by observing their morphology and growth curve. MSCs were subjected to a 2-step hepatocyte differentiation protocol in the presence of hepatocyte growth factor (HGF), dexamethasone (DEX), and oncostatin M (OSM). The hepatocyte-like cells derived from MSCs were checked weekly for 3 weeks for changes in lipid peroxidation, protein carbonyl formation, and DNA oxidation, i.e., by the 8-hydroxy-2'-deoxyguanosine (8-OH-dG) assay. During the 3-week differentiation of MSCs into hepatocyte-like cells, we found that the expression of liver-specific markers, such as albumin, was associated with increased levels of lipid peroxidation and protein carbonyl formation, whereas undifferentiated MSCs had relatively low levels of lipid peroxidation products.
There was a significant increase (p < 0.05) in lipid peroxidation products in hepatocytes on days 7, 14, and 21 of differentiation. Likewise, the level of protein carbonyls in the cells was elevated during the differentiation. The level of protein carbonyls measured in hepatocyte-like cells obtained 3 weeks after differentiation induction was estimated to be ~6-fold higher than in cells recovered on day 7 of differentiation. On the contrary, there was a small but significant decrease in the DNA damage marker (8-OH-dG) in hepatocytes recovered 3 weeks after differentiation onset. The level of 8-OH-dG was consistent with the formation of reactive oxygen species (ROS). In conclusion, these data suggest that despite the elevated oxidation of lipid and protein molecules during hepatocyte development, the cells were normal in terms of DNA integrity, morphology, and biological activity.
Keywords: adult stem cells, DNA integrity, free radicals, hepatic differentiation
Procedia PDF Downloads 150

555 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers
Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal
Abstract:
Background: We aim to develop an integrated device comprising a single-probe EEG sensor and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green, and blue), the CCD sensor depicts the movement pattern of subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the subjects (3-5-year-old pre-schoolers). During the painting of the three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves was found to be correlated with relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., moving the variously colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test
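The attention marker described above is the spectral power of the EEG beta band (roughly 13-30 Hz). As an illustrative sketch only, a plain DFT over a synthetic trace shows the band-power idea; the real device would use a proper spectral-density estimator (e.g., Welch's method), and the band edges here are conventional assumptions:

```python
import math

# Sketch of beta-band (13-30 Hz) power extraction from a single-channel
# EEG trace, the attention marker used by the device. Plain DFT on a
# synthetic signal; a real system would use Welch-style PSD estimation.

fs = 256                       # sampling rate, Hz
n = fs * 2                     # 2-second window
t = [i / fs for i in range(n)]
# synthetic EEG: strong 20 Hz (beta) plus weaker 5 Hz (theta) component
sig = [1.0 * math.sin(2 * math.pi * 20 * ti) +
       0.3 * math.sin(2 * math.pi * 5 * ti) for ti in t]

def band_power(x, fs, lo, hi):
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            total += (re * re + im * im) / n
    return total

beta = band_power(sig, fs, 13.0, 30.0)
theta = band_power(sig, fs, 4.0, 8.0)
print(beta > theta)  # True: beta dominates this synthetic trace
```

Comparing band powers in this way is what lets a single frontal electrode distinguish an attentive state (beta-dominated) from a relaxed one (delta/theta-dominated).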
Procedia PDF Downloads 99

554 Effects of Potential Chloride-Free Admixtures on Selected Mechanical Properties of Kenya Clay-Based Cement Mortars
Authors: Joseph Mwiti Marangu, Joseph Karanja Thiong'o, Jackson Muthengia Wachira
Abstract:
The mechanical performance of hydrated cement mortars depends mainly on their compressive strength and setting time. These properties are crucial in the construction industry. Pozzolana-based cements are mostly characterized by low 28-day compressive strength and long setting times. These are some of the major impediments to their production and diverse use, despite the numerous technological and environmental benefits associated with them. The study investigated the effects of potential chemical activators on calcined clay-Portland cement blends, with the aim of achieving high early compressive strength and shorter setting times in cement mortar. In addition, the standard consistency, soundness, and insoluble residue of all cement categories were determined. The test cement was made by blending calcined clays with Ordinary Portland Cement (OPC) at replacement levels from 35 to 50 percent by mass of the OPC, and is labeled PCC for the purposes of this study. Mortar prisms measuring 40 mm x 40 mm x 160 mm were prepared and cured in accordance with the KS EAS 148-3:2000 standard. Solutions of Na₂SO₄, NaOH, Na₂SiO₃, and Na₂CO₃ at concentrations of 0.5-2.5 M were separately added during casting. Compressive strength was determined on the 2nd, 7th, 28th, and 90th days of curing. For comparison purposes, commercial Portland Pozzolana Cement (PPC) and Ordinary Portland Cement (OPC) were also investigated without activators under similar conditions. X-Ray Fluorescence (XRF) was used for chemical analysis, while X-Ray Diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FTIR) were used for mineralogical analysis of the test samples. The results indicated that the addition of activators significantly increased the 2nd- and 7th-day compressive strength but produced only a minimal increase in the 28th- and 90th-day compressive strength. A relatively linear relationship was observed between compressive strength and the concentration of the activator solutions up to the 28th day of curing.
The addition of the said activators significantly reduced both the initial and final setting times. Standard consistency and soundness varied with the amount of clay in the test cement and the concentration of activators. The amount of insoluble residue increased with increasing replacement of OPC with calcined clays. Mineralogical studies showed that N-A-S-H is formed in addition to C-S-H. In conclusion, a concentration of 2 M for all activator solutions produced the optimum compressive strength and greatly reduced the setting times for all cement mortars.
Keywords: activators, admixture, cement, clay, pozzolana
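For readers reproducing the activator solutions, the mass of salt per litre follows directly from molarity times molar mass. The molar masses below are standard values; the 2 M concentration is the optimum reported above:

```python
# Grams of activator salt per litre of solution: mass = c (mol/L) * M (g/mol).
# Molar masses are standard textbook values.

molar_mass = {"Na2SO4": 142.04, "NaOH": 40.00,
              "Na2CO3": 105.99, "Na2SiO3": 122.06}

def grams_per_litre(salt, molarity):
    return molarity * molar_mass[salt]

print(grams_per_litre("NaOH", 2.0))  # 80.0 g/L for the optimum 2 M solution
```

The same function covers the whole 0.5-2.5 M range studied by simply varying the molarity argument.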
Procedia PDF Downloads 265

553 Dengue Prevention and Control in Kaohsiung City
Authors: Chiu-Wen Chang, I-Yun Chang, Wei-Ting Chen, Hui-Ping Ho, Ruei-Hun Chang, Joh-Jong Huang
Abstract:
Kaohsiung City is located in a tropical region where both Aedes aegypti and Aedes albopictus are distributed; once the virus invades, it can easily trigger a local epidemic. Moreover, Kaohsiung City has a world-class airport and harbor, and its trade and tourism ties with other countries, especially the Southeast Asian countries that also suffer from dengue, are close and frequent. Therefore, Kaohsiung City faces a difficult dengue challenge every year. The objective of this study was to enhance dengue clinical care, border management, and vector surveillance in Kaohsiung City by establishing larger-scale, innovative, and more coordinated dengue prevention and control strategies in 2016, including (1) integrated medical programs: 657 contract medical institutions, NS1 rapid tests widely deployed in clinics, an enhanced triage and referral system, and daily monitoring and management of dengue cases; (2) border quarantine: comprehensive NS1 screening of foreign workers and fishery workers upon entry, hospitalization and isolation of suspected cases, and health education for high-risk groups (foreign students, other tourists); (3) mosquito control: widespread use of Gravitraps to monitor mosquito density in the environment, and use of NS1 rapid screening tests to detect dengue virus in the community; (4) health education: a dengue app allowing people to immediately query the risk map and nearby medical resources, routine health education in all districts to strengthen the public's dengue knowledge, and a neighborhood cleaning awards program. The results showed that after the new integrated dengue prevention and control strategies were fully implemented in Kaohsiung City, the number of confirmed cases in 2016 declined to 342, the majority of which were a continuation of the 2015 epidemic; in fact, only two cases were confirmed after the summer of 2016. In addition, the dengue mortality rate successfully decreased to 0% in 2016.
Moreover, between 2014 and 2016, the proportion of cases reported by medical centers dropped from 27.07% to 19.45%, and that reported by regional hospitals decreased from 36.55% to 29.79%; in contrast, the reporting rate of district hospitals increased from 11.88% to 15.87%, and that of general practice clinics from 24.51% to 34.89%. This shows that, with strengthened medical management, notifications shifted from medical centers toward general clinics, achieving a substantial effect on dengue clinical management and dengue control.
Keywords: dengue control, integrated control strategies, clinical management, NS1
Procedia PDF Downloads 271

552 Multicollinearity and MRA in Sustainability: Application of the Raise Regression
Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez
Abstract:
Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one independent variable is moderated by a second independent variable, by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the interaction term is often highly correlated with its constitutive terms, so severe multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: the variances of the estimators may be inflated; regressors may appear non-significant even when accompanied by a very high coefficient of determination; coefficients may show incorrect signs; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among the explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Transferred to moderated analysis, these consequences may imply that it is not worth including an interaction term that distorts the model. It is therefore important to manage the problem with a methodology that yields reliable results. A review of the works that applied MRA in the ten top journals of the field makes clear that multicollinearity is mostly disregarded: fewer than 15% of the reviewed works take potential multicollinearity problems into account. To overcome this issue, this work studies the possible application of recent methodologies to MRA. In particular, raise regression is analyzed.
This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the two variables, the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are also maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test, and prediction). The proposal is applied to data from countries of the European Union for the most recent year available, covering greenhouse gas emissions, per capita GDP, and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called "subgroup regression analysis." The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. In particular, the use of raise regression mitigates severe multicollinearity problems, so the researcher can rely on the interaction term when interpreting the results of a particular study.
Keywords: multicollinearity, MRA, interaction, raise
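The geometric idea behind raise regression can be illustrated numerically: one of two collinear regressors is "raised" away from the other along its residual direction, x₂_raised = x₂ + λ·e, where e is the residual of regressing x₂ on x₁. The toy data and the λ value below are illustrative assumptions, not the EU dataset or the authors' chosen raising factor:

```python
# Numerical sketch of raise regression's separation step: because the
# residual e is orthogonal to x1, adding lambda*e to x2 lowers the
# correlation between the regressors while preserving the information
# in x2 that is orthogonal to x1. Toy data; lambda is arbitrary.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    ca = [x - ma for x in a]
    cb = [x - mb for x in b]
    return dot(ca, cb) / (dot(ca, ca) ** 0.5 * dot(cb, cb) ** 0.5)

def ols_residuals(x, y):
    """Residuals of the simple regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1, 6.2]      # nearly collinear with x1

e = ols_residuals(x1, x2)
lam = 5.0
x2_raised = [v + lam * r for v, r in zip(x2, e)]

print(round(corr(x1, x2), 3))            # near 1: severe collinearity
print(round(corr(x1, x2_raised), 3))     # noticeably lower after raising
```

Because the covariance of x₁ with the raised variable is unchanged while its variance grows, the correlation necessarily decreases, which is exactly the separation the paper exploits.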
Procedia PDF Downloads 107
551 Comparative Analysis of the Expansion Rate and Soil Erodibility Factor (K) of Some Gullies in Nnewi and Nnobi, Anambra State, Southeastern Nigeria
Authors: Nzereogu Stella Kosi, Igwe Ogbonnaya, Emeh Chukwuebuka Odinaka
Abstract:
A comparative analysis of the expansion rate and soil erodibility of some gullies in Nnewi and Nnobi, both within the Nanka Formation, was carried out. The study integrated field observations, geotechnical analysis, slope stability analysis, multivariate statistical analysis, gully expansion rate analysis, and determination of the soil erodibility factor (K) from the Revised Universal Soil Loss Equation (RUSLE). Fifteen representative gullies were studied extensively, and the results reveal that the geotechnical properties of the soil, topography, vegetation cover, rainfall intensity, and anthropogenic activities in the study area were the major factors propagating and influencing the erodibility of the soils. The specific gravity of the soils ranged from 2.45-2.66 and 2.54-2.78 for Nnewi and Nnobi, respectively. Grain size distribution analysis revealed that the soils are composed of gravel (5.77-17.67%), sand (79.90-91.01%), and fines (2.36-4.05%) for Nnewi, and gravel (7.01-13.65%), sand (82.47-88.67%), and fines (3.78-5.02%) for Nnobi. The soils are moderately permeable, with values ranging from 2.92 × 10⁻⁵ - 6.80 × 10⁻⁴ m/s and 2.35 × 10⁻⁶ - 3.84 × 10⁻⁴ m/s for Nnewi and Nnobi, respectively. All have low cohesion values, ranging from 1-5 kPa and 2-5 kPa, and internal friction angles ranging from 29-38° and 30-34° for Nnewi and Nnobi, respectively, which suggests that the soils have low shear strength and are susceptible to shear failure. Furthermore, the compaction test revealed that the soils are loose and easily erodible, with maximum dry density (MDD) and optimum moisture content (OMC) values ranging from 1.82-2.11 g/cm³ and 8.20-17.81% for Nnewi and 1.98-2.13 g/cm³ and 6.00-17.80% for Nnobi, respectively. The plasticity index (PI) of the fines showed that they are non-plastic to low-plasticity soils and highly liquefiable, with values ranging from 0-10% and 0-9% for Nnewi and Nnobi, respectively.
Multivariate statistical analyses were used to establish relationships among the determined parameters. Slope stability analysis gave factor of safety (FoS) values in the range of 0.50-0.76 and 0.82-0.95 under saturated conditions and 0.73-0.98 and 0.87-1.04 under unsaturated conditions for Nnewi and Nnobi, respectively, indicating that the slopes are generally unstable to critically stable. The erosion expansion rate analysis for a fifteen-year period (2005-2020) revealed average longitudinal expansion rates of 36.05 m/yr, 10.76 m/yr, and 183 m/yr for the Nnewi, Nnobi, and Nanka-type gullies, respectively. The soil erodibility factor (K) values are 8.57 × 10⁻² and 1.62 × 10⁻⁴ for Nnewi and Nnobi, respectively, indicating that the soils in Nnewi have a higher erodibility potential than those of Nnobi. From the study, both the Nnewi and Nnobi areas are highly prone to erosion. However, given the relatively lower fine content of the soil, lower topography, steeper slope angles, and sparsely vegetated terrain in Nnewi, soil erodibility and gullying are more pronounced in Nnewi than in Nnobi.
Keywords: soil erodibility, gully expansion, Nnewi-Nnobi, slope stability, factor of safety
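Factor-of-safety values of this kind can be related to the measured shear-strength parameters through a limit-equilibrium model. A minimal sketch of the classical dry infinite-slope formula is given below; the input values are hypothetical numbers chosen within the ranges reported above, not the study's actual stability calculation:

```python
import math

def infinite_slope_fos(c_kpa, phi_deg, gamma_kn_m3, depth_m, beta_deg):
    """Factor of safety for a dry, planar infinite slope:
    FoS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta))
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma_kn_m3 * depth_m * math.cos(beta) ** 2   # normal stress, kPa
    resisting = c_kpa + normal * math.tan(phi)             # shear strength
    driving = gamma_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Hypothetical values in the range reported for the Nnewi soils:
# cohesion 3 kPa, friction angle 32 deg, unit weight 19 kN/m3,
# failure-plane depth 5 m, slope angle 40 deg.
fos = infinite_slope_fos(3.0, 32.0, 19.0, 5.0, 40.0)
print("FoS = %.2f" % fos)  # FoS < 1 indicates an unstable slope
```

With low cohesion and steep slope angles the model yields FoS below unity, consistent with the "unstable to critically stable" classification in the text.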
Procedia PDF Downloads 130
550 Application of Nanoparticles on Surface of Commercial Carbon-Based Adsorbent for Removal of Contaminants from Water
Authors: Ahmad Kayvani Fard, Gordon Mckay, Muataz Hussien
Abstract:
Adsorption is believed to be one of the optimal processes for the removal of heavy metals from water due to its low operational and capital cost as well as its high removal efficiency. Different materials have been reported in the literature as adsorbents for heavy metal removal from wastewater, such as natural sorbents, synthetic organic polymers, and inorganic mineral materials. The selection of adsorbents and the development of new functional materials that achieve good removal of heavy metals from water is an important practice and depends on many factors, such as the availability, cost, and safety of the material. In this study, we report the synthesis of activated carbon (AC) and carbon nanotubes (CNTs) doped with different loadings of metal oxide nanoparticles such as Fe₂O₃, Fe₃O₄, Al₂O₃, TiO₂, SiO₂, and Ag nanoparticles, and their application in the removal of heavy metals, hydrocarbons, and organics from wastewater. Commercial AC and CNTs with different loadings of the mentioned nanoparticles were prepared; the effects of pH, adsorbent dosage, sorption kinetics, and concentration were studied, and the optimum conditions for the removal of heavy metals from water are reported. The prepared composite sorbents were characterized using field emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM), thermogravimetric analysis (TGA), X-ray diffraction (XRD), the Brunauer-Emmett-Teller (BET) nitrogen adsorption technique, and zeta potential measurements. The composite materials showed higher removal efficiency and superior adsorption capacity compared to commercially available carbon-based adsorbents. The specific surface area of the AC increased by 50%, reaching up to 2000 m²/g, while the specific surface area of the CNTs increased more than eightfold, reaching 890 m²/g.
The increased surface area is, along with the surface charge of the material, one of the key parameters determining the removal efficiency. Moreover, the surface charge density of the impregnated CNTs and AC was enhanced significantly, which benefits the adsorption process. The nanoparticles also enhance the catalytic activity of the material and reduce its agglomeration and aggregation, providing more active sites for adsorbing contaminants from water. Results for treating wastewater include 100% removal of BTEX, arsenic, strontium, barium, phenolic compounds, and oil from water. The results obtained are promising for the use of AC and CNTs loaded with metal oxide nanoparticles in the treatment and pretreatment of wastewater and produced water before the desalination process. Adsorption can be very efficient, with low energy consumption and economic feasibility.
Keywords: carbon nanotube, activated carbon, adsorption, heavy metal, water treatment
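Adsorption capacities such as those discussed above are commonly extracted by fitting equilibrium data to an isotherm model. A minimal sketch of a linearised Langmuir fit, Ce/qe = Ce/qmax + 1/(KL·qmax), follows; the equilibrium data are made up for illustration and are not measurements from this study:

```python
# Linearised Langmuir isotherm: plotting Ce/qe against Ce gives a
# straight line with slope 1/qmax and intercept 1/(KL*qmax).

def linfit(xs, ys):
    """Ordinary least-squares straight-line fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

q_max_true, k_l_true = 100.0, 0.5    # mg/g and L/mg, hypothetical values
ce = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]   # equilibrium concentrations, mg/L
qe = [q_max_true * k_l_true * c / (1 + k_l_true * c) for c in ce]

slope, intercept = linfit(ce, [c / q for c, q in zip(ce, qe)])
q_max = 1.0 / slope
k_l = 1.0 / (intercept * q_max)
print("qmax = %.1f mg/g, KL = %.2f L/mg" % (q_max, k_l))
```

Because the synthetic data follow the Langmuir form exactly, the fit recovers qmax and KL; with real isotherm data the same fit yields the reported adsorption capacity.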
Procedia PDF Downloads 234
549 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle
Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha
Abstract:
An open-source-based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring, and thermal imaging. A double rotomoulded hull boat is deployed, which is rugged, tough, quick to deploy, and fast-moving. It is suitable for environmental monitoring and is designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor is capable of up to 7 km/h. The motor is integrated with an open-source controller based on a Cortex-M4F for adjusting the direction of the motor. The UMSV can be operated in three modes: semi-autonomous, manual, and fully automated. One channel of a 2.4 GHz, 8-channel radio-link transmitter is used for toggling between the different modes of the UMSV. An on-board GPS system has been fitted to the electric outboard marine motor for range finding and GPS positioning. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux-based open-source processor, facilitating real-time capture of thermal images, which are stored on a micro SD card that serves as the system's data storage. The thermal camera is interfaced to the processor through the SPI protocol. These thermal images are used for finding oil spills and for locating people who are drowning in low visibility during the night. A real-time clock (RTC) module is attached to the battery to timestamp the captured thermal images with date and time. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, by which, at a higher power output, a range of 40 miles has been achieved.
A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH, temperature, water level, and absolute pressure. It can withstand a maximum pressure of 160 psi, corresponding to depths of up to 100 m. This work represents a field demonstration of an open-source-based autonomous navigation system for a marine surface vehicle.
Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe
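For the semi-autonomous and fully automated modes, waypoint navigation with an on-board GPS typically reduces to computing the range and bearing from the current fix to the next waypoint. A minimal sketch using the haversine distance and initial bearing is given below; the coordinates are hypothetical and the vehicle's actual guidance code is not described in the abstract:

```python
import math

EARTH_RADIUS_M = 6371000.0

def range_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg, clockwise
    from true north) from the current GPS fix to a target waypoint."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the distance.
    a = math.sin(dlat / 2) ** 2 + \
        math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing from north.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

# Hypothetical fix and waypoint a few hundred metres apart off the coast.
d, b = range_and_bearing(12.9141, 74.8560, 12.9200, 74.8600)
print("range %.0f m, bearing %.1f deg" % (d, b))
```

The guidance loop would then steer the outboard motor to null the difference between the vehicle's heading and this bearing until the range falls below an arrival threshold.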
Procedia PDF Downloads 242
548 Real-Time Quantitative Polymerase Chain Reaction Assay for the Detection of microRNAs Using Bi-Directional Extension Sequences
Authors: Kyung Jin Kim, Jiwon Kwak, Jae-Hoon Lee, Soo Suk Lee
Abstract:
MicroRNAs (miRNAs) are a class of endogenous, single-stranded, small, non-protein-coding RNA molecules, typically 20-25 nucleotides long. They are thought to broadly regulate the expression of other genes by binding to the 3'-untranslated regions (3'-UTRs) of specific mRNAs. The detection of miRNAs is very important for understanding the function of these molecules and in the diagnosis of a variety of human diseases. However, detection of miRNAs is very challenging because of their short length and the high sequence similarity within miRNA families. A simple-to-use, low-cost, and highly sensitive method for the detection of miRNAs is therefore desirable. In this study, we demonstrate a novel bi-directional extension (BDE) assay. In the first step, a specific linear RT primer is hybridized to 6-10 base pairs at the 3'-end of the target miRNA molecule and then reverse transcribed to generate a cDNA strand. After reverse transcription, the cDNA was hybridized at its 3'-end to the BDE sequence, which served as the PCR template. The template was amplified in a SYBR Green-based quantitative real-time PCR. To prove the concept, we used human brain total RNA; it could be detected quantitatively over a range of seven orders of magnitude with excellent linearity and reproducibility. To evaluate the performance of the BDE assay, we compared its sensitivity and specificity against a commercially available poly(A) tailing method, using let-7e miRNA extracted from A549 human epithelial lung cancer cells. The BDE assay performed well compared with the poly(A) tailing method in terms of specificity and sensitivity: the CT values differed by 2.5 cycles, and the melting curve showed a sharper peak than that of the poly(A) tailing method. We have demonstrated an innovative, cost-effective BDE assay that allows improved sensitivity and specificity in the detection of miRNAs.
The dynamic range of the SYBR Green-based RT-qPCR for miR-145 spanned seven orders of magnitude, from 0.1 pg to 1.0 μg of human brain total RNA. Finally, the BDE assay for the detection of miRNA species such as let-7e shows good performance compared with the poly(A) tailing method in terms of specificity and sensitivity. BDE thus provides a simple, low-cost, and highly sensitive assay for various miRNAs and should contribute significantly to research on miRNA biology and to disease diagnostics with miRNAs as targets.
Keywords: bi-directional extension (BDE), microRNA (miRNA), poly(A) tailing assay, reverse transcription, RT-qPCR
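A seven-order-of-magnitude dynamic range corresponds to a linear standard curve of CT against log10 of the input amount, from which amplification efficiency is estimated. A minimal sketch of that calculation follows, using idealised synthetic CT values rather than the study's data:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares straight-line fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical decade dilution series from 0.1 pg to 1.0 ug of total RNA
# (eight points spanning seven orders of magnitude), with idealised CT
# values for a perfectly efficient reaction (slope = -3.32).
log10_input_pg = [math.log10(0.1 * 10 ** i) for i in range(8)]
ct = [35.0 - 3.32 * (x - log10_input_pg[0]) for x in log10_input_pg]

slope, intercept = linfit(log10_input_pg, ct)
efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100 % efficiency
print("slope %.2f, efficiency %.0f%%" % (slope, efficiency * 100))
```

A slope near -3.32 (efficiency near 100%) over the whole dilution range is what "excellent linearity" over seven orders of magnitude implies in practice.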
Procedia PDF Downloads 166
547 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures
Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny
Abstract:
Extracting the energy content of lignocellulosic biomass is one possible pathway to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult; this problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil, and charcoal, and the energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of raw biomass materials can be reduced by torrefaction, a promising mild thermal pretreatment performed at temperatures between 200 and 300 °C in an inert atmosphere. From a chemical point of view, the goal of the pretreatment is the removal of water and the acidic groups of hemicelluloses, or of the whole hemicellulose fraction, with minor degradation of cellulose and lignin in the biomass. Thus, the stability of the biomass against biodegradation increases, and its energy density rises. The volume of the raw material decreases, so transportation and storage costs are reduced as well. Biooil is the major product of pyrolysis and an important by-product of torrefaction of biomass. The composition of biooil depends mostly on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody sample (black locust) and two herbaceous samples (rape straw and wheat straw).
The biooil contains C5 and C6 anhydrosugar molecules derived from hemicellulose and cellulose, as well as aromatic compounds originating from lignin. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction differs between wood and herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of the decomposition products, as well as the thermal stability of the samples, was measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of the woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of the lignin products of the biooil and the applied temperatures.
Keywords: pyrolysis, torrefaction, biooil, lignin
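The energy densification that torrefaction provides is usually quantified through the mass and energy yields of the solid product. A minimal sketch of that standard bookkeeping follows; the heating values and mass yield are hypothetical, not measurements from this study:

```python
def energy_yield(mass_yield, hhv_raw, hhv_torrefied):
    """Energy yield = mass yield x (HHV_torrefied / HHV_raw),
    where HHV is the higher heating value of the solid."""
    return mass_yield * hhv_torrefied / hhv_raw

# Hypothetical woody biomass torrefied at 275 C: 70 % of the mass is
# retained while the HHV rises from 18 to 21 MJ/kg.
m_yield = 0.70
e_yield = energy_yield(m_yield, 18.0, 21.0)
densification = e_yield / m_yield   # equals the HHV ratio
print("energy yield %.1f%%, densification ratio %.2f"
      % (e_yield * 100, densification))
```

An energy yield above the mass yield (densification ratio above 1) is the quantitative expression of the increased energy density mentioned in the text.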
Procedia PDF Downloads 331
546 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets
Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar
Abstract:
Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem, with its heterogeneous heat and mass transfer, has prevented thorough understanding. On record, severe incidents have occurred, resulting in irreplaceable losses of human life, instruments, and facilities, despite the huge sums invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. Better understanding of this coupled phenomenon will lead to higher safety standards, more efficient missions, and reduced hazard risks through better design, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets and, compounded by poor combustion efficiency, it motivates research efforts to evolve superior rockets. The problem has real engineering, scientific, and practical significance, with one potential application being solid rocket motors (SRMs). The study may help in: (i) understanding the effect on the efficiency of core engines if the primary boosters are considered as a source; (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission; (iii) assessing how preheating of a successive stage by the previous stage acting as a source may affect the mission. The present work governs the resultant temperature, and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer.
The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source(s) and sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results to date indicate that a singularity exists in the coupled effect: the relative dominance of the external heat sink and heat source decides the relative rocket flight in solid rocket motors (SRMs).
Keywords: coupled effect, heat transfer, sink, solid rocket motors, source
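The baseline burning behaviour against which such source/sink perturbations are judged is commonly described by the empirical Saint Robert (Vieille) law, r = a·pⁿ. A minimal sketch with hypothetical coefficients is shown below; the study's propellant data are not given in the abstract:

```python
def burn_rate(pressure_mpa, a=5.0, n=0.35):
    """Saint Robert's law: r = a * p**n, with r in mm/s and p in MPa.
    a and n are empirical, propellant-specific constants; the values
    here are hypothetical, typical of composite propellants."""
    return a * pressure_mpa ** n

# Burn rate rises sub-linearly with chamber pressure (n < 1).
for p in (1.0, 4.0, 7.0):
    print("p = %.0f MPa -> r = %.2f mm/s" % (p, burn_rate(p)))
```

An external heat source or sink shifts the propellant surface temperature and hence effectively perturbs the coefficient a, which is one way the coupled effect studied here could be folded into motor performance models.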
Procedia PDF Downloads 223
545 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications
Authors: S. Trafela, X. Xua, K. Zuzek Rozmana
Abstract:
In this study, we used an accurate and precise method to measure the electrochemically active surface areas (Aecsa) of nickel electrodes. The calculated Aecsa is important for evaluating an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry with polycrystalline nickel as a reference material and with electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of the Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, in the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can easily be integrated. The values of these integrals were used to correlate the experimentally measured charge density with an electrochemically active surface layer. The Aecsa values of the nickel nanowires and the homogeneous and heterogeneous nickel films were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm², Aecsa-homNi = 1.7175 ± 0.0503 cm², and Aecsa-hetNi = 2.1862 ± 0.0154 cm². These results were then used in electrochemical studies of formaldehyde oxidation, in which the nickel nanowires and the heterogeneous and homogeneous nickel films served as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ solution of KOH in order to induce electrochemical activity towards formaldehyde.
The electrochemical behavior of formaldehyde oxidation in 0.1 mol L⁻¹ NaOH solution at the surface of the modified nickel nanowires and the homogeneous and heterogeneous nickel films was investigated by electrochemical techniques such as cyclic voltammetry and chronoamperometry. From the effect of different formaldehyde concentrations (0.001 to 0.1 mol L⁻¹) on the current response, we derived the catalytic mechanism of formaldehyde oxidation and determined the detection limit and sensitivity of the nickel electrodes. The results indicated that the nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly in the form of CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking into account the determined Aecsa values, we calculated the sensitivities: 7 mA mol L⁻¹ cm⁻² for the nickel nanowires, 3.5 mA mol L⁻¹ cm⁻² for the heterogeneous nickel film, and 2 mA mol L⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous Ni film, and 0.8 mM for the homogeneous Ni film. All of these results make nickel electrodes suitable for further applications.
Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation
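The Aecsa determination described above amounts to integrating the NiOOH/Ni(OH)₂ reduction peak to obtain a charge and dividing by a reference monolayer charge density. A minimal sketch of that bookkeeping follows; the synthetic current trace and the reference value of 257 µC cm⁻² (a figure commonly quoted for nickel surface oxide monolayers) are assumptions, not taken from this study:

```python
def peak_charge_c(times_s, currents_a):
    """Integrate I dt over the reduction peak by the trapezoidal rule."""
    q = 0.0
    for k in range(1, len(times_s)):
        dt = times_s[k] - times_s[k - 1]
        q += 0.5 * (currents_a[k] + currents_a[k - 1]) * dt
    return q

# Synthetic triangular reduction peak: 0 -> 1 mA -> 0 over 2 s
# (in practice this trace comes from the baseline-corrected CV).
t = [0.0, 0.5, 1.0, 1.5, 2.0]
cur = [0.0, 0.5e-3, 1.0e-3, 0.5e-3, 0.0]
q = peak_charge_c(t, cur)       # charge in coulombs

Q_REF = 257e-6                  # C cm^-2, assumed monolayer charge density
a_ecsa = q / Q_REF
print("Q = %.2e C, Aecsa = %.2f cm^2" % (q, a_ecsa))
```

Dividing measured currents by Aecsa obtained this way is what allows sensitivities of different electrode morphologies (nanowires versus films) to be compared on a per-area basis.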
Procedia PDF Downloads 162
544 The Relationship between Violence against Women and Levels of Self-Esteem in Urban Informal Settlements of Mumbai, India: A Cross-Sectional Study
Authors: A. Bentley, A. Prost, N. Daruwalla, D. Osrin
Abstract:
Background: This study aims to investigate the relationship between experiences of violence against women in the family and levels of self-esteem among women residing in informal settlement (slum) areas of Mumbai, India. The authors hypothesise that violence against women in Indian households extends beyond intimate partner violence (IPV) to include other members of the family, and that experiences of violence are associated with lower levels of self-esteem. Methods: Experiences of violence were assessed through a cross-sectional survey of 598 women, including questions about specific acts of emotional, economic, physical, and sexual violence across different time points, and the main perpetrator of each. Self-esteem was assessed using the Rosenberg self-esteem questionnaire. A global self-esteem score was calculated, and the relationship between violence in the past year and Rosenberg self-esteem score was assessed using multivariable linear regression models adjusted for years of education completed, with clustering accounted for using robust standard errors. Results: 482 (81%) women consented to interview. On average, they were 28.5 years old, had completed 6 years of education, and had been married 9.5 years. 88% were Muslim and 46% lived in joint families. 44% of women had experienced at least one act of violence in their lifetime (33% emotional, 22% economic, 24% physical, 12% sexual). Of the women who experienced violence after marriage, 70% cited a perpetrator other than the husband for at least one of the acts. 5% had low self-esteem (Rosenberg score < 15). For women who experienced emotional violence in the past year, the Rosenberg score was 2.6 points lower (p < 0.001); it was 1.2 points lower (p = 0.03) for women who experienced economic violence. For physical or sexual violence in the past year, no statistically significant relationship with Rosenberg score was seen.
However, for a one-unit increase in the number of different acts of each type of violence experienced in the past year, a decrease in Rosenberg score was seen (-0.62 for emotional, -0.76 for economic, -0.53 for physical, and -0.47 for sexual; p < 0.05 for all). Discussion: The high lifetime prevalence of violence was likely due to the detailed assessment of violence and the inclusion of perpetrators within the family other than the husband. Experiences of emotional or economic violence in the past year were associated with lower Rosenberg scores and therefore lower self-esteem, but no relationship was seen between experiences of physical or sexual violence and Rosenberg score overall. For all types of violence in the past year, a greater number of different acts was associated with a decrease in Rosenberg score. Emotional violence showed the strongest relationship with self-esteem, but for all types of violence, the more complex the pattern of perpetration, with different methods used, the lower the level of self-esteem. Due to the cross-sectional nature of the study, causal directionality cannot be attributed. Further work investigating the relationship between severity of violence and self-esteem, and whether self-esteem mediates the relationship between violence and poorer mental health, would be beneficial.
Keywords: family violence, India, informal settlements, Rosenberg self-esteem scale, self-esteem, violence against women
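The global Rosenberg score used above is obtained by reverse-scoring the negatively worded items and summing. A minimal scoring sketch follows, using the common 0-3 rating per item (total 0-30, consistent with the study's < 15 cut-off for low self-esteem); the reverse-scored item numbers follow the standard published scale, and the example responses are hypothetical:

```python
# Items 2, 5, 6, 8 and 9 of the 10-item Rosenberg scale are negatively
# worded and are reverse-scored; each item is rated 0-3, so the total
# ranges from 0 to 30.
REVERSE_ITEMS = {2, 5, 6, 8, 9}   # 1-indexed item numbers

def rosenberg_score(responses):
    """responses: list of ten item ratings, each in 0-3."""
    assert len(responses) == 10 and all(0 <= r <= 3 for r in responses)
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (3 - r) if item in REVERSE_ITEMS else r
    return total

# Hypothetical respondent's ratings for items 1-10.
resp = [2, 1, 2, 3, 1, 2, 2, 1, 0, 2]
score = rosenberg_score(resp)
print("score =", score, "| low self-esteem" if score < 15 else "| not low")
```

Treating the summed score as a continuous outcome is what allows effects such as "2.6 points lower" to be estimated in the linear regression models described above.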
Procedia PDF Downloads 126