Search results for: optimal transverse shape
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5420

890 The Study of Periodontal Health Status in Menopausal Women with Osteoporosis Referred to Rheumatology Clinics in Yazd and Healthy People

Authors: Mahboobe Daneshvar

Abstract:

Introduction: Clinical studies on the effect of systemic conditions on periodontal diseases have shown that some systemic deficiencies may set the stage for the onset of periodontal diseases. One of these systemic problems is osteoporosis, which may be a risk factor for the onset and exacerbation of periodontitis. This study aims to evaluate periodontal indices in osteoporotic menopausal women and compare them with healthy controls. Materials and Methods: In this case-control study, participants included 45-75-year-old menopausal women referred to the rheumatology wards of the Khatamolanbia Clinic and Shahid Sadoughi Hospital in Yazd. Their bone density was determined by DEXA scan, imaging the femoral and lumbar bones. Thirty patients with osteoporosis and 30 subjects with normal BMD were selected, and informed consent was obtained for participation in the study. During the clinical examinations, tooth loss (TL), plaque index (PI), gingival recession, pocket probing depth (PPD), clinical attachment loss (CAL), and tooth mobility (TM) were measured to evaluate the periodontal status. These examinations were performed with a catheter, mirror, and probe. Results: There was no significant difference in PPD, PI, TM, gingival recession, or CAL between the case and control groups (P-value > 0.05); that is, osteoporosis had no effect on these factors, which were almost the same in both the healthy and patient groups. For missing teeth, the mean was 22.173% of the total teeth in the case group and 18.583% in the control group, a significant difference between the two groups (P-value = 0.025).
Conclusion: Since periodontal disease is multifactorial and microbial plaque is its main cause, osteoporosis is considered a predisposing factor in the exacerbation or persistence of periodontal disease. In patients with osteoporosis, pathological fractures, hormonal changes, and aging typically lead to reduced physical activity and affect oral health, which contributes to the manifestation of periodontal disease. The disease also increases tooth loss by changing the shape and structure of bone trabeculae and weakening them. Osteoporosis does not appear to be a determining factor in the incidence of periodontal disease, since it affects bone quality rather than bone quantity.

Keywords: plaque index, osteoporosis, tooth mobility, periodontal pocket

Procedia PDF Downloads 67
889 Evaluation of Stress Relief Using Ultrasonic Peening in GTAW Welding and Stress Corrosion Cracking (SCC) in Stainless Steel, and Comparison with the Thermal Method

Authors: Hamidreza Mansouri

Abstract:

In the construction industry, the lifespan of a metal structure is directly related to the quality of its welding. In most metal structures, the welded area is considered critical and is one of the most important factors in design. To date, many fracture incidents have been caused by cracks in this region. Various methods exist to increase the lifespan of welds and prevent failure in the welded area. Among these, ultrasonic peening, in addition to relieving stress, can manually and more precisely adjust the geometry of the weld toe and prevent stress concentration in this part. This research examined Gas Tungsten Arc Welding (GTAW) on common structural steels and 316 stainless steel, which require precise welding, to predict the optimal condition. The GTAW process was used to create residual stress; two samples underwent ultrasonic stress relief, two underwent thermal stress relief for comparison, and two were left untreated. The residual stress of all six pieces was measured by the X-Ray Diffraction (XRD) method. The two ultrasonically stress-relieved samples and the two untreated samples were then exposed to a corrosive environment to initiate cracking and determine the effectiveness of the ultrasonic stress relief method. The residual stress caused by GTAW in the samples decreased by 3.42% with thermal treatment and by 7.69% with ultrasonic peening. Furthermore, the results show that the untreated sample developed cracks after 740 hours, while the ultrasonically stress-relieved piece showed no cracks. Given the high costs of welding and post-weld modification processes, finding an economical, effective, and comprehensive method with the fewest limitations and a broad spectrum of use is of great importance.
Therefore, studying the impact of the various ultrasonic peening stress relief parameters and selecting the best parameter to achieve the longest lifespan for the weld area is highly significant.

Keywords: GTAW welding, stress corrosion cracking (SCC), thermal method, ultrasonic peening

Procedia PDF Downloads 45
888 The Use of Coronary Calcium Scanning for Cholesterol Assessment and Management

Authors: Eva Kirzner

Abstract:

Based on outcome studies published over the past two decades, in 2018 the ACC/AHA published new guidelines for the management of hypercholesterolemia that incorporate coronary artery calcium (CAC) scanning as a decision tool for ascertaining which patients may benefit from statin therapy. This use is based on the recognition that the absence of calcium on CAC scanning (i.e., a CAC score of zero) usually signifies the absence of significant atherosclerotic deposits in the coronary arteries. Specifically, in patients at high risk for atherosclerotic cardiovascular disease (ASCVD), initiation of statin therapy is generally recommended to decrease ASCVD risk; among patients at intermediate ASCVD risk, the need for statin therapy is less certain. New outcome studies are therefore needed to provide evidence that managing hypercholesterolemia based on these new ACC/AHA recommendations is safe for patients. A PubMed and Google Scholar literature search identified four relevant population-based or patient-based cohort studies, published between 2017 and 2021, that examined the relationship between CAC scanning, risk assessment or mortality, and statin therapy (see references). In each of these studies, patients were assessed for their baseline ASCVD risk using the Pooled Cohort Equations (PCE), an ACC/AHA calculator that determines patient risk from age, gender, ethnicity, and coronary artery disease risk factors. The combined findings of these four studies provided concordant evidence that a zero CAC score defines patients who remain at low clinical risk despite the non-use of statin therapy. Thus, these new studies confirm CAC scanning as a safe tool for reducing the potential overuse of statin therapy among patients with zero CAC scores.
Incorporating these new data suggests the following best practice: (1) ascertain ASCVD risk according to the PCE in all patients; (2) following an initial trial of lowering ASCVD risk with an optimal diet in patients with elevated risk, initiate statin therapy for patients who have a high ASCVD risk score; (3) if the ASCVD score is intermediate, refer patients for CAC scanning; and (4) if the CAC score is zero among intermediate-risk patients, statin therapy can be safely withheld despite the presence of an elevated serum cholesterol level.
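The four-step practice above can be sketched as decision logic. A minimal illustration in Python follows; the 7.5% and 20% risk cut-points are assumptions based on common ACC/AHA risk categories, not figures stated in this abstract:

```python
def statin_recommendation(ascvd_risk, cac_score=None):
    """Decision sketch of the four-step best practice.

    The 7.5% and 20% cut-points are assumptions taken from common
    ACC/AHA risk categories; they are not stated in this abstract.
    ascvd_risk is the 10-year PCE risk as a fraction (e.g. 0.10 = 10%).
    """
    if ascvd_risk >= 0.20:               # high risk: start statin therapy
        return "initiate statin"
    if ascvd_risk < 0.075:               # low risk: statin not indicated
        return "no statin"
    if cac_score is None:                # intermediate risk: get a CAC scan
        return "refer for CAC scan"
    if cac_score == 0:                   # zero CAC: statin safely withheld
        return "withhold statin"
    return "consider statin"             # nonzero CAC at intermediate risk
```

For example, an intermediate-risk patient (10-year risk 10%) with a zero CAC score would have statin therapy withheld under step (4).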

Keywords: cholesterol, cardiovascular disease, statin therapy, coronary calcium

Procedia PDF Downloads 107
887 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, these have not yet produced an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for each metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology, defined as an optimization problem for MPBEBM. An off-policy MORL framework based on policy gradient is proposed to discover optimal sets of beam power (P) - beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resultant objects and mapping of returns to the P-v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
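The policy-gradient search over the P-v space can be illustrated with a toy sketch. This is not the authors' framework: the melt pool function below is a hypothetical stand-in for the Eagar-Tsai simulation, a single scalar reward replaces the multi-objective return, and the update rule is a simple gradient-bandit form of policy gradient:

```python
import math
import random

# Hypothetical stand-in for the Eagar-Tsai melt pool model:
# depth grows with beam power P and shrinks with beam velocity v.
def melt_pool_depth(P, v):
    return P / v

TARGET_DEPTH = 3.0  # desired steady-state melt pool depth (arbitrary units)

# Discrete candidate (P, v) combinations the agent chooses between.
actions = [(P, v) for P in (100, 200, 300) for v in (50, 100, 150)]
prefs = [0.0] * len(actions)  # softmax preferences (policy parameters)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
baseline = 0.0
for t in range(1, 5001):
    probs = softmax(prefs)
    i = random.choices(range(len(actions)), weights=probs)[0]
    P, v = actions[i]
    # Reward: negative deviation from the target melt pool depth.
    reward = -abs(melt_pool_depth(P, v) - TARGET_DEPTH)
    baseline += (reward - baseline) / t  # running-average baseline
    # Gradient-bandit (policy-gradient) update of the preferences.
    for a in range(len(actions)):
        indicator = 1.0 if a == i else 0.0
        prefs[a] += 0.1 * (reward - baseline) * (indicator - probs[a])

best = actions[prefs.index(max(prefs))]  # highest-return (P, v) state
```

After training, the preference vector concentrates on the (P, v) pair whose simulated melt pool depth best matches the target, mirroring the "highest returns" mapping described above.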

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 87
886 Commercial Winding for Superconducting Cables and Magnets

Authors: Glenn Auld Knierim

Abstract:

Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today’s HTS materials are mature and commercially promising but require manufacturing attention. Particularly with the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to address the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations. HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which achieves the minimized reactance that power utilities require. A full-transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.

Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable

Procedia PDF Downloads 137
885 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms, including linear and mixed-integer linear programming, to calculate the optimal operation schedule of decentralized power and heat generators and storage systems. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first combines n similar installations into one aggregated unit, described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step, and thus the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization such that the output of the individual plants matches the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants serve as references for investigating both procedures with regard to calculation duration and optimality.
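The second-stage disaggregation step described above can be sketched for n identical plants. This is a minimal illustration only; the function name and the even-share rule are assumptions, not the paper's algorithm, which solves a second optimization:

```python
import math

def disaggregate(total_output, n, p_min, p_max):
    """Split an aggregated unit's dispatch across n identical plants.

    Sketch: run the fewest plants k that can carry the load and share it
    evenly, so each running plant stays within [p_min, p_max]. The paper's
    second optimization is more general than this greedy rule.
    """
    if total_output == 0:
        return [0.0] * n
    k = math.ceil(total_output / p_max)      # fewest plants needed
    if k > n or total_output < k * p_min:
        raise ValueError("infeasible for identical on/off plants")
    share = total_output / k                 # guaranteed within [p_min, p_max]
    return [share] * k + [0.0] * (n - k)
```

For example, an aggregate dispatch of 25 MW over three plants with bounds [10, 20] MW runs two plants at 12.5 MW each, recovering per-plant schedules from the n-fold smaller aggregated problem.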

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 172
884 The Impacts of Hydraulic Conditions on the Fate, Transport and Accumulation of Microplastics Pollution in Aquatic Ecosystems

Authors: Majid Rasta, Xiaotao Shi, Mian Adnan Kakakhel, Yanqin Bai, Lao Liu, Jia Manke

Abstract:

Microplastics (MPs; particles <5 mm) pollution is considered a globally pervasive threat to aquatic ecosystems, and many studies have reported this pollution in rivers, wetlands, lakes, coastal waters, and oceans. In aquatic environments, the settling and transport of MPs in the water column and sediments are determined by factors such as hydrologic characteristics, watershed pattern, rainfall events, hydraulic conditions, vegetation, the hydrodynamic behavior of MPs, and the physical features of the particles (shape, size, and density). Among these, hydraulic conditions (such as turbulence, high or low flow velocity, or water stagnation) play a key role in the fate of MPs in aquatic ecosystems. Therefore, this study presents a brief review of the effects of different hydraulic conditions on the fate, transport, and accumulation of MPs in aquatic ecosystems. Generally, MPs are distributed both horizontally and vertically in aquatic environments. The vertical distribution of MPs in the water column changes with flow velocity. In rivers, turbulent flow resulting from rapid water velocity and shallow depth may create a homogeneous mixture of MPs throughout the water column, while low-velocity, low-turbulence waters can lead to little vertical mixing of MP particles. Consequently, high numbers of MPs are expected in the sediments of deep and wide channels as well as estuaries. In contrast, the lowest accumulation of MP particles is understandably found in the sediments of straight river reaches, where flow velocity is highest. In the marine environment, hydrodynamic factors (e.g., turbulence, current velocity, and residual circulation) can affect the sedimentation and transport of MPs and thus change their distribution in marine and coastal sediments. For instance, marine bays are known as accumulation areas for MPs due to poor hydrodynamic conditions.
On the other hand, in the nearshore zone, flow conditions are highly complex and dynamic. Experimental studies have illustrated that the maximum horizontal flow velocity on a sandy beach can predict the accumulation of MPs, such that particles with high sinking velocities deposit at lower water depths. As a whole, it can be concluded that the transport and accumulation of MPs in aquatic ecosystems are highly affected by hydraulic conditions. This study provides information about the impacts of hydraulics on MPs pollution. Further research on hydraulics and its relationship to the accumulation of MPs in aquatic ecosystems is needed to deepen insight into this pollution.
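The role of sinking velocity mentioned above can be illustrated with Stokes' law for small particles at low Reynolds number. The review does not specify a settling model, so this is purely illustrative, with assumed densities for common polymers:

```python
def stokes_settling_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in still water.

    Stokes' law, valid for particle Reynolds numbers below about 1.
    d: diameter (m); rho_p, rho_f: particle and fluid density (kg/m3);
    mu: dynamic viscosity (Pa*s). A negative result means the particle
    is less dense than the water and rises instead of settling.
    """
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

# A 100 um PET fragment (density ~1380 kg/m3) sinks at roughly 2 mm/s,
# while a polyethylene particle (~920 kg/m3) is buoyant and rises.
w_pet = stokes_settling_velocity(100e-6, 1380.0)
w_pe = stokes_settling_velocity(100e-6, 920.0)
```

The density contrast term (rho_p - rho_f) makes the polymer type as decisive as particle size, consistent with the review's emphasis on particle density as a physical driver of MP fate.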

Keywords: microplastics pollution, hydraulic, transport, accumulation

Procedia PDF Downloads 67
883 Co-Development of an Assisted Manual Harvesting Tool for Peach Palm That Avoids the Harvest in Heights

Authors: Mauricio Quintero Angel, Alexander Pereira, Selene Alarcón

Abstract:

One of the most important elements of agricultural production is harvesting, an activity associated with occupational health risks such as working at heights, transporting heavy materials, and applying excessive muscle strain that leads to musculoskeletal disorders. There is therefore an urgent need to develop and validate interventions that reduce harvesters' exposure and risk. This article describes the co-development, under an ergonomic analysis framework, of an assisted manual harvesting tool for peach palm aimed at reducing the risk of death and accidents, as it avoids harvesting at heights. The peach palm is a palm tree cultivated in Colombia, Peru, Brazil, and Costa Rica, among other countries, that reaches heights of over 20 m, with stipes covered in spines. The fruits are drupes of variable size. For harvesting peach palm in Colombia, farmers use the “Marota” or “Climber”, a closed-X-shaped wooden tool with two supports that are adjusted against the stipe and raised alternately until reaching a point high enough to grab the bunch, which is then brought down using a rope. This is a high-risk activity, since it is performed at height without any protection or safety measures. The Marota is alternated with a rod, which has a variable height between 5 and 12 meters and a harness system at one end to hold the bunch, which is lowered with the whole system (bamboo and bunch). The rod is used from the ground or from the Marota at height. As an alternative to these traditional tools, the Bajachonta was co-developed with farmers: a tool that employs a modified traditional bamboo hook system that can be held by a rope passing through a pulley.
Once the bunch is hitched, the hook system is detached and stays attached to the peduncle of the palm tree; then, by pulling on the rope to tension it, the bunch comes loose and is lowered to the ground using the rope and pulley system, reducing the risk and effort of the operation. The Bajachonta was evaluated in three productive zones of Colombia with innovative farmers, where adoption is highly probable, with some modifications to improve its efficiency and effectiveness, bearing in mind that farmers perceive in it the advantage of reducing deaths and accidents by not having to harvest at heights.

Keywords: assisted harvesting, ergonomics, harvesting at heights, participatory design, peach palm

Procedia PDF Downloads 402
882 New Roles of Telomerase and Telomere-Associated Proteins in the Regulation of Telomere Length

Authors: Qin Yang, Fan Zhang, Juan Du, Chongkui Sun, Krishna Kota, Yun-Ling Zheng

Abstract:

Telomeres are specialized structures at chromosome ends consisting of tandem repetitive DNA sequences [(TTAGGG)n in humans] and associated proteins, which are necessary for telomere function. Telomere lengths are tightly regulated within a narrow range in normal human somatic cells, which underlies cellular senescence and aging. Previous studies have focused extensively on how short telomeres are extended and have demonstrated that telomerase plays a central role in telomere maintenance by elongating short telomeres. However, the molecular mechanisms regulating excessively long telomeres are unknown. Here, we found that the telomerase enzymatic component hTERT plays a dual role in the regulation of telomere length. Analysis of single telomere alterations at each chromosomal end led to the discovery that hTERT shortens excessively long telomeres and elongates short telomeres simultaneously, thus maintaining the optimal telomere length at each chromosomal end for efficient protection. hTERT-mediated telomere shortening removes large segments of telomeric DNA rapidly without inducing telomere dysfunction foci or affecting cell proliferation; it is thus mechanistically distinct from rapid telomere deletion. We found that expression of hTERT generates telomeric circular DNA, suggesting that telomere homologous recombination may be involved in this shortening process. Moreover, hTERT-mediated telomere shortening requires hTERT's enzymatic activity, but the telomerase RNA component hTR is not involved. Furthermore, the shelterin protein TPP1 interacts with hTERT and recruits it to telomeres to mediate telomere shortening. In addition, the telomere-associated proteins DKC1 and TCAB1 also play roles in this process. This novel hTERT-mediated telomere shortening mechanism exists not only in cancer cells but also in primary human cells.
Thus, hTERT-mediated telomere shortening is expected to shift the paradigm of current molecular models of telomere length maintenance, with wide-reaching consequences in the cancer and aging fields.

Keywords: aging, hTERT, telomerase, telomeres, human cells

Procedia PDF Downloads 422
881 Personalized Infectious Disease Risk Prediction System: A Knowledge Model

Authors: Retno A. Vinarti, Lucy M. Hederman

Abstract:

This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location, and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research has focused more on utilizing raw historical data, yielding seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Diseases (AHID) and the Centers for Disease Control and Prevention (CDC) as the basis for reasoning about infectious disease risk prediction. Following the CommonKADS methodology, the disease risk prediction task is modelled as an assignment synthesis task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from the AHID, primarily from the epidemiology and risk-group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location, and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the infectious disease risk model relating to Person comes out strongest, with Location next and Weather weaker. Within the Person element, Age is the strongest attribute, Activity and Habits are moderate, and Blood type is weakest. Within Location, the General category (e.g., continents, region, country, and island) comes out much stronger than the Specific category (i.e., terrain feature). Within Weather, the Less Precise category (i.e., season) comes out stronger than the Precise category (i.e., exact temperature or humidity interval).
However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g., odds ratio, hazard ratio, and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause minor changes to the shape of the tree.
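Frequency-based Selection as described above can be sketched as follows; the miniature knowledge base, feature names, and 50% threshold are hypothetical, not taken from the AHID:

```python
from collections import Counter

# Hypothetical miniature knowledge base: features cited per infectious
# disease (names illustrative only, not taken from the AHID).
disease_features = {
    "malaria":  ["age", "location_general", "season", "activity"],
    "dengue":   ["age", "location_general", "season"],
    "cholera":  ["age", "location_general", "habits"],
    "measles":  ["age", "blood_type"],
}

def frequency_based_selection(kb, min_fraction=0.5):
    """Keep only features that appear in at least min_fraction of diseases."""
    counts = Counter(f for feats in kb.values() for f in set(feats))
    cutoff = min_fraction * len(kb)
    return {f for f, c in counts.items() if c >= cutoff}

selected = frequency_based_selection(disease_features)
```

In this toy knowledge base, common features such as age and general location survive the cut, while rarely cited ones such as blood type are dropped, mirroring the relative strengths reported above.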

Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk

Procedia PDF Downloads 238
880 Neural Correlates of Attention Bias to Threat during the Emotional Stroop Task in Schizophrenia

Authors: Camellia Al-Ibrahim, Jenny Yiend, Sukhwinder S. Shergill

Abstract:

Background: Attention bias to threat plays a role in the development, maintenance, and exacerbation of delusional beliefs in schizophrenia, in which patients emphasize the threatening characteristics of stimuli and prioritise them for processing. Cognitive control deficits arise when task-irrelevant emotional information elicits attentional bias and obstructs optimal performance. This study investigates the neural correlates of the interference effect of linguistic threat and whether these effects are independent of delusional severity. Methods: Using event-related functional magnetic resonance imaging (fMRI), the neural correlates of the interference effect of linguistic threat during the emotional Stroop task were investigated, comparing patients with schizophrenia with high (N=17) and low (N=16) paranoid symptoms and healthy controls (N=20). Participants were instructed to identify the font colour of each word presented on the screen as quickly and accurately as possible. Stimulus types varied among threat-relevant, positive, and neutral words. Results: Group differences in whole-brain effects indicate decreased amygdala activity in patients with high paranoid symptoms compared with low paranoid patients and healthy controls. Region-of-interest (ROI) analysis validated our results within the amygdala, and investigation of changes within the striatum showed a pattern of reduced activation in the clinical group compared to healthy controls. Delusional severity was associated with significantly decreased neural activity in the striatum within the clinical group. Conclusion: Our findings suggest that emotional interference mediated by the amygdala and striatum may reduce responsiveness to threat-related stimuli in schizophrenia, and that attenuation of the fMRI blood-oxygen-level-dependent (BOLD) signal within these areas might be influenced by the severity of delusional symptoms.

Keywords: attention bias, fMRI, schizophrenia, Stroop

Procedia PDF Downloads 196
878 Landfill Site Selection Using Multi-Criteria Decision Analysis: A Case Study for Gulshan-e-Iqbal Town, Karachi

Authors: Javeria Arain, Saad Malik

Abstract:

The management of solid waste is a crucial and essential aspect of urban environmental management, especially in a city with an ever-increasing population such as Karachi. The total amount of municipal solid waste generated from Gulshan-e-Iqbal Town averages 444.48 tons per day, and landfill sites are a widely accepted solution for the final disposal of this waste. However, an improperly selected site can have immense environmental, economic, and ecological impacts. To select an appropriate landfill site, a number of factors should be considered to minimize the potential hazards of solid waste. The purpose of this research is to analyse the study area for the construction of an appropriate landfill site for the disposal of municipal solid waste generated from Gulshan-e-Iqbal Town using geospatial techniques, considering hydrological, geological, social, and geomorphological factors. This was achieved using the analytical hierarchy process (AHP) and fuzzy analysis as decision support tools, integrated with geographic information sciences techniques. The eight parameters most relevant to the study area were selected. After generating a thematic layer for each parameter, overlay analysis was performed in ArcGIS 10.0. The results produced by the two methods were then compared: the final suitability map using AHP shows that 19% of the total area is Least Suitable, 6% is Suitable but Avoided, 46% is Moderately Suitable, 26% is Suitable, 2% is Most Suitable, and 1% is Restricted. In comparison, the output map of fuzzy set theory is not expressed in crisp logic; rather, it provides a suitability range of 0-1, where 0 indicates the least suitable and 1 the most suitable site. Considering the results, it is deduced that the northern part of the city is appropriate for constructing the landfill site, though a final decision on an optimal site could be made after a field survey and consideration of economic and political factors.
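The AHP weighting step that precedes the overlay analysis can be sketched with the geometric-mean approximation of the principal eigenvector. The pairwise comparison matrix below is hypothetical (three criteria instead of the study's eight, with assumed Saaty-scale judgments):

```python
import math

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale for three
# illustrative criteria: distance to water, distance to roads, slope.
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(matrix):
    """Approximate the principal eigenvector by the geometric-mean method:
    take the geometric mean of each row, then normalize to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(pairwise)  # criterion weights for the overlay step
```

The resulting weights (about 0.64, 0.26, and 0.10 here) would then multiply the corresponding thematic layers in the weighted overlay; `math.prod` requires Python 3.8 or later.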

Keywords: Analytical Hierarchy Process (AHP), fuzzy set theory, Geographic Information Sciences (GIS), Multi-Criteria Decision Analysis (MCDA)

Procedia PDF Downloads 502
878 iPSCs More Effectively Differentiate into Neurons on PLA Scaffolds with High Adhesive Properties for Primary Neuronal Cells

Authors: Azieva A. M., Yastremsky E. V., Kirillova D. A., Patsaev T. D., Sharikov R. V., Kamyshinsky R. A., Lukanina K. I., Sharikova N. A., Grigoriev T. E., Vasiliev A. L.

Abstract:

The adhesive properties of scaffolds, which depend predominantly on the chemical and structural features of their surface, play the most important role in tissue engineering. The basic requirements for such scaffolds are biocompatibility, biodegradability, and high cell adhesion, which promotes cell proliferation and differentiation. In many cases, synthetic polymer scaffolds have proven advantageous because they are easy to shape, tough, and have high tensile properties. The regeneration of nerve tissue still remains a big challenge for medicine, and neural stem cells offer promising therapeutic potential for cell replacement therapy. However, experiments with stem cells have their limitations, such as low cell viability and poor control of cell differentiation, whereas the study of already differentiated neuronal cell cultures obtained from newborn mouse brain is limited to cell adhesion. The growth and implantation of neuronal cultures require proper scaffolds; moreover, polymer scaffold implants carrying neuronal cells may demand specific morphology. To date, numerous synthetic polymers have been proposed for these purposes, including polystyrene, polylactic acid (PLA), polyglycolic acid, and polylactic-co-glycolic acid. Tissue regeneration experiments have demonstrated good biocompatibility of PLA scaffolds, despite the hydrophobic nature of the compound. The poor wettability of the PLA scaffold surface can be overcome in several ways: the surface can be pre-treated with poly-D-lysine or polyethyleneimine; the roughness and hydrophilicity of the PLA surface can be increased by plasma treatment; or PLA can be combined with natural polymers such as collagen or chitosan.
This work presents a study of the adhesion of both induced pluripotent stem cells (iPSCs) and mouse primary neuronal cell cultures on polylactide scaffolds of various types: oriented and non-oriented fibrous nonwoven materials and sponges, with and without plasma treatment, and composites with collagen and chitosan. To evaluate the effect of the different types of PLA scaffolds on the neuronal differentiation of iPSCs, we assessed the expression of NeuN in differentiated cells by immunostaining. iPSCs differentiated into neurons more effectively on PLA scaffolds with high adhesive properties for primary neuronal cells.

Keywords: PLA scaffold, neurons, neuronal differentiation, stem cells, polylactide

Procedia PDF Downloads 80
877 Chemical, Physical and Microbiological Characteristics of a Texture-Modified Beef-Based 3D Printed Functional Product

Authors: Elvan G. Bulut, Betul Goksun, Tugba G. Gun, Ozge Sakiyan Demirkol, Kamuran Ayhan, Kezban Candogan

Abstract:

Dysphagia, difficulty in swallowing solid foods and thin liquids, is one of the common health threats among the elderly, who require foods with modified texture in their diet. Although there are some commercial food formulations and hydrocolloids for thickening liquid foods for dysphagic individuals, there is still a need to develop new food products with enriched nutritional, textural, and sensory characteristics to nourish these patients safely. 3D food printing is an appealing alternative for creating personalized foods for this purpose, with an attractive shape and a soft, homogeneous texture. Hydrocolloids are generally used to modify texture and prevent phase separation. In our laboratory, an optimized 3D printed beef-based formulation specifically for people with swallowing difficulties was developed within a research project supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK Project # 218O017). The formulation optimized by response surface methodology was 60% beef powder, 5.88% gelatin, and 0.74% kappa-carrageenan (all on a dry basis). This product was enriched with freeze-dried beet, celery, and red capia pepper powders, butter, and whole milk. Proximate composition (moisture, fat, protein, and ash contents), pH value, CIE lightness (L*), redness (a*), yellowness (b*), and color difference (ΔE*) values were determined. Counts of total mesophilic aerobic bacteria (TMAB), lactic acid bacteria (LAB), mold and yeast, and total coliforms were performed, and detection of coagulase-positive S. aureus, E. coli, and Salmonella spp. was carried out. The 3D printed products had 60.11% moisture, 16.51% fat, 13.68% protein, and 1.65% ash; the pH value was 6.19, and the ΔE* value was 3.04. Counts of TMAB, LAB, mold and yeast, and total coliforms before and after 3D printing were 5.23-5.41 log cfu/g, < 1 log cfu/g, < 1 log cfu/g, and 2.39-2.15 log MPN/g, respectively.
Coagulase-positive S. aureus, E. coli, and Salmonella spp. were not detected in the products. The data obtained in this study on key product characteristics of the functional beef-based formulation provide an encouraging basis for future research and should be useful in designing mass production of 3D printed products of similar composition.

Keywords: beef, dysphagia, product characteristics, texture-modified foods, 3D food printing

Procedia PDF Downloads 107
876 Degradation of Emerging Pharmaceuticals by Gamma Irradiation Process

Authors: W. Jahouach-Rabai, J. Aribi, Z. Azzouz-Berriche, R. Lahsni, F. Hosni

Abstract:

Gamma irradiation applied to removing pharmaceutical contaminants from wastewater is an effective advanced oxidation process (AOP), considered an alternative to conventional water treatment technologies. With this purpose, the degradation efficiency of several detected contaminants under gamma irradiation was evaluated. Radiolysis of organic pollutants in aqueous solutions produces powerful reactive species, essentially the hydroxyl radical (·OH), able to destroy recalcitrant pollutants in water. The pharmaceuticals considered in this study are aqueous solutions of paracetamol, ibuprofen, and diclofenac at concentrations of 0.1-1 mmol/L, treated with irradiation doses from 3 to 15 kGy. The catalytic oxidation of these compounds under gamma irradiation was investigated using hydrogen peroxide (H₂O₂) as a convenient oxidant. The main parameters influencing the irradiation process, namely the irradiation dose, the initial concentration, and the oxidant (H₂O₂) volume, were optimized with the aim of achieving high degradation efficiency for the considered pharmaceuticals. Significant effects of these parameters appeared in the variation of the degradation efficiency, the chemical oxygen demand (COD) removal, and the concentration of radio-induced radicals, confirming their synergistic effect toward total mineralization. Pseudo-first-order reaction kinetics could be used to describe the degradation of these compounds. A detailed analytical study was carried out to quantify the detected radio-induced radicals, using electron paramagnetic resonance (EPR) spectroscopy and high-performance liquid chromatography (HPLC). All results showed that this process is effective for degrading many pharmaceutical products in aqueous solutions, owing to the strong oxidative properties of the generated radicals, mainly the hydroxyl radical.
Furthermore, the addition of an optimal amount of H₂O₂ improved the oxidative degradation and contributed to the high performance of this process at very low doses (0.5 and 1 kGy).
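
The pseudo-first-order kinetics mentioned above imply C(D) = C0 * exp(-k * D), where D is the absorbed dose and k is a dose constant. A minimal sketch in Python, using synthetic (hypothetical) data rather than the study's measurements, shows how k and the dose required for a given removal efficiency can be estimated:

```python
import numpy as np

# Pseudo-first-order radiolytic degradation: C(D) = C0 * exp(-k * D),
# where D is the absorbed dose (kGy) and k is the dose constant (kGy^-1).
# The data below are synthetic, for illustration only.
doses = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])   # absorbed dose, kGy
conc = 1.0 * np.exp(-0.35 * doses)                    # concentration, mmol/L

# Estimate k by a linear fit of ln(C) versus D (slope = -k).
k_est = -np.polyfit(doses, np.log(conc), 1)[0]

# Dose required for a target removal efficiency, e.g. 90% (C/C0 = 0.1):
d90 = np.log(10) / k_est
print(f"k = {k_est:.3f} kGy^-1, D90 = {d90:.1f} kGy")
```

With real measurements the log-linear fit also provides a quick check of whether the pseudo-first-order assumption holds (the residuals should show no trend in dose).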

Keywords: AOP, COD, hydroxyl radical, EPR, gamma irradiation, HPLC, pharmaceuticals

Procedia PDF Downloads 168
875 Creativity and Innovation in Postgraduate Supervision

Authors: Rajendra Chetty

Abstract:

The paper addresses two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative for solving complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach in which information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. The aim should also be to generate research that transcends the original disciplines, i.e., transdisciplinary research. Complexity is characteristic of the knowledge economy; hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many ‘ordinary’ studies that fall into the realm of credentialism and certification, as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper looks at models of supervision that differ from the dominant ‘apprentice’, or individual, approach. A reflective practitioner approach is used to discuss a range of supervision models that resonate with the principles of interdisciplinarity, growth in the postgraduate sector, and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands on limited supervision capacity at institutions. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal-article route to doctoral studies, and the professional PhD are some of the models that provide an alternative to the traditional approach.
International cooperation should be encouraged in the production of high-impact research, and institutions should be committed to stimulating international linkages that lead to co-supervision, the mobility of postgraduate students, and the global significance of postgraduate research. International linkages are also valuable in increasing supervision capacity at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) of collaborative research, as it provides the glue of shared interest, advantage, and commitment between supervisors. The students’ fields serve and inform the co-supervisors’ own research agendas and help to shape over-arching research themes through shared research findings.

Keywords: interdisciplinarity, internationalisation, postgraduate, supervision

Procedia PDF Downloads 235
874 Factors Controlling Marine Shale Porosity: A Case Study between Lower Cambrian and Lower Silurian of Upper Yangtze Area, South China

Authors: Xin Li, Zhenxue Jiang, Zhuo Li

Abstract:

Generally, shale gas is trapped within shale systems of low porosity and ultralow permeability, in free and adsorbed states. Its production is controlled by reservoir properties, in terms of occurrence phases, gas contents, and percolation characteristics, all of which are influenced by pore features. In this paper, porosity differences between the Lower Cambrian and Lower Silurian marine shales of the Sichuan Basin, South China, were explored. Both are marine shales with abundant oil-prone kerogen and rich siliceous minerals, but the Lower Cambrian shale (Ro = 3.56%) has a higher thermal maturity than the Lower Silurian shale (Ro = 2.31%). Samples were measured by a combination of organic geochemistry measurements, organic matter (OM) isolation, X-ray diffraction (XRD), N₂ adsorption, and focused ion beam milling with scanning electron microscopy (FIB-SEM). The Lower Cambrian shale presented relatively low pore properties, with an average pore volume (PV) of 0.008 ml/g, an average pore surface area (PSA) of 7.99 m²/g, and an average pore diameter (APD) of 5.94 nm. The Lower Silurian shale showed relatively high pore properties, with an average PV of 0.015 ml/g, an average PSA of 10.53 m²/g, and an APD of 18.60 nm. Additionally, fractal analysis indicated that the two shales present different pore morphologies, mainly caused by differences in their combinations of pore types. More specifically, OM-hosted pores with pin-hole shapes and dissolved pores with dead-end openings are the main types in the Lower Cambrian shale, while OM-hosted pores with a cellular structure are the main type in the Lower Silurian shale. Moreover, the porous characteristics of the isolated OM suggest that the OM of the Lower Silurian shale is more capable than that of the Lower Cambrian shale in terms of pore contribution.
The PV of isolated OM in the Lower Silurian shale was almost 6.6 times that in the Lower Cambrian shale, and the PSA of isolated OM in the Lower Silurian shale was almost 4.3 times that in the Lower Cambrian shale. However, no apparent differences existed among samples with various matrix compositions. At the late diagenetic or metamorphic stage, extensive diagenesis overprints the effects of minerals on pore properties, and OM plays the dominant role in pore development. Hence, the differences in pore features between the two marine shales highlight the effect of diagenetic degree on OM-hosted pore development. Consequently, the distinctive pore characteristics may be caused by different degrees of diagenetic evolution, even with similar matrix compositions.

Keywords: marine shale, Lower Cambrian, Lower Silurian, OM isolation, pore properties, OM-hosted pore

Procedia PDF Downloads 131
873 Assessing Trainee Radiation Exposure in Fluoroscopy-Guided Procedures: An Analysis of Hp(3)

Authors: Ava Zarif Sanayei, Sedigheh Sina

Abstract:

During fluoroscopically guided procedures, healthcare workers, especially radiology trainees, are at risk of elevated radiation exposure, and it is vital to prioritize their safety in such settings. However, there are limited data on their monthly or annual doses. This study aimed to evaluate the equivalent dose to the eyes of a student trainee, utilizing LiF:Mg,Ti (TLD-100) chips, at the radiology department of a hospital in Shiraz, Iran. Initially, the dosimeters underwent calibration with ISO/PTW-calibrated phantoms. Following this, a set of dosimeters was prepared to determine the Hp(3) value for a trainee working in the main operation room and controlled area over two months. Three TLD chips were placed in a holder attached to her eyeglasses. Upon completion of the period, the TLDs were read out using a Harshaw TLD reader. Results revealed an Hp(3) value of 0.31±0.04 mSv. Based on international recommendations, students in radiology training above 18 years of age have an annual dose limit of 0.6 rem (6 mSv). Assuming a 12-month workload, the trainee's radiation exposure stayed below the annual limit; however, the trainee workload may vary with assigned duties. The findings of this study indicate the need for consistent, precise dose monitoring in interventional radiology facilities. Students can undertake supervised internships of up to 500 hours, depending on their institution, in health-focused environments offering radiology services, such as clinics, diagnostic imaging centers, and hospitals; without such monitoring, occupational radiation dose limits might be exceeded. A 0.5 mm lead apron absorbs about 99% of the radiation; to ensure safety, technologists and staff need to wear this protective gear whenever they are in the room during procedures. Furthermore, maintaining a safe distance from the primary beam is crucial.
In cases where patients need assistance and must be held for imaging, additional protective equipment, including lead goggles, gloves, and thyroid shields, should be used for optimal safety.
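
The comparison with the annual limit above is simple arithmetic; a minimal sketch, using the values quoted in this abstract, makes the extrapolation explicit (the linear scaling from two months to a year is itself an assumption about constant workload):

```python
# Extrapolate a two-month Hp(3) reading to an annual estimate and
# compare it with the annual limit cited above. Values from the abstract;
# linear extrapolation assumes a constant monthly workload.
hp3_two_months = 0.31        # measured Hp(3), mSv over 2 months
annual_limit = 6.0           # mSv, students in radiology training above 18

annual_estimate = hp3_two_months * (12 / 2)   # simple linear extrapolation
print(f"Estimated annual Hp(3): {annual_estimate:.2f} mSv")
assert annual_estimate < annual_limit         # stays below the 6 mSv limit
```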

Keywords: annual dose limits, Hp(3), individual monitoring, radiation protection, TLD-100

Procedia PDF Downloads 69
872 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study of the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A model integrating gas flow dynamics and effective stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity medium region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, poroelastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is validated against field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, conductivity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.

Keywords: fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation

Procedia PDF Downloads 66
871 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small metal sculptures are proposed: the first by microcasting and the second by electroforming, both starting from models printed in 3D using an FDM (fused deposition modeling) printer or a DLP (digital light processing) printer. It is viable to replace the wax used in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured in polylactic acid (PLA) using a low-cost FDM printer. This material is used because its properties make it a viable substitute for wax within artistic casting by the lost-wax, ceramic-shell technique. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material; the shell is then heated to melt out the PLA, yielding an empty mold that is later filled with molten metal. It is verified that PLA models reduce cost and time compared with hand modeling in wax. In addition, parts can be manufactured by 3D printing that cannot be created with manual techniques. However, sculptures created with this technique have a size limit: when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP printer can produce more detailed and smaller pieces than an FDM printer, but such small models are quite difficult and complex to melt out using the lost-wax, ceramic-shell technique. As alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of placing the model in a cylinder into which the refractory material is poured. The molds are heated in an oven to melt out the model and fire them.
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute the metal properly. Because microcasting requires expensive materials and machinery to melt a piece of metal, electroforming is an alternative. Electroforming can use models of different materials; in this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for 3D printing, both with PLA and with resin, and first tests are being carried out to validate the electroforming of micro-sculptures printed in resin on a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 132
870 Analysis of Radiation-Induced Liver Disease (RILD) and Evaluation of Relationship between Therapeutic Activity and Liver Clearance Rate with Tc-99m-Mebrofenin in Yttrium-90 Microspheres Treatment

Authors: H. Tanyildizi, M. Abuqebitah, I. Cavdar, M. Demir, L. Kabasakal

Abstract:

Aim: Whole-liver radiation has a modest benefit in the treatment of unresectable hepatic metastases, but the radiation dose must be kept under control; otherwise, RILD complications may arise. In this study, we aimed to calculate the maximum permissible activity (MPA) and critical organ absorbed doses with MIRD methodology, to evaluate tumour doses for treatment response and whole-liver doses for RILD, and, additionally, to find the optimal liver function test. Materials and Methods: This study includes 29 patients who underwent Y-90 microsphere treatment at our nuclear medicine department. For dosimetry, 10 mCi of Tc-99m MAA was administered intravenously, and whole-body SPECT/CT images were taken one hour after injection. Taking the minimum therapeutic tumour dose as 120 Gy, the activities were calculated with MIRD methodology considering the volumetric tumour/liver ratio. A sub-working group of 11 randomly selected patients was created, and the liver clearance rate with Tc-99m-Mebrofenin was calculated according to the Ekman formalism. Results: The volumetric tumour/liver ratio was 33-66% (maximum tolerable dose (MTD) 48-52 Gy) for 4 patients and less than 33% (MTD 72 Gy) for 25 patients. According to these results, the average administered activity, mean liver dose, and mean tumour dose were 1793.9±1.46 MBq, 32.86±0.19 Gy, and 138.26±0.40 Gy, respectively. RILD was not observed in any patient. In the sub-working group, the correlations between the calculated activities and bilirubin, albumin, INR (which indicate the presence of liver disease and its degree), and liver clearance with Tc-99m-Mebrofenin were r = 0.49, r = 0.27, r = 0.43, and r = 0.57, respectively. Discussion: The minimum tumour dose for a positive dose-response relation was taken as 120 Gy. If the volumetric tumour/liver ratio was > 66%, the whole-liver dose limit was 30 Gy; if 33-66%, 48 Gy; if < 33%, 72 Gy.
These dose limits did not produce RILD. Clearance measurement with Mebrofenin was concluded to be the best method to determine liver function; therefore, the liver clearance rate with Tc-99m-Mebrofenin should be considered in yttrium-90 microsphere dosimetry calculations.
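
The dose tiers quoted in the discussion can be sketched as a simple lookup, together with a MIRD-type activity estimate. This is an illustrative sketch only: the function names are hypothetical, and the 49.67 Gy·kg/GBq constant for Y-90 is the value commonly quoted in the radioembolization literature, not taken from this paper; any clinical use must follow the local protocol.

```python
def mtd_whole_liver(tumour_liver_ratio: float) -> float:
    """Maximum tolerable whole-liver dose (Gy) as a function of the
    volumetric tumour/liver ratio, following the tiers quoted above."""
    if tumour_liver_ratio > 0.66:
        return 30.0
    if tumour_liver_ratio >= 0.33:
        return 48.0
    return 72.0


def activity_for_dose(dose_gy: float, mass_kg: float) -> float:
    """Y-90 activity (GBq) delivering `dose_gy` to tissue of mass `mass_kg`,
    using the MIRD-type relation D [Gy] ~= 49.67 * A [GBq] / M [kg].
    The 49.67 constant is the commonly quoted value (assumption here)."""
    return dose_gy * mass_kg / 49.67


# Example: a 0.5 kg tumour volume targeted at the 120 Gy minimum dose.
print(mtd_whole_liver(0.25), activity_for_dose(120.0, 0.5))
```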

Keywords: clearance, dosimetry, liver, RILD

Procedia PDF Downloads 432
869 Valorization of Mineralogical Byproduct TiO₂ Using Photocatalytic Degradation of Organo-Sulfur Industrial Effluent

Authors: Harish Kuruva, Vedasri Bai Khavala, Tiju Thomas, K. Murugan, B. S. Murty

Abstract:

Industries are growing rapidly to boost national economies, and the biggest problem they face is wastewater treatment. Releasing industrial wastewater directly into rivers is harmful to human life and a threat to aquatic life. Industrial effluents contain many dissolved solids, organic and inorganic compounds, salts, toxic metals, etc. Phenols, pesticides, dioxins, herbicides, pharmaceuticals, and textile dyes are typical industrial effluents and are particularly challenging to degrade in an environmentally friendly way. Advanced techniques such as electrochemical treatment, oxidation processes, and valorization have been applied to industrial wastewater treatment, but they are not cost-effective. Degrading industrial effluent is complicated compared with commercially available pollutants (dyes) like methylene blue, methyl orange, rhodamine B, etc. TiO₂ is one of the most widely used photocatalysts; it can degrade organic compounds using sunlight and the moisture available in the environment (converting organic compounds to CO₂ and H₂O). TiO₂ is widely studied in photocatalysis because of its low cost, non-toxicity, high availability, and chemical and physical stability in the atmosphere. This study focused on valorizing the mineralogical product TiO₂ (IREL, India). This mineralogical-grade TiO₂ was characterized, and its structural and photocatalytic properties (industrial effluent degradation) were compared with those of the commercially available Degussa P-25 TiO₂. This mineralogical TiO₂ showed the best photocatalytic properties (particle shape: spherical; size: 30±5 nm; surface area: 98.19 m²/g; bandgap: 3.2 eV; phase: 95% anatase and 5% rutile). The industrial effluent was characterized by TDS (total dissolved solids), ICP-OES (inductively coupled plasma optical emission spectroscopy), a CHNS (carbon, hydrogen, nitrogen, and sulfur) analyzer, and FT-IR (Fourier-transform infrared spectroscopy).
It was observed to contain high sulfur (S = 11.37±0.15%), organic compounds (C = 4±0.1%, H = 70.25±0.1%, N = 10±0.1%), heavy metals, and other dissolved solids (60 g/L). The organo-sulfur industrial effluent was then degraded by photocatalysis with the industrial mineralogical TiO₂. In this study, the effluent pH (2.5 to 10) and catalyst loading (50 to 150 mg) were varied, while the effluent concentration (0.5 Abs) and light exposure time (2 h) were kept constant. The best degradation, about 80% of the industrial effluent, was achieved at pH 5 with 150 mg of TiO₂. The FT-IR results and the CHNS analyzer confirmed that the sulfur and organic compounds were degraded.

Keywords: wastewater treatment, industrial mineralogical product TiO₂, photocatalysis, organo-sulfur industrial effluent

Procedia PDF Downloads 111
868 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It serves two main tasks: displaying results by coloring items according to their class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, whereby all neighbors in a high-dimensional space cannot be represented correctly in a low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built from a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the other relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, each time with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
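
The reuse idea can be approximated with off-the-shelf tools: embed a reference snapshot once, then seed the next embedding from the reference positions so that clusters stay put. The sketch below uses scikit-learn's TSNE with an array `init` and a nearest-neighbour match as a simplified stand-in for the paper's support-matching cost; the data are synthetic and the matching rule is an assumption, not the authors' exact optimization.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Two synthetic "snapshots" of the same evolving dataset (hypothetical data).
X_ref = rng.normal(size=(100, 10))
X_new = X_ref + 0.05 * rng.normal(size=(100, 10))   # slight drift over time

# 1. Reference embedding, computed once.
emb_ref = TSNE(n_components=2, perplexity=15,
               random_state=0).fit_transform(X_ref)

# 2. Seed the new embedding from the reference: each new point starts at the
#    embedded position of its nearest reference neighbour in the original
#    space, so cluster positions are approximately preserved.
nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
idx = nn.kneighbors(X_new, return_distance=False).ravel()
init = emb_ref[idx].astype(np.float64)

emb_new = TSNE(n_components=2, perplexity=15, init=init,
               random_state=0).fit_transform(X_new)
print(emb_new.shape)
```

Repeating step 2 with each newly obtained embedding as the reference gives the chain of coherent embeddings described above, at the cost of one t-SNE run per snapshot.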

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 140
867 Study of Properties of Concretes Made of Local Building Materials and Containing Admixtures, and Their Further Introduction in Construction Operations and Road Building

Authors: Iuri Salukvadze

Abstract:

The development of the Georgian economy largely depends on the effective use of the country's transit potential. The value of Georgia as part of the Europe-Asia corridor has increased, raising the interest of western and eastern countries in Georgia as a country lying on the transit axis; this implies the creation and development of transit infrastructure in Georgia. It is important to use compacted concrete with additives in the modern road construction industry. Even in the 21st century, concrete remains the principal structural building material; therefore, innovative, economical, and environmentally friendly technologies are needed. The Georgian construction market requires the use of new-generation concrete and the adaptation of nanotechnologies to local realities, which will make it possible to create multifunctional, highly effective nano-engineered materials. It is therefore highly important to study their physical and mechanical properties. The study of compacted concrete with additives is necessary for its future use in road construction and for increasing the durability of roads in Georgia. The aim of the research is to study the physical and mechanical properties of compacted concrete with additives based on local materials. Any experimental study needs, on the one hand, a large number of experiments to achieve high accuracy and, on the other hand, an optimal number of experiments carried out at minimal cost and in the shortest possible time. To solve this problem in practice, statistical and mathematical methods of experiment planning can be used. For the study of material properties, we will use the hypothesis that measurement results follow the normal distribution, according to which the scatter of the obtained results is caused by the error of the method and the inhomogeneity of the object.
As a result of the study, we will obtain a durable compacted concrete with additives for motor roads that will improve the road infrastructure and yield savings during the construction and operation of the roads.

Keywords: construction, seismic protection systems, soil, motor roads, concrete

Procedia PDF Downloads 235
866 Building Exoskeletons for Seismic Retrofitting

Authors: Giuliana Scuderi, Patrick Teuffel

Abstract:

The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and conceptual method to preserve materials and object, at the same time providing new performances. An integrate intervention between civil engineering, building physics and architecture can convert the social housing districts from a critical part of the city to a strategic resource of revitalization. Referring to bio-mimicry principles the present research proposes a taxonomy with the exoskeleton of the insect, an external, light and resistant armour whose role is to protect the internal organs from external potentially dangerous inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetical to the functional. This study evaluates the structural efficiency of shape memory alloys devices (SMADs) connecting the “building exoskeleton” with the existing structure to rehabilitate, in order to prevent the out-of-plane collapse of walls and for the passive dissipation of the seismic energy, with a calibrated operability in relation to the intensity of the horizontal loads. The two case studies of a masonry structure and of a masonry structure with concrete frame are considered, and for each case, a theoretical social housing building is exposed to earthquake forces, to evaluate its structural response with or without SMADs. The two typologies are modelled with the finite element program SAP2000, and they are respectively defined through a “frame model” and a “diagonal strut model”. 
In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely push-over analysis and time-history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, suggesting a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstorey drift, but at the same time the need to improve the hysteretic behaviour in order to maximise the passive dissipation of seismic energy.
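
The time-history analysis mentioned above is run in SAP2000, but the underlying computation can be illustrated with a minimal single-degree-of-freedom sketch. The Newmark average-acceleration integrator below is a standard textbook scheme, not the authors' model; the mass, stiffness and damping values are purely illustrative.

```python
import numpy as np

def newmark_sdof(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Linear SDOF response to ground acceleration ag via Newmark's method
    (average-acceleration variant, unconditionally stable)."""
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * np.asarray(ag, dtype=float)          # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        dp_eff = ((p[i + 1] - p[i])
                  + (m / (beta * dt) + gamma / beta * c) * v[i]
                  + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a[i])
        du = dp_eff / k_eff
        dv = (gamma / (beta * dt) * du - gamma / beta * v[i]
              + dt * (1 - gamma / (2 * beta)) * a[i])
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# example: 5%-damped oscillator (Tn = 1 s) under a constant unit ground pulse
m, k = 1.0, 4.0 * np.pi**2
c = 2 * 0.05 * np.sqrt(k * m)
u, v, acc = newmark_sdof(m, c, k, ag=-np.ones(5001), dt=0.01)
```

With damping, the response settles at the static displacement p/k, a quick sanity check for any such integrator.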

Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting

Procedia PDF Downloads 417
865 Examining Terrorism through a Constructivist Framework: Case Study of the Islamic State

Authors: Shivani Yadav

Abstract:

The study of terrorism lends itself to the constructivist framework, as constructivism focuses on the importance of ideas and norms in shaping interests and identities. Constructivism is pertinent to understanding the phenomenon of a terrorist organization like the Islamic State (IS), which opportunistically utilizes radical ideas and norms to shape its ‘politics of identity’. This ‘identity’, which governs the preferences and interests of actors, in turn shapes actions. The paper argues that an effective counter-terrorism policy must recognize the importance of ideas in order to counter the threat arising from acts of radicalism and terrorism. Traditional theories of international relations, with their emphasis on a state-centric security problematic, exhibit several limitations in interpreting the phenomenon of terrorism. With the changing global order, these theories have failed to adapt to the changing dimensions of terrorism, especially ‘newer’ actors like IS. The paper observes that IS distinguishes itself from other terrorist organizations in the way it recruits and spreads its propaganda. Not only are its methods different, but its tools (like social media) are also new. Traditionally, too, force alone has rarely been sufficient to counter terrorism, and it seems especially unlikely to completely root out an organization like IS. The time is ripe to change the discourse around terrorism and counter-terrorism strategies. The counter-terrorism measures adopted by states, which primarily focus on mitigating threats to national security, are preoccupied with the statist objectives of continuing state institutions and maintaining order. This limitation prevents these theories from addressing questions of justice and the ‘human’ aspects of ideas and identity. These counter-terrorism strategies adopt a problem-solving approach that attempts to treat the symptoms without diagnosing the disease.
Hence, these restrictive strategies fail to look beyond calculated retaliation against violent actions in order to address the underlying causes of discontent, pertaining to ‘why’ actors turn violent in the first place. What traditional theories also overlook is that overt acts of violence may have several causal factors behind them, some of which are rooted in the structural state system. Exploring these root causes through the constructivist framework helps to decipher the process of the ‘construction of terror’ and to move beyond the ‘what’ in theorization in order to describe ‘why’, ‘how’ and ‘when’ terrorism occurs. The study of terrorism would benefit greatly from a constructivist analysis exploring non-military options for countering the ideology propagated by IS.

Keywords: constructivism, counter terrorism, Islamic State, politics of identity

Procedia PDF Downloads 182
864 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Transient ischaemic attacks (TIAs) are warning signs for future strokes. TIA patients are at increased risk of stroke and cardiovascular events after a first episode. The majority of studies on TIA have focused on the occurrence of these ancillary events after a TIA; long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first episode of a TIA, using anonymised electronic health records (EHRs). We conducted a retrospective case-control study using electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls on age, sex and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model which included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared to matched controls (HR = 3.04, 95% CI (2.91-3.18)). The HRs for cases aged 61-70 years, 71-76 years and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07) and 1.52 (1.15-1.97), respectively, compared to matched controls. Aspirin provided long-term survival benefits to cases: cases aged 39-60 years on aspirin had HRs of 0.93 (0.84-1.00), 0.90 (0.82-0.98) and 0.88 (0.80-0.96) at 5, 10 and 15 years, respectively, compared to cases in the same age group who were not on antiplatelets. Similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls.
Our study highlights the excess long-term risk of death in TIA patients and cautions that TIA should not be treated as a benign condition. The study further recommends aspirin as a better option for secondary prevention in TIA patients than the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease.
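
As a rough illustration of why a Weibull-Cox model with both scale and shape effects matters, the sketch below (plain Python with illustrative, unfitted coefficients, not the study's estimates) shows that covariate effects on the shape parameter make the hazard ratio vary over time, whereas scale-only effects yield a constant HR as in an ordinary proportional-hazards model.

```python
import numpy as np

def weibull_hazard(t, x, beta_scale, beta_shape):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1), with covariates x
    acting on both scale (lam) and shape (k) through log-linear links."""
    lam = np.exp(np.dot(beta_scale, x))   # scale: larger lam -> longer survival
    k = np.exp(np.dot(beta_shape, x))     # shape: k > 1 -> hazard rises with t
    return (k / lam) * (t / lam) ** (k - 1)

def hazard_ratio(t, x_case, x_ctrl, beta_scale, beta_shape):
    """HR of a case versus a matched control at time t."""
    return (weibull_hazard(t, x_case, beta_scale, beta_shape)
            / weibull_hazard(t, x_ctrl, beta_scale, beta_shape))

# illustrative coefficients: x = [intercept, TIA indicator]
beta_scale = np.array([2.0, -0.5])  # TIA shortens survival (smaller scale)
beta_shape = np.array([0.1, 0.2])   # TIA also steepens the hazard over time
x_case, x_ctrl = np.array([1.0, 1.0]), np.array([1.0, 0.0])
hr_5y = hazard_ratio(5.0, x_case, x_ctrl, beta_scale, beta_shape)
hr_10y = hazard_ratio(10.0, x_case, x_ctrl, beta_scale, beta_shape)
```

When the covariate's shape coefficient is zero, the ratio collapses to a time-constant (lam_ctrl/lam_case)**k, which is the familiar proportional-hazards behaviour.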

Keywords: dual antiplatelet therapy (DAPT), general practice, multiple imputation, The Health Improvement Network (THIN), hazard ratio (HR), Weibull-Cox model

Procedia PDF Downloads 140
863 Prevalence and Associated Factors of Overweight and Obesity in Children with Intellectual Disability: A Cross-Sectional Study among Chinese Children

Authors: Jing-Jing Wang, Yang Gao, Heather H. M. Kwok, Wendy Y. J. Huang

Abstract:

Objectives: Intellectual disability (ID) ranks among the top 20 most costly disorders. A child with ID creates a wide set of challenges for the individual, family, and society, and overweight and obesity aggravate those challenges. People with ID have the right to attain optimal health like the rest of the population and should be given priority in eliminating existing health inequities. The childhood obesity epidemic and its associated factors among children in general have been well documented, while knowledge about overweight and obesity in children with ID is scarce. Methods: A cross-sectional study was conducted among 524 Chinese children with ID (males: 68.9%, mean age: 12.2 years) in Hong Kong in 2015. Children’s height and weight were measured at school. Parents, in the presence of their children, completed a self-administered questionnaire at home about the children’s physical activity (PA), eating habits, and sleep duration in a typical week, as well as parenting practices regarding children’s eating and PA and their socio-demographic characteristics. Multivariate logistic regression estimated the potential risk factors for children being overweight. Results: The prevalence of overweight and obesity in children with ID was 31.3%, higher than in their general counterparts (18.7%-19.9%). Multivariate analyses revealed that the risk factors for overweight and obesity in children with ID included: comorbidity with autism, maternal overweight or obesity, parenting practices with less pressure to eat more, shorter sleep duration, longer periods of sedentary behavior, and higher intake frequencies of sweetened food, fried food, and meats, fish, and eggs. Children born in other places, having snacks more frequently, and having irregular meals were also more likely to be overweight or obese, with marginal significance.
Conclusions: Children with ID are more vulnerable to being overweight or obese than their typically developing counterparts. The risk factors identified in this study point to a multifaceted approach involving parents, as well as the modification of some of the children’s unhealthy behaviors, to help them achieve a healthy weight.
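
The multivariate logistic regression used in such analyses can be sketched in a few lines. The gradient-ascent fit below runs on synthetic data with a single hypothetical "short sleep" predictor and made-up coefficients (not the study's data), and shows how an odds ratio for an overweight risk factor is recovered.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    """Multivariate logistic regression fitted by gradient ascent on the
    log-likelihood; returns the coefficient vector (first column = intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(overweight)
        w += lr * X.T @ (y - p) / len(y)   # average score (gradient) step
    return w

# synthetic data: short sleep raises the odds of overweight (true log-OR = 0.8)
rng = np.random.default_rng(0)
short_sleep = rng.integers(0, 2, size=4000)
X = np.column_stack([np.ones(4000), short_sleep])
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * short_sleep)))
y = (rng.random(4000) < p_true).astype(float)

w_hat = fit_logistic(X, y)
odds_ratio = np.exp(w_hat[1])   # OR for short sleep vs adequate sleep
```

In practice one would use a statistics package that also reports standard errors and P-values, as quoted in the abstract; the sketch only shows the point estimate.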

Keywords: prevalence, risk factors, obesity, children with disability

Procedia PDF Downloads 128
862 Effectiveness of Educational and Supportive Interventions for Primiparous Women on Breastfeeding Outcomes: A Systematic Review and Meta-Analysis

Authors: Mei Sze Wong, Huanyu Mou, Wai-Tong Chien

Abstract:

Background: Breastmilk is the most nutritious food for infants, supporting their growth and protecting them from infection. Breastfeeding promotion is therefore an important topic for infant health, and different educational and supportive intervention approaches targeted at the antenatal period, the postnatal period, or both have been proposed to promote and sustain exclusive breastfeeding. This systematic review aimed to identify effective approaches of educational and supportive interventions to improve breastfeeding. Outcome measures were exclusive breastfeeding, partial breastfeeding, and breastfeeding self-efficacy, analyzed at ≤ 2 months, 3-5 months, and ≥ 6 months postpartum. Method: Eleven electronic databases and the reference lists of eligible articles were searched. English- or Chinese-language articles reporting randomized controlled trials of educational and supportive interventions with the above breastfeeding outcomes over the last 20 years were included. Quality appraisal and risk of bias of the studies were checked with the Effective Public Health Practice Project tool and the Revised Cochrane risk-of-bias tool, respectively. Results: 13 articles met the inclusion criteria and had acceptable quality and risk of bias. The optimal structure, format, and delivery of the interventions that significantly increased the exclusive breastfeeding rate at ≤ 2 months and ≥ 6 months and breastfeeding self-efficacy at ≤ 2 months included: (a) delivery from the antenatal to the postnatal period, (b) multiple components involving antenatal group education, postnatal individual breastfeeding coaching and telephone follow-ups, (c) both individual and group formats, (d) guidance by self-efficacy theory, and (e) ≥ 3 sessions.
Conclusion: The findings showed that multicomponent, theory-based interventions with ≥ 3 sessions, delivered across the antenatal and postnatal periods and using both face-to-face teaching and telephone follow-ups, can be useful in enhancing the exclusive breastfeeding rate for more than 6 months and breastfeeding self-efficacy over the first two months postpartum.
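
The pooling step of such a meta-analysis can be illustrated with standard fixed-effect inverse-variance weighting. The sketch below uses made-up study effect sizes, not the review's data.

```python
import math

def pool_fixed_effect(log_rr, se):
    """Fixed-effect inverse-variance pooling of per-study log risk ratios.
    Returns the pooled log RR and its standard error."""
    weights = [1.0 / s**2 for s in se]                        # precision weights
    pooled = sum(w * e for w, e in zip(weights, log_rr)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# hypothetical studies: log risk ratios for exclusive breastfeeding at 6 months
est, se = pool_fixed_effect(log_rr=[0.2, 0.4, 0.3], se=[0.10, 0.10, 0.05])
rr = math.exp(est)   # pooled risk ratio on the natural scale
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is why the third study here pulls the result toward its own effect size.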

Keywords: breastfeeding self-efficacy, education, exclusive breastfeeding, primiparous, support

Procedia PDF Downloads 129
861 Formulation and Test of a Model to Explain the Complexity of Road Accident Events in South Africa

Authors: Dimakatso Machetele, Kowiyou Yessoufou

Abstract:

Whilst several studies have indicated that road accident events may be more complex than previously thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity) model to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a quadratic trend given by y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), where y is the death rate and x the year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003) and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the number of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001) and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes.
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to remain roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties was also predicted to remain roughly constant at 93,531 over time, although this number may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecasted to reach 11,241 fatal crashes within the next 10 years, with the worst-case scenario estimated at 19,034 within the same period. Finally, the number of fatalities is also predicted to remain roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trend of road accidents, casualties, fatal crashes, and deaths in South Africa.
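
The quadratic trend reported above comes from an ordinary least-squares polynomial fit. The sketch below generates a hypothetical series from the abstract's fitted equation (the underlying yearly data are not given in the abstract) and recovers the coefficients with numpy.polyfit.

```python
import numpy as np

# quadratic fatality-rate trend from the abstract: y = -0.0114x^2 + 1.2378x - 2.2627
a_true, b_true, c_true = -0.0114, 1.2378, -2.2627
years = np.arange(1.0, 81.0)                  # x = year index (hypothetical span)
rate = a_true * years**2 + b_true * years + c_true

# fit a degree-2 polynomial by least squares; coeffs = [a, b, c]
coeffs = np.polyfit(years, rate, deg=2)

# a negative leading coefficient means the parabola peaks at x = -b / (2a)
peak_year = -coeffs[1] / (2 * coeffs[0])
```

On real data one would fit the observed (noisy) rates rather than the trend itself, and the R² of 0.76 quoted in the abstract reflects exactly that residual noise.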

Keywords: road accidents, South Africa, statistical modelling, trends

Procedia PDF Downloads 156