Search results for: bias injection attack
247 Change of Bone Density with Treatments of Intravenous Zoledronic Acid in Patients with Osteoporotic Distal Radial Fractures
Authors: Hong Je Kang, Young Chae Choi, Jin Sung Park, Isac Kim
Abstract:
Purpose: Osteoporotic fractures are an important concern among postmenopausal women. When osteoporotic distal radial fractures occur, the underlying osteoporosis must be treated to prevent subsequent hip and spine fractures. Intravenous injection of zoledronic acid is expected to improve the prevention of osteoporotic fractures. Many articles have reported the effect of intravenous zoledronic acid on BMD in patients with hip and spine fractures or in non-fracture patients with low BMD; however, its effect in patients with distal radial fractures has rarely been reported. Therefore, the authors studied the effect of zoledronic acid on BMD score, bone union, and bone turnover markers in patients who underwent volar plating for osteoporotic distal radial fractures. Materials: From April 2018 to May 2022, postmenopausal women aged 55 years or older who had osteoporotic distal radial fractures and underwent surgical treatment using volar plate fixation were included. Zoledronic acid (5 mg) was injected intravenously between 3 and 5 days after surgery. BMD scores one year after the operation were compared with the initial scores. Bone turnover markers were measured before surgery, after 3 months, and after 1 year. Radiological follow-up was performed every 2 weeks until bone union and at 1 year postoperatively. Clinical outcome indicators were measured one year after surgery, and the occurrence of side effects was observed. Result: A total of 23 patients were included. The lumbar BMD T score improved from -2.89±0.2 before surgery to -2.27±0.3 one year after surgery (p=0.012), and the femoral neck BMD T score from -2.45±0.3 before surgery to -2.36±0.3 after one year (p=0.041); both changes were statistically significant. Among bone resorption markers, serum CTX-1 was 337.43±10.4 pg/mL before surgery, 160.86±8.7 pg/mL after three months (p=0.022), and 250.12±12.7 pg/mL after one year (p=0.031). Urinary NTX-1 was 39.24±2.2 ng/mL before surgery, 24.46±1.2 ng/mL after three months (p=0.014), and 30.35±1.6 ng/mL after one year (p=0.042). Among bone formation markers, serum osteocalcin was 13.04±1.1 ng/mL before surgery, 8.84±0.7 ng/mL after 3 months (p=0.037), and 11.1±0.4 ng/mL after one year (p=0.026). Serum bone-specific ALP was 11.24±0.9 IU/L before surgery, 8.25±0.9 IU/L after three months (p=0.036), and 10.2±0.9 IU/L after one year (p=0.027). All changes were statistically significant. All cases showed bone union within an average of 6.91±0.3 weeks without any signs of failure. Complications such as headache, nausea, muscle pain, and fever were found in 5 of 23 cases (21.7%). Conclusion: When zoledronic acid was used, BMD improved in both the spine and the femoral neck, which may reduce the likelihood and subsequent morbidity of additional osteoporotic fractures. This study is meaningful in that patients with distal radial fractures given intravenous bisphosphonate early after the fracture showed no difference in the duration of bone union or in radiological characteristics, while improvement in BMD and bone turnover indicators was measured.
Keywords: zoledronic acid, BMD, osteoporosis, distal radius
Procedia PDF Downloads 115
246 The Double Standard: Ethical Issues and Gender Discrimination in Traditional Western Ethics
Authors: Merina Islam
Abstract:
Feminists have identified the traditional western ethical theories as basically male-centered, and they are committed to developing a critique showing how traditional western ethics, together with traditional philosophy, remained gender-biased throughout, irrespective of its claim to gender neutrality. This exclusion of women's experiences from moral discourse is justified on the ground that women cannot be moral agents, since they are not rational. By way of entailment, we are thus led to the position that the virtues of traditional ethics, so viewed, can be nothing but rational and hence male. The ears of traditional western ethicists have been attuned to male rather than female ethical voices. Right from Plato, Aristotle, Augustine, Aquinas, Rousseau, Kant, and Hegel, and even in philosophers like Freud, Schopenhauer, and Nietzsche, among many others, the dualism between reason and passion, or mind and body, gained prominence. These thinkers have either intentionally excluded women or else used a certain male moral experience as the standard for all moral experience, thereby resulting once again in the exclusion of women's experiences. Men are identified with rationality and hence contrasted with women, whose sphere is believed to be that of emotion and feeling. This act of exclusion of women's experience from moral discourse has given birth to a tradition that emphasizes reason over emotion, the universal over the particular, and justice over caring. That patriarchy's use of gender distinctions in the realm of ethics has resulted in gender discrimination is an undeniable fact. Hence women's moral agency is said to have often been denied, not simply by the exclusion of women from moral debate or sheer ignorance of their contributions, but through philosophical claims to the effect that women lack moral reason. Traditional or mainstream ethics cannot justify its claims to universality, objectivity, and gender neutrality, the standards from which the legitimacy of its various moral maxims and principles was drawn. Through the association of masculine values with reason (and the feminine with the irrational), the standard prototype of moral virtues was created. The feminist critique of traditional mainstream ethics is based on the charge that, because of its inherent gender bias, ethics has so far been justifying discrimination in the name of gender distinctions. In this paper, an attempt is made to examine the gender-biasedness of traditional ethics and to show to what extent traditional ethics is male-centered and consequently fails to justify its claims to universality and gender neutrality.
Keywords: ethics, gender, male-centered, traditional
Procedia PDF Downloads 427
245 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; on the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data of 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity concern regarding the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses: the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically weighted regression
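For readers unfamiliar with the method, below is a minimal sketch of the GWR step described above: one local weighted least-squares fit per observation, with Gaussian distance-kernel weights. The toy data, bandwidth, and variable names are illustrative, not the study's actual specification.

```python
import numpy as np

def gwr_fit(X, y, coords, bandwidth):
    """Minimal geographically weighted regression: a local WLS fit per
    observation, weighted by a Gaussian kernel of distance to that point."""
    n, k = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])                 # add intercept column
    betas = np.empty((n, k + 1))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        W = np.diag(w)
        # local weighted least squares: beta_i = (X'WX)^-1 X'Wy
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas

# Toy data: price explained by a flood-potential dummy whose (negative)
# coefficient varies by location, mimicking a spatially varying hedonic price.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
flood = rng.integers(0, 2, size=200).astype(float)
local_effect = -5.0 - 0.5 * coords[:, 0]        # flood discount grows eastward
price = 100 + local_effect * flood + rng.normal(0, 1, 200)

betas = gwr_fit(flood.reshape(-1, 1), price, coords, bandwidth=2.0)
print(betas[:5, 1])   # spatially varying flood coefficients
```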
Procedia PDF Downloads 290
244 DC Bus Voltage Ripple Control of Photo Voltaic Inverter in Low Voltage Ride-Through Operation
Authors: Afshin Kadri
Abstract:
Using renewable energy resources (RES) as a type of distributed generation (DG) unit is developing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some of the operational aspects of these grids. Most RES require power-electronic-based interfaces for connection to AC systems, consisting of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Two curves define these conditions: the magnitude of the reactive component of current as a function of the voltage drop value, and the minimum time for which the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter, among them an increased amplitude of the high-frequency components of the injected current and, for inverters connected to photovoltaic panels, operation away from the maximum power point. The important phenomenon in these conditions is the ripple in the DC bus voltage, which affects the operation of the inverter directly and indirectly. The losses in the DC bus capacitors, which are electrolytic capacitors, increase their temperature and decrease their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and is responsible for maximum power point tracking, these ripples cause oscillations around the operating point and decrease the generated energy. The traditional method to eliminate these ripples is a bidirectional converter on the DC bus, which works as a buck-boost converter and transfers the ripples to its own DC bus. In spite of eliminating the ripples in the DC bus, this method cannot solve the reliability problem, because it still uses an electrolytic capacitor in its DC bus. In this work, a control method is proposed which uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. The proposed method works on the basis of constant power control; in this way, in addition to supporting the amplitude of the grid voltage, it stabilizes the grid frequency by injecting active power. The proposed method can also eliminate the DC bus ripples during deep voltage drops, which would otherwise increase the amplitude of the reference current beyond the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phase and amplitude of the other phases, is adjusted so that the ripples in the DC bus are eliminated; the generated power, however, decreases.
Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter
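As a rough numeric illustration of the constant-power idea behind the proposed ripple elimination (not the paper's actual fourth-leg controller), the sketch below chooses phase currents proportional to the instantaneous phase voltages so that the total instantaneous power, and hence the power pulled through the DC bus, stays flat under an unbalanced sag. All ratings are assumed values.

```python
import numpy as np

# Constant-power control under an unbalanced sag: choosing
#   i_k = P * v_k / (v_a^2 + v_b^2 + v_c^2)
# makes p(t) = sum_k v_k * i_k = P constant by construction, so no
# double-line-frequency ripple is reflected onto the DC bus.
P = 10e3                                  # commanded active power, W (assumed)
f = 50.0
t = np.linspace(0, 0.04, 2000)            # two fundamental cycles
Vm = np.array([230.0, 230.0, 230.0]) * np.sqrt(2)
Vm[0] *= 0.5                              # 50 % sag on phase a
phases = np.array([0.0, -2*np.pi/3, 2*np.pi/3])

v = Vm[:, None] * np.sin(2*np.pi*f*t + phases[:, None])   # 3 x N phase voltages
i = P * v / np.sum(v**2, axis=0)                           # unbalanced currents
p = np.sum(v * i, axis=0)                                  # instantaneous power

print(np.ptp(p))   # ~0: power is flat despite the sag, i.e. no 2f ripple
```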
Procedia PDF Downloads 76
243 Different Processing Methods to Obtain a Carbon Composite Element for Cycling
Authors: Maria Fonseca, Ana Branco, Joao Graca, Rui Mendes, Pedro Mimoso
Abstract:
The present work is focused on the production of a carbon composite element for cycling through two different techniques, namely blow-molding and high-pressure resin transfer molding (HP-RTM). The main objective of this work is to compare both processes for producing carbon composite elements for the cycling industry. It is well known that carbon composite components for cycling are produced mainly through blow-molding; however, this technique depends strongly on manual labor, resulting in a time-consuming production process. Comparatively, HP-RTM offers a more automated process, which should lead to higher production rates. Nevertheless, the elements produced through both techniques must be compared in order to assess whether the final products comply with the required standards of the industry. The main difference between the two techniques lies in the material used. Blow-molding uses carbon prepreg (carbon fibres pre-impregnated with a resin system), and the material is laid up by hand, piece by piece, on a mould or on a hard male; after that, the material is cured at a high temperature. In the HP-RTM technique, by contrast, dry carbon fibres are placed in a mould, and resin is then injected at high pressure. After research into the best material systems (prepregs and braids) and suppliers, an element similar to a handlebar was designed for construction. The next step was to perform FEM simulations in order to determine the best layup of the composite material. The simulations were done for the prepreg material, and the obtained layup was transposed to the braids. The selected material for the blow-molding technique was a prepreg with T700 carbon fibre (24K) and an epoxy resin system. For HP-RTM, carbon fibre elastic UD tubes and ±45° braids were used, with both 3K and 6K filaments per tow, and the resin system was likewise an epoxy. After the simulations for the prepreg material, the optimized layup was [45°, -45°, 45°, -45°, 0°, 0°]. For HP-RTM, the transposed layup was [±45° (6K); 0° (6K); partial ±45° (6K); partial ±45° (6K); ±45° (3K); ±45° (3K)]. The mechanical tests showed that both elements can withstand the maximum load (in this case, 1000 N); however, the one produced through blow-molding can support higher loads (≈1300 N against 1100 N for HP-RTM). Regarding the fibre volume fraction (FVF), the HP-RTM element has a slightly higher value (> 61% compared to 59% for the blow-molding technique). Optical microscopy showed that both elements have a low void content. In conclusion, the elements produced using HP-RTM compare well with the ones produced through blow-molding, both in mechanical testing and in visual aspect. Nevertheless, there is still room for improvement in the HP-RTM elements, since the layup of the braids and UD tubes could be optimized.
Keywords: HP-RTM, carbon composites, cycling, FEM
Procedia PDF Downloads 132
242 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of artificial intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, triggering research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; in doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative approach, more specifically an inductive approach, to address the theoretical gap. Consequently, this paper tracks the different efforts toward "trustworthy AI" and "ethical AI", compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as the recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., toward the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. The study's last contribution is to clarify how the principles evolved. To illustrate, in 2018 the Montreal Declaration mentioned eight principles, including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity; in 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies. Yet we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
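The frequency test in the second step can be pictured with a short counting sketch; the document contents below are invented placeholders, not the 12 reviewed guidelines.

```python
from collections import Counter

# Hypothetical sketch of the frequency test: count how often each principle
# appears across the reviewed documents (contents invented for illustration).
documents = {
    "doc_2017_a": ["transparency", "privacy", "accountability"],
    "doc_2018_montreal": ["well-being", "autonomy", "privacy", "solidarity"],
    "doc_2021_ec": ["public trust", "risk assessment", "transparency"],
}

freq = Counter(p for principles in documents.values() for p in principles)
for principle, count in freq.most_common():
    print(f"{principle}: {count}/{len(documents)} documents")
```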
Procedia PDF Downloads 93
241 Simulation Research of Diesel Aircraft Engine
Authors: Łukasz Grabowski, Michał Gęca, Mirosław Wendeker
Abstract:
This paper presents the simulation results of a new opposed-piston diesel engine intended to power a light aircraft. Created in AVL Boost, the model covers the entire charge passage, from the inlet up to the outlet, and includes fuel injection into the cylinders and combustion in the cylinders. The calculation uses the module for two-stroke engines. The model was built from sub-models available in this software, each complemented with parameters in line with the design premise. Since engine weight resulting from geometric dimensions is fundamental in aircraft engines, two configurations of stroke length were studied. For each value, selected operating conditions defined by crankshaft speed were calculated; the required power was achieved by changing the air-fuel ratio (AFR). Brake-specific fuel consumption (BSFC) was also studied. For stroke S1, the BSFC was lowest at all three operating points. The difference is approximately 1-2%, which means higher overall engine efficiency, but the amount of fuel injected into the cylinders is larger by several mg for S1. The cylinder maximum pressure is lower for S2, because the compressor gear drive remained the same and the boost pressure was identical in both cases. Calculations for various values of boost pressure were the next stage of the study; in each calculation case, the amount of fuel was changed to achieve the required engine power. First, the intake system dimensions were modified, i.e., the duct connecting the compressor and the air cooler, so that its diameter D = 40 mm was equal to the diameter of the compressor outlet duct. The impact of duct length was also examined in order to reduce the flow pulsation during the operating cycle. For the intake system geometry selected in this way, calculations were run for various values of boost pressure, which was changed by modifying the gear driving the compressor, so as to reach the required level of cruising power N = 68 kW. Due to the mechanical power consumed by the compressor, a high pressure ratio results in worsened overall engine efficiency: the change in BSFC from 210 g/kWh to nearly 270 g/kWh shows this correlation, and the overall engine efficiency is reduced by about 8%. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
Keywords: aircraft, diesel, engine, simulation
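The BSFC values quoted above follow from the standard definition BSFC = fuel mass flow / brake power. The sketch below back-computes the fuel flows that would reproduce the quoted range at the stated cruising power; the flow values are assumed for illustration, not taken from the paper.

```python
# BSFC [g/kWh] = fuel mass flow [kg/h] * 1000 / brake power [kW]
def bsfc_g_per_kwh(fuel_flow_kg_per_h: float, power_kw: float) -> float:
    return fuel_flow_kg_per_h * 1000.0 / power_kw

P_cruise = 68.0                     # required cruising power, kW (from the text)
for m_dot in (14.3, 16.0, 18.3):    # assumed fuel flows, kg/h
    print(f"{m_dot} kg/h -> {bsfc_g_per_kwh(m_dot, P_cruise):.0f} g/kWh")
# prints ~210, ~235, ~269 g/kWh, spanning the range quoted in the abstract
```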
Procedia PDF Downloads 207
240 Soil Improvement through Utilization of Calcifying Bhargavaea cecembensis N1 in an Affordable Whey Culture Medium
Authors: Fatemeh Elmi, Zahra Etemadifar
Abstract:
Improvement of soil mechanical properties is crucial before soil is used in construction, as the low mechanical strength and unstable structure of soils in many parts of the world can lead to the destruction of engineering infrastructure, resulting in financial and human losses. Although conventional methods, such as chemical injection, are often used to enhance soil strength and stiffness, they are generally expensive, require heavy machinery, cause significant environmental effects due to chemical usage, and disrupt urban infrastructure; moreover, they are not suitable for treating large volumes of soil. Recently, an alternative method to improve various soil properties, including strength, hardness, and permeability, has received much attention: the application of biological methods. One of the most widely used is biocementation, which is based on the microbial precipitation of calcium carbonate crystals using ureolytic bacteria. However, there are still limitations to its large-scale use that need to be resolved before it can be commercialized, and these issues have not received enough attention in prior research. One limitation of MICP (microbially induced calcium carbonate precipitation) is that microorganisms cannot operate effectively in harsh and variable environments, unlike the controlled conditions of a laboratory. Another limitation of applying this technique on a large scale is the high cost of producing the substantial amount of bacterial culture and reagents required for soil treatment. Therefore, the purpose of the present study was to investigate soil improvement using the biocementation activity of the poly-extremophile, calcium carbonate crystal-producing bacterial strain Bhargavaea cecembensis N1 in whey as an inexpensive medium. This strain was isolated and molecularly identified from sandy soils in our previous research, and its 16S rRNA gene sequence was deposited in NCBI GenBank under accession number MK420385. The strain exhibited a high level of urease activity (8.16 U/mL) and produced a large amount of calcium carbonate (4.1 mg/mL). Grown in a whey culture medium, it was able to improve the soil by increasing the compressive strength up to 205 kPa and reducing permeability by 36%, with 20% of the improvement attributable to calcium carbonate production. This strain can be an eco-friendly and economical alternative to conventional methods in soil stabilization and other MICP-related applications.
Keywords: biocementation, Bhargavaea cecembensis, soil improvement, whey culture medium
Procedia PDF Downloads 54
239 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants
Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver
Abstract:
This paper briefly describes three main actions on a Waste Water Treatment Plant (WWTP) for reducing its energy consumption: optimization of the biological reactor in the aeration stage, by including new control algorithms and introducing new efficient equipment; the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation); and an intelligent management system for controlling load consumption and energy generation in the most optimal way. This project, called RENEWAT and involved in the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes that take place in a WWTP and of introducing renewable energies into these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinkable water, which makes it very difficult for any policy to encourage the re-use of treated water, with a great impact on the water cycle, particularly in areas suffering hydric stress or deficiency. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electric grid, which in most cases in Europe means energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation, and integration of solutions to this problem, together with a new concept for integrating energy input and operative energy demand. Moreover, there is an important qualitative jump between the technologies currently used and the technologies proposed in the project, which gives it an innovative character: there are no similar previous experiences of a WWTP including intelligent discrimination of energy sources, integrating renewable ones (PV and wind) and the grid.
Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants
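A minimal sketch of the zero-grid-injection dispatch rule implied above: renewables cover the plant load up to their availability, any surplus is curtailed rather than exported, and the grid supplies the remainder. All power figures are illustrative only, not the project's actual control logic.

```python
# Zero-grid-injection dispatch: renewable output never exceeds the load,
# so nothing is ever exported to the grid (figures are illustrative).
def dispatch(load_kw: float, pv_kw: float, wind_kw: float):
    renewable = pv_kw + wind_kw
    used = min(load_kw, renewable)     # cap renewable use at the plant load
    curtailed = renewable - used       # surplus is curtailed, not exported
    from_grid = load_kw - used         # grid covers any remaining demand
    return used, curtailed, from_grid

for load, pv, wind in [(120.0, 95.0, 4.0), (60.0, 100.0, 5.0)]:
    used, curt, grid = dispatch(load, pv, wind)
    print(f"load={load} kW: renewables={used}, curtailed={curt}, grid={grid}")
```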
Procedia PDF Downloads 352
238 Understanding the Role of Nitric Oxide Synthase 1 in Low-Density Lipoprotein Uptake by Macrophages and Implication in Atherosclerosis Progression
Authors: Anjali Roy, Mirza S. Baig
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the formation of lipid-rich plaque enriched with a necrotic core, modified lipid accumulation, smooth muscle cells, endothelial cells, leucocytes, and macrophages. Macrophage foam cells play a critical role in the occurrence and development of inflammatory atherosclerotic plaque. Foam cells are the fat-laden macrophages of the initial-stage atherosclerotic lesion; they are an indication of plaque build-up, which is commonly associated with increased risk of heart attack and stroke as a result of arterial narrowing and hardening. The mechanisms that drive atherosclerotic plaque progression remain largely unknown, and dissecting the molecular mechanism involved in macrophage foam cell formation will help to develop therapeutic interventions for atherosclerosis. To investigate the mechanism, we studied the role of nitric oxide synthase 1 (NOS1)-mediated nitric oxide (NO) in low-density lipoprotein (LDL) uptake by bone marrow-derived macrophages (BMDM). Using confocal microscopy, we found that incubation of macrophages with the NOS1 inhibitor TRIM (1-(2-trifluoromethylphenyl)imidazole) or L-NAME (N-omega-nitro-L-arginine methyl ester) prior to LDL treatment significantly reduces LDL uptake by BMDM. Further, addition of an NO donor (DEA NONOate) to NOS1 inhibitor-treated macrophages recovers the LDL uptake. Our data strongly suggest that NOS1-derived NO regulates LDL uptake by macrophages and foam cell formation. Moreover, we also checked proinflammatory cytokine mRNA expression through real-time PCR in BMDM treated with LDL and copper-oxidized LDL (OxLDL), in the presence and absence of the inhibitor. Normal LDL does not evoke cytokine expression, whereas OxLDL induced proinflammatory cytokine expression, which was significantly reduced in the presence of the NOS1 inhibitor. Rapid formation of NOS1-derived NO and its stable derivatives acts as a signaling agent for inducible NOS2 expression in endothelial cells, leading to disruption and dysfunction of the endothelial lining of the vascular wall. This study highlights the role of NOS1 as a critical player in foam cell formation and reveals much about the key molecular proteins involved in atherosclerosis. Thus, targeting NOS1 would be a useful strategy for reducing LDL uptake by macrophages at an early stage of disease and hence dampening atherosclerosis progression.
Keywords: atherosclerosis, NOS1, inflammation, oxidized LDL
Procedia PDF Downloads 127
237 Gluten Intolerance, Celiac Disease, and Neuropsychiatric Disorders: A Translational Perspective
Authors: Jessica A. Hellings, Piyushkumar Jani
Abstract:
Background: Systemic autoimmune disorders are increasingly implicated in neuropsychiatric illness, especially in the setting of treatment resistance, in individuals of all ages. Gluten allergy in its fullest extent results in celiac disease, affecting multiple organs including the central nervous system (CNS). Clinicians often lack awareness of the association between neuropsychiatric illness and gluten allergy, partly because many such research studies are published in immunology and gastroenterology journals. Methods: Following a PubMed literature search and online searches of celiac disease websites, 40 articles are critically reviewed in detail. This work reviews celiac disease and gluten intolerance and the current evidence of their relationship to neuropsychiatric and systemic illnesses. The review also covers current work-up and diagnosis, as well as dietary interventions, outcomes of gluten restriction, and future research directions. Results: Gluten allergy in susceptible individuals damages the small intestine, producing a leaky gut and a malabsorption state, and allows antibodies into the bloodstream, which attack major organs. Lack of amino acid precursors for neurotransmitter synthesis, together with antibody-associated brain changes and hypoperfusion, may result in neuropsychiatric illness. This is well documented; however, studies in neuropsychiatry are often small. In the large CATIE trial, subjects with schizophrenia had significantly increased antibodies to tissue transglutaminase (TTG) and antigliadin antibodies, both significantly greater than in control subjects. On later follow-up, TTG-6 antibodies were identified in these subjects' brains but not in their intestines. Significant evidence, mostly from small studies, also exists for gluten allergy and celiac-related depression, anxiety disorders, attention-deficit/hyperactivity disorder, autism spectrum disorders, ataxia, and epilepsy. Dietary restriction of gluten resulted in remission in several published cases, including in treatment-resistant schizophrenia. Conclusions: Ongoing and larger studies are needed on the diagnosis and treatment efficacy of the gluten-free diet in neuropsychiatric illness. Clinicians should ask about patient history of anemia, hypothyroidism, and irritable bowel syndrome, and family history of benefit from the gluten-free diet, not only but especially in cases of treatment resistance. Obtaining gluten antibodies by a simple blood test, and referral for gastrointestinal work-up in positive cases, should be considered.
Keywords: celiac, gluten, neuropsychiatric, translational
Procedia PDF Downloads 161
236 Effects of Gender on Kinematics of Instep Kicking in Soccer
Authors: Abdolrasoul Daneshjoo
Abstract:
Soccer is a game that draws great attention in many countries, especially Brazil. Among the different skills of soccer players, kicking plays an essential role in the success and standing of a team. Points are gained in this game by passing the ball over the goal line, which is achieved by the shooting skill during open play or penalty kicks. On this assumption, identifying the factors that affect instep kicking at different distances, whether shooting with maximum force and high accuracy, passing, or taking a penalty kick, may assist coaches and players in raising the qualitative level of performing the skill. The aim of the present study was to examine a few kinematic parameters of the instep kick from distances of 5 and 7 meters among male and female elite soccer players. Twenty-four subjects with a right-dominant lower limb (12 males and 12 females) from among Tehran elite soccer players participated in this study, with mean and standard deviation age of (22.5 ± 1.5) and (22.08 ± 1.31) years, height of (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight of (69.66 ± 4.09) and (53.16 ± 3.51) kg, %BMI of (21.06 ± 0.731) and (19.67 ± 0.709), and playing history of (4 ± 0.73) and (3.08 ± 0.66) years, respectively. All had at least two years of continuous playing experience in the Tehran soccer league. Kicks were recorded with a Kinemetrix motion analysis system using three cameras at 1000 Hz. Five reflective markers were placed laterally on the kicking leg over anatomical points (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed with a one-step approach at an angle of 30 to 45 degrees from a stationary ball; three kicks were filmed, and one kick was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. Descriptive statistics were used to describe the mean and standard deviation, while analysis of variance and the independent t-test (p < 0.05) were used to compare the kinematic parameters between the two genders. Among the evaluated parameters, the knee acceleration, thigh angular velocity, and knee angle showed a significant relationship with the outcome of the kick. When comparing performance from 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip, the velocity of the toe and ankle, the acceleration of the toe, and the angular velocity of the pelvis and thigh before contact time. Significant differences were also found in the internal-external displacement of the toe, ankle, knee, hip, and iliac crest, the velocity of the toe and ankle, the acceleration of the ankle, and the angular velocity of the pelvis and knee.
Keywords: biomechanics, kinematics, instep kicking, soccer
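As an illustration of the marker-based analysis described above, the sketch below computes a knee angle from three marker positions and differentiates it at the stated 1000 Hz sampling rate to obtain angular velocity. The marker trajectories are invented, not study data.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at marker b (degrees) between segments b->a and b->c, e.g. the
    knee angle from the trochanter, femoral epicondyle, and malleolus markers."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Illustrative marker trajectories (metres) over three frames at 1000 Hz
hip  = np.array([[0.00, 0.90, 0.0], [0.01, 0.90, 0.0], [0.02, 0.90, 0.0]])
knee = np.array([[0.05, 0.50, 0.0], [0.07, 0.51, 0.0], [0.09, 0.52, 0.0]])
ank  = np.array([[0.00, 0.10, 0.0], [0.06, 0.12, 0.0], [0.12, 0.15, 0.0]])

angles = np.array([joint_angle(h, k, a) for h, k, a in zip(hip, knee, ank)])
omega = np.gradient(angles, 1.0 / 1000.0)   # angular velocity, deg/s
print(angles, omega)
```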
Procedia PDF Downloads 502
235 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System
Authors: Nareshkumar Harale, B. B. Meshram
Abstract:
The continued exponential growth of successful cyber intrusions against today's businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to zero trust; with zero trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of major attacks. With the explosion of cyber security threats, such as viruses, worms, rootkits, malware, and denial-of-service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next-generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol stack, routing protocols, and security protocols; it thereby forms the basis for the detection of attack classes, applying signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in unknown intrusion detection, and association rule mining algorithms generate new rules from the collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design
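A toy sketch of the two-stage detection idea described above: signature matching catches known attacks, and rules mined from audit trails flag unknown anomalies. The signature and rule below are invented for illustration and are far simpler than the proposed framework.

```python
def detect(event: dict) -> str:
    """Classify a network event (fields and thresholds are illustrative)."""
    # Stage 1: signature-based matching for a known attack pattern
    if event.get("tcp_flag") == "S" and event.get("pkt_rate", 0) > 1000:
        return "known attack: SYN flood signature"
    # Stage 2: an association rule mined from audit trails, e.g.
    # {dst_port=23, proto=tcp} => anomaly (rare in normal traffic)
    if event.get("dst_port") == 23 and event.get("proto") == "tcp":
        return "unknown anomaly: mined-rule match"
    return "normal"

events = [
    {"tcp_flag": "S", "pkt_rate": 5000},
    {"dst_port": 23, "proto": "tcp"},
    {"dst_port": 443, "proto": "tcp"},
]
for e in events:
    print(detect(e))
```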
Procedia PDF Downloads 227
234 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational fluid dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist for both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses, at this point in time, are lacking in variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model used. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10, and 20 degrees, and the results are compared with experimental data. The pressure data obtained on the inner surface of the nozzle at each specified pitch angle, under flow conditions with pressure ratios of 1.5, 2, and 4, and at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibited a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method, using a carefully calculated mathematical shape deformation model that reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. The mesh-morphed vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match. This computational approach allowed the creation of a comprehensive database of results without the need to generate separate solution domains; the database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model
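The mesh morphing step can be pictured as a rigid rotation of the divergent-section nodes about the nozzle pivot by the instantaneous pitch angle. The sketch below assumes an illustrative 2D geometry and pitch schedule; the paper's actual deformation model and NASA geometry are more elaborate.

```python
import numpy as np

def morph(nodes_xy: np.ndarray, pivot_xy: np.ndarray, pitch_deg: float):
    """Rotate divergent-section mesh nodes about the nozzle pivot by the
    instantaneous pitch angle, giving the vectored shape at one time step."""
    th = np.radians(pitch_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return (nodes_xy - pivot_xy) @ R.T + pivot_xy

# Illustrative geometry: pivot at the divergent-section hinge, three wall nodes
pivot = np.array([1.0, 0.0])
divergent_nodes = np.array([[1.0, 0.2], [1.5, 0.25], [2.0, 0.3]])

# Sweep pitch from 0 to 20 degrees, morphing the nodes at each step
for t in np.linspace(0.0, 1.0, 5):
    print(morph(divergent_nodes, pivot, 20.0 * t))
```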
Procedia PDF Downloads 83
233 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithms of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with the use of artificial neural networks (ANN), was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, root mean squared error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.), as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
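A minimal sketch of the evaluation criterion described above: the RMSE between measured BHT_M and predicted BHT_ANN, plus a geothermometer of the stated log-linear form. The temperatures and coefficients below are made-up placeholders; the calibrated coefficients are given in the paper, not here.

```python
import numpy as np

def rmse(measured, predicted):
    """Root mean squared error between measured and predicted temperatures."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))

def geothermometer_T(co2_mmol_per_mol, a=120.0, b=35.0):
    """Log-linear geothermometer of the stated form T = a + b*ln(CO2);
    the coefficients a and b here are hypothetical, not the calibrated ones."""
    return a + b * np.log(co2_mmol_per_mol)

# Placeholder comparison of measured vs. ANN-predicted bottomhole temperatures
bht_measured = [250.0, 280.0, 310.0]     # deg C, invented
bht_ann = [245.0, 288.0, 305.0]          # deg C, invented
print(rmse(bht_measured, bht_ann))       # lower RMSE = better prediction
print(geothermometer_T(np.array([50.0, 200.0])))
```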
Procedia PDF Downloads 350
232 The Incoherence of the Philosophers as a Defense of Philosophy against Theology
Authors: Edward R. Moad
Abstract:
Al-Ghazali's Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism; consequently, he has been blamed for the 'death of philosophy' in the Muslim world. 'Falsafa', however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, the work represents a defense of philosophy against what we could call 'falsafic' fideism. In the introduction, Ghazali describes his target audience as, not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that they were capable of demonstrative certainty in the field of metaphysics. He promises to use the falsafa's standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being 'philosophical', is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone's 'bad' way of thinking, rather than on more material historical circumstances (which is already a mistake), then blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali's philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of the senses, the 'estimative imagination' (wahm), and the pure intellect, along with the respective forms of discourse (rhetoric, dialectic, and demonstration) appropriate to each category of that order. Al-Farabi in his Book of Letters describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter: theology is that which proceeds dialectically, while philosophy is (or aims to be) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like 'nourishment for the tree precedes its fruit.' That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at knowledge of first principles through contact with the Active Intellect, and to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, based on Al-Farabi's framework, Ghazali's Tahāfut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind against the threat of a kind of developing theology going by the name of falsafa.
Keywords: philosophy, incoherence, theology, Tahafut
Procedia PDF Downloads 161
231 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and gets instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server; it helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface, without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
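A small sketch of the embedding example above, pairing communicating instances onto shared machines so each A-B channel runs over localhost. The declarative input format is invented for illustration and is not i2kit's actual syntax.

```python
# Pair instances of communicating services onto the same machine (embedding),
# reproducing the a1-b1 / a2-b2 example from the abstract. The input format
# below is a hypothetical declarative definition, not i2kit's real syntax.
architecture = {
    "A": {"replicas": 2, "talks_to": ["B"]},
    "B": {"replicas": 2, "talks_to": []},
}

def embed(arch: dict) -> dict:
    placements: dict = {}
    machine = 0
    for svc, spec in arch.items():
        for peer in spec["talks_to"]:
            # one machine per communicating instance pair -> localhost channel
            pairs = min(spec["replicas"], arch[peer]["replicas"])
            for i in range(pairs):
                machine += 1
                placements[f"m{machine}"] = [f"{svc.lower()}{i+1}",
                                             f"{peer.lower()}{i+1}"]
    return placements

print(embed(architecture))   # {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```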
Procedia PDF Downloads 203
230 Poisoning in Morocco: Evolution and Risk Factors
Authors: El Khaddam Safaa, Soulaymani Abdelmajid, Mokhtari Abdelghani, Ouammi Lahcen, Rachida Soulaymani-Beincheikh
Abstract:
Poisonings represent a health problem in the world and in Morocco, and the exact dimensions of this phenomenon are still poorly recorded, as shown by the lack of exhaustive statistical data. The objectives of this retrospective study of a series of poisoning cases reported in the Tadla-Azilal region and collected by the Moroccan Poison Control and Pharmacovigilance Center were to draw an epidemiological profile of the poisonings, to determine the risk factors influencing the vital prognosis of the poisoned, and to follow the evolution of incidence, lethality, and mortality. During the study period, we collected and analyzed 9,303 cases of poisoning by various incriminated toxic products, with the exception of scorpion poisonings. These poisonings led to 99 deaths. The epidemiological profile showed that the poisoned were of all ages, with an average of 24.62±16.61 years. The sex ratio (woman/man) was 1.36 in favor of women, and the difference between the sexes is highly significant (χ2=210.5; p<0.001). Most of the poisoned were of urban origin (60.5%) (χ2=210.5; p<0.001). Carbon monoxide was the most frequently incriminated agent (24.15% of cases), followed by pesticides and agricultural products (21.44%) and food (19.95%). The analysis of risk factors showed that adult patients aged between 20 and 74 years had nearly twice the risk of dying (RR=1.57; 95% CI=1.03-2.38) compared with the other age groups, and males were more exposed to death than females (RR=1.59; 95% CI=1.07-2.38). Patients of rural origin presented almost 5 times more risk (RR=4.713; 95% CI=2.543-8.742). Patients poisoned by mineral products presented the maximum risk to the vital prognosis (RR=23.19; 95% CI=2.39-224.1), and poisonings by pesticides carried a risk of about 9 (RR=9.31; 95% CI=6.10-14.18). The incidence was 3.3 cases per 10,000 inhabitants, and the mortality was 0.004 cases per 1,000 inhabitants (that is, 4 cases per 1,000,000 inhabitants). The annual lethality rate was 10.6%. The evolution of the health indicators over the years showed that the reporting rate, measured by the incidence, increased significantly. We also noted an improvement in case management, which led to a decrease in lethality and mortality in recent years. The fight against poisoning is a long-term undertaking that requires much work at various levels: our country must make up its accumulated delay on the legal, institutional, and technical fronts. The ideal solution is to develop and implement a national strategy.
Keywords: epidemiology, poisoning, risk factors, health indicators, Tadla-Azilal
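The relative risks quoted above are standard 2x2-table computations; below is a minimal sketch with a 95% confidence interval on the log scale. The counts are invented for illustration and do not reproduce the study's figures.

```python
import math

def relative_risk(a: int, b: int, c: int, d: int):
    """RR and 95% CI from a 2x2 table:
    a/b = deaths/survivors among exposed; c/d = deaths/survivors among unexposed."""
    rr = (a / (a + b)) / (c / (c + d))
    # standard error of log(RR)
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented counts, e.g. rural vs. urban origin
rr, lo, hi = relative_risk(a=40, b=2000, c=59, d=7204)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```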
Procedia PDF Downloads 365
229 Neuro-Epigenetic Changes on Diabetes-Induced Synaptic Fidelity in Brain
Authors: Valencia Fernandes, Dharmendra Kumar Khatri, Shashi Bala Singh
Abstract:
Background and Aim: Epigenetic marks are the inaudible signatures of several pathological processes in the brain. This study examines the influence of DNA methylation, a major epigenetic modification, in the prefrontal cortex and hippocampus of the diabetic brain, and its notable effect on cellular chaperones and synaptic proteins. Method: Mice with diabetes induced by a chronic high-fat diet and STZ were studied for cognitive dysfunction, and global DNA methylation as well as DNA methyltransferase (DNMT) activity were assessed. Further, the cellular chaperones and synaptic proteins were examined using the DNMT inhibitor 5-aza-2′-deoxycytidine (5-aza-dC), administered via intracerebroventricular injection. Moreover, the % methylation of these synaptic proteins was also studied so as to correlate its epigenetic involvement, and its interaction with the DNMT enzyme was studied computationally using bioinformatic tools. Histological studies of morphological alterations and neuronal degeneration were also performed. Neurogenesis, a characteristic marker for new learning and memory formation, was assessed via BrdU staining. Finally, behavioral studies, including the Morris water maze, Y maze, passive avoidance, and novel object recognition tests, were performed to study cognitive function. Results: Altered global DNA methylation and increased levels of DNMTs within the nucleus were confirmed in the cortex and hippocampus of the diseased mice, suggesting hypermethylation at the genetic level. Treatment with 5-aza-dC, a global DNA demethylating agent, ameliorated the protein and gene expression of the cellular chaperones and synaptic fidelity. Furthermore, the methylation analysis profile showed hypermethylation of hsf1, a master regulator of chaperones, and thus confirmed the epigenetic involvement in the diseased brain. Morphological improvements, decreased neurodegeneration, and enhanced neurogenesis in the treatment group suggest that epigenetic modulations participate in learning and memory; this is supported by the improved behavioral test battery seen in the treatment group. Conclusion: DNA methylation could possibly contribute to the dysregulation of memory-associated proteins at chronic stages of type 2 diabetes. This could suggest a substantial contribution to the underlying pathophysiology of several metabolic syndromes, like insulin resistance and obesity, and also participate in transferring this damage centrally, such as in cognitive dysfunction.
Keywords: epigenetics, cognition, chaperones, DNA methylation
Procedia PDF Downloads 204228 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms
Authors: Dimitrios Kafetzopoulos
Abstract:
Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that fosters changes to companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for change in terms of products and processes helps companies gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability). The constructs of radical and incremental operational changes, each treated as one variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality, and outliers were checked. Moreover, the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method); a sketch of such a model appears below. The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low, albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms' efficiency. From the above, it is apparent that embodying the concept of change in a firm's product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective. Keywords: incremental operational changes, radical operational changes, efficiency, sustainability
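Below is a minimal sketch of the hypothesized structural model (H1-H4), assuming the Python semopy library; the column names and the two-indicator measurement model are illustrative placeholders, not the paper's actual item-level specification or results.

```python
# Sketch of an SEM for H1-H4, estimated by maximum likelihood with semopy.
# Indicator and file names are hypothetical assumptions.
import pandas as pd
from semopy import Model

desc = """
# measurement part: each latent construct with product/process indicators
Radical     =~ rad_product + rad_process
Incremental =~ inc_product + inc_process
# structural part: H1-H4
Efficiency     ~ Radical + Incremental
Sustainability ~ Radical + Incremental
"""

df = pd.read_csv("survey.csv")  # 380 firms, 7-point Likert item scores
model = Model(desc)
model.fit(df)                   # maximum likelihood estimation
print(model.inspect())          # path coefficients, SEs, p-values
```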
Procedia PDF Downloads 135227 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow complex shapes to be obtained with repeatability, and the finishing operations require intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation focuses on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson's ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (best roughness at cutting forces that do not compromise the material structure or the tool life) using ANOVA; a sketch of this analysis is given below. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding. Keywords: CNC machining, crystal glass, cutting forces, hardness
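Here is a minimal sketch of a Taguchi-style experiment analysis, assuming a Python workflow with pandas/statsmodels. The L9 orthogonal array over feed rate, spindle speed, and depth of cut is standard, but the level codes and roughness responses below are hypothetical placeholders, not the paper's measurements.

```python
# Taguchi L9 design with three factors at three levels each, followed by
# ANOVA on the surface-roughness response Ra.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

runs = pd.DataFrame({
    "feed":  [1, 1, 1, 2, 2, 2, 3, 3, 3],   # L9 orthogonal array (levels 1-3)
    "speed": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "depth": [1, 2, 3, 2, 3, 1, 3, 1, 2],
    "Ra":    [0.82, 0.61, 0.55, 0.93, 0.70, 0.64, 1.10, 0.88, 0.79],  # um
})

# C() treats the coded levels as categorical factors
model = ols("Ra ~ C(feed) + C(speed) + C(depth)", data=runs).fit()
print(sm.stats.anova_lm(model, typ=2))  # factor significance for Ra
```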
Procedia PDF Downloads 153226 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters that in turn contain different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class exhibits absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both simultaneously eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset; a rough sketch of this step is given below. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Löwner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is one such classifier where the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction, and financial distress prediction that typically involve imbalanced data sets. Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
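The following is a rough sketch of the clustering-then-oversampling idea, not the authors' exact algorithm: sub-clusters in the minority class are located with model-based clustering (a Gaussian mixture, component count chosen by BIC), and each sub-cluster then receives an equal share of synthetic examples so that smaller sub-clusters gain relatively more. The allocation rule and SMOTE-style interpolation are assumptions, and the Löwner-John ellipsoid step is omitted.

```python
# Model-based clustering of the minority class, then per-cluster
# synthetic oversampling to reduce within-class imbalance.
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, target_size, max_components=5, seed=0):
    rng = np.random.default_rng(seed)
    # Model-based clustering: keep the component count with the lowest BIC
    gmm = min(
        (GaussianMixture(n_components=k, random_state=seed).fit(X_min)
         for k in range(1, max_components + 1)),
        key=lambda m: m.bic(X_min),
    )
    labels = gmm.predict(X_min)
    clusters = [X_min[labels == c] for c in np.unique(labels)]

    deficit = max(target_size - len(X_min), 0)
    share = deficit // len(clusters)   # equal share per sub-cluster
    synthetic = []
    for cluster in clusters:
        for _ in range(share):
            # SMOTE-style interpolation between two members of the cluster
            a, b = cluster[rng.integers(len(cluster), size=2)]
            synthetic.append(a + rng.random() * (b - a))
    return np.vstack([X_min, np.array(synthetic)]) if synthetic else X_min
```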
Procedia PDF Downloads 418225 Effective Service Provision and Multi-Agency Working in Service Providers for Children and Young People with Special Educational Needs and Disabilities: A Mixed Methods Systematic Review
Authors: Natalie Tyldesley-Marshall, Janette Parr, Anna Brown, Yen-Fu Chen, Amy Grove
Abstract:
It is widely recognised in policy and research that the provision of services for children and young people (CYP) with Special Educational Needs and Disabilities (SEND) is enhanced when health and social care and education services collaborate and interact effectively. In the UK, there have been significant changes to policy and provision which support and improve collaboration. However, professionals responsible for implementing these changes face multiple challenges, including a lack of specific implementation guidance or a framework illustrating how effective multi-agency working could or should work. This systematic review will identify the key components of effective multi-agency working in services for CYP with SEND, and the most effective forms of partnership working in this setting. The review highlights interventions that lead to service improvements and the conditions in the local area that support and encourage success. A protocol was written and registered with PROSPERO (registration: CRD42022352194). Searches were conducted on several health, care, education, and applied social science databases from 2012 onwards. Citation chaining was undertaken, as well as broader grey literature searching to enrich the findings. Qualitative, quantitative, and mixed-methods studies and systematic reviews were included, assessed independently, and critically appraised or assessed for risk of bias using appropriate tools based on study design. Data were extracted in NVivo software and checked by a more experienced researcher. A convergent segregated approach to synthesis and integration was used, in which the quantitative and qualitative data were synthesised independently and then integrated using a joint display integration matrix. Findings demonstrate the key ingredients of effective partnership working in services delivering SEND. Interventions deemed effective are described, and lessons learned across interventions are summarised. Results will be of interest to educators and health and social care professionals who provide services to those with SEND. They will also be used to develop policy recommendations for how UK healthcare, social care, and education services for CYP with SEND aged 0-25 can most effectively collaborate and achieve service improvement. The review will also identify any gaps in the literature and recommend areas for future research. Funding for this review was provided by the Department for Education. Keywords: collaboration, joint commissioning, service delivery, service improvement
Procedia PDF Downloads 107224 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)
Authors: Arlene López-Sampson, Tony Page, Betsy Jackes
Abstract:
The Aquilaria genus produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild in response to injury or pathogen attack. The resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia, were studied to investigate the relationship between leaf productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and the monitoring of six leaf attributes. Trees were grouped into four diametric classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model averaging technique based on Akaike's information criterion (AIC) was applied to identify whether leaf traits could assist in diameter prediction; a sketch of this procedure is given below. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded. Approximately one new leaf per week is produced by a shoot. The rate of leaf expansion was estimated at 1.45 mm day-1. There were no statistically significant differences between diametric classes in leaf expansion rate or number of new leaves per week (p > 0.05). The range of δ13C values in leaves of Aquilaria species was from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D). However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C) and δ15N (true15N) values, leaf area (LA), specific leaf area (SLA), and number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA, and NL.week could explain 45% (R² = 0.4573) of the variability in D. The leaf traits studied provide a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria. Keywords: 13C, petiole length, specific leaf area, tree growth
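Below is a minimal sketch of AIC-based model averaging over the leaf-trait predictors, assuming statsmodels and a hypothetical data file with columns matching the abstract's abbreviations (NL.week is renamed NL_week, since dots are not valid in formula terms). The study's actual candidate set and software may differ.

```python
# Fit all predictor subsets, rank by AIC, and compute Akaike weights.
from itertools import combinations
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aquilaria_traits.csv")  # hypothetical file name
predictors = ["PeLen", "true13C", "true15N", "LA", "SLA", "NL_week"]

fits = []
for k in range(1, len(predictors) + 1):
    for combo in combinations(predictors, k):
        fit = smf.ols("D ~ " + " + ".join(combo), data=df).fit()
        fits.append((fit.aic, combo))

fits.sort(key=lambda t: t[0])
aics = np.array([t[0] for t in fits])
# Akaike weights: relative support for each candidate model
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()
for (aic, combo), w in list(zip(fits, weights))[:5]:
    print(f"AIC={aic:.1f}  weight={w:.2f}  {combo}")
```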
Procedia PDF Downloads 509223 Retrospective Analysis of 142 Cases of Incision Infection Complicated with Sternal Osteomyelitis after Cardiac Surgery Treated by Activated PRP Gel Filling
Authors: Daifeng Hao, Guang Feng, Jingfeng Zhao, Tao Li, Xiaoye Tuo
Abstract:
Objective: To retrospectively analyze the clinical characteristics of incision infection with sternal osteomyelitis sinus tract after cardiac surgery, together with the operative method and therapeutic effect of filling and repair with activated PRP gel. Methods: From March 2011 to October 2022, 142 cases of incision infection with sternal osteomyelitis sinus after cardiac surgery were retrospectively analyzed, summarizing the causes of poor wound healing after surgery, wound characteristics, perioperative wound management, intraoperative treatment, the collection and storage of autologous PRP before debridement surgery, the PRP filling and activation method after debridement surgery, the effect of anticoagulant drugs on surgery, postoperative complications, and average wound healing time. Results: Among the cases in this group, 53.3% underwent coronary artery bypass grafting, 36.8% artificial heart valve replacement, 8.2% aortic artificial vessel replacement, and 1.7% allogeneic heart transplantation. The main causes of poor incision healing were, in order, suture reaction, fat liquefaction, osteoporosis, diabetes, and metal allergy. The wound is characterized by an infected sinus tract. Before the operation, 100-150 ml of PRP at 4 times the physiological concentration was collected with a blood component separation device. After sinus debridement, PRP was perfused to fill the bony defect in the middle of the sternum, activated with thrombin freeze-dried powder and calcium gluconate injection to form a gel, and the outer skin and subcutaneous tissue were sutured directly. 62.9% of patients discontinued warfarin during the perioperative period, and 37.1% maintained warfarin treatment; there was no significant difference in the incidence of postoperative wound hematoma. The average postoperative wound healing time was 12.9±4.7 days, with no obvious postoperative complications. Conclusions: Application of activated PRP gel to fill incision infections with sternal osteomyelitis sinus after cardiac surgery causes less surgical injury and yields a satisfactory and stable curative effect. It can completely replace the pectoralis major muscle flap transplantation scheme used previously. Keywords: platelet-rich plasma, negative-pressure wound therapy, sternal osteomyelitis, cardiac surgery
Procedia PDF Downloads 78222 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and to determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found; a sketch of the test is given below. In addition, wavelet decomposition was used to determine the frequency of temperature variability, showing 12-month, 30-80-month, and greater-than-120-month cycles. These indicate strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strong positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. In addition, they are more representative of regional temperature and should not be treated as a direct substitute for station-observed air temperature. Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
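Here is a minimal sketch of the Mann-Kendall trend test, assuming a plain NumPy/SciPy implementation without tie corrections; the station data and any deseasonalizing used in the study are not reproduced, so the series below is synthetic.

```python
# Mann-Kendall test: S counts concordant minus discordant pairs; under the
# null of no trend, the standardized Z is approximately standard normal.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18   # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))            # two-sided p-value
    return s, z, p

# Synthetic annual series with a 0.0105 degC/year warming trend plus noise
rng = np.random.default_rng(1)
years = np.arange(1960, 2016)
temps = 27.0 + 0.0105 * (years - 1960) + rng.normal(0, 0.3, len(years))
print(mann_kendall(temps))  # a small p suggests a significant monotonic trend
```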
Procedia PDF Downloads 323221 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive
Authors: Eitan Elaad, Yeela Gal-Gonen
Abstract:
Two studies examined gender differences in lying: one when the truth-telling bias prevailed, and one when lying and distrust were encouraged. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then, they participated in a deception game in which they acted as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for each possible outcome; a toy sketch of such a game appears below. Results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males as receivers trusted less than females but were not better at detecting lies and truths. We explain the results by (a) males' high perceived lie-telling ability: we observed that confidence in telling lies led participants to increase their use of lies, and males' lie-telling confidence corresponds to earlier accounts showing a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; (b) males' narcissistic features: earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life, predictions about the association between narcissism and frequent lying received support in the present study, and males scored higher than females on the narcissism scale; and (c) males' experiential thinking style: we observed that males scored higher than females on the experiential thinking style scale, and we further hypothesized, and the results confirmed, that the experiential thinking style predicts frequent lying in the deception game. The second study used one hundred volunteers (40 females) who underwent the same procedure, except that the payoff matrix encouraged lying and distrust. Results showed that male participants lied more than females. We found no gender differences in trust, and males and females did not differ in their success at telling and detecting lies and truths. Participants also completed the LTAAS questionnaire; males assessed their lie-telling ability higher than females, but the ability assessment did not predict lying frequency. A final note: the present design is limited to low stakes. Participants knew that they were participating in a game and would not experience any consequences from their deception. Therefore, we advise caution when applying the present results to lying under high stakes. Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability
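To illustrate the structure of a sender-receiver deception round, here is a toy sketch of an expected-payoff computation. The payoff values are hypothetical, since the study's actual matrix is not given in the abstract; the sketch only shows how a matrix can make lying more or less rewarding.

```python
# Expected sender payoff under given lying and trusting probabilities.
from itertools import product

# (sender_points, receiver_points) indexed by (sender_lies, receiver_trusts)
PAYOFFS = {
    (False, True):  (1, 1),   # truth believed: both rewarded
    (False, False): (0, 0),   # truth doubted: neither gains
    (True,  True):  (2, -1),  # lie believed: sender gains at receiver's cost
    (True,  False): (-1, 1),  # lie caught: receiver gains
}

def expected_sender_payoff(p_lie, p_trust):
    """Expected sender payoff given lying and trusting probabilities."""
    return sum(
        (p_lie if lie else 1 - p_lie) * (p_trust if trust else 1 - p_trust)
        * PAYOFFS[(lie, trust)][0]
        for lie, trust in product([True, False], repeat=2)
    )

# Against a credulous receiver (trusts 80% of the time), lying pays more:
print(expected_sender_payoff(p_lie=1.0, p_trust=0.8))  # always lie -> 1.4
print(expected_sender_payoff(p_lie=0.0, p_trust=0.8))  # always truth -> 0.8
```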
Procedia PDF Downloads 148220 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition
Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed
Abstract:
The rapid development experienced by India requires a huge amount of energy. Actual supply capacity additions have been consistently lower than the targets set by the government, and according to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW solar, and 2.1 GW small hydro, with the rest made up by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. According to IEC 61400-1, India belongs to class IV wind conditions, so it is not possible to set up large-scale wind turbines everywhere. The best choice is therefore a small-scale wind turbine at lower hub height with good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data were reviewed for the selection of the blade profile; an airfoil suited to low wind conditions, i.e., low Reynolds numbers, was selected based on the lift coefficient, drag coefficient, and angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique; a sketch of this iteration is given below. Rotor performance is estimated for the designed blade specifically for low wind conditions. The power production of the rotor is determined at different wind speeds for a particular blade pitch angle. A pitch of 15° at a wind speed of 5 m/s gives a good cut-in speed of 2 m/s and a power output of around 350 W. The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated to be 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factors for the wind turbine blade. Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil
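Below is a minimal sketch of the BEM iteration for one blade element: the axial (a) and angular (a') induction factors are updated to convergence using the standard momentum-balance relations. The chord, twist, and linear lift polar are hypothetical placeholders, not the paper's blade design, and tip-loss and high-induction corrections are omitted.

```python
# Fixed-point BEM iteration for a single blade element.
import numpy as np

def bem_element(r, R, B, chord, twist, V, tsr, cl_alpha=6.0, cd0=0.02):
    omega = tsr * V / R
    sigma = B * chord / (2 * np.pi * r)           # local solidity
    a, a_prime = 0.3, 0.0                          # initial guesses
    for _ in range(100):
        phi = np.arctan((1 - a) * V / ((1 + a_prime) * omega * r))
        alpha = phi - twist                        # local angle of attack
        cl = cl_alpha * alpha                      # linear lift polar (assumed)
        cd = cd0                                   # constant drag (assumed)
        cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal force coefficient
        ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential force coefficient
        a_new = 1 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < 1e-6 and abs(ap_new - a_prime) < 1e-6:
            break
        a, a_prime = a_new, ap_new
    return a, a_prime, phi

# Example: mid-span element of a 1 m rotor, 3 blades, tip speed ratio 6.5
print(bem_element(r=0.5, R=1.0, B=3, chord=0.08, twist=np.radians(5),
                  V=5.0, tsr=6.5))
```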
Procedia PDF Downloads 337219 Giant Cancer Cell Formation: A Link between Cell Survival and Morphological Changes in Cancer Cells
Authors: Rostyslav Horbay, Nick Korolis, Vahid Anvari, Rostyslav Stoika
Abstract:
Introduction: Giant cancer cells (GCC) are common in all types of cancer, especially after unsuccessful therapy. Specific features of such cells include ~10-fold enlargement, drug resistance, and the ability to propagate similar daughter cells. We used murine NK/Ly lymphoma, an aggressive and fast-growing lymphoma model that has already shown drastic changes in GCC compared to parental cells (chromatin condensation, nuclear fragmentation, tighter OXPHOS/cellular respiration coupling, multidrug resistance). Materials and methods: In this study, we compared the morpho-functional changes of GCC after treatment with drugs that predominantly show either a cytostatic or a cytotoxic effect, and studied the effect of a combined cytostatic/cytotoxic drug treatment to determine the correlation between drug efficiency and GCC formation. Doses of the cell-cycle-specific drugs paclitaxel/PTX (G2/M-specific, 50 mg/mouse) and vinblastine/VBL (50 mg/mouse), and of the DNA-targeting agents doxorubicin/DOX (125 ng/mouse) and cisplatin/CP (225 ng/mouse), were administered to C57 black mice. Several assays were chosen to estimate the morphological and physiological state of the cells (propidium iodide, Rhodamine-123, DAPI, JC-1, Janus Green, and Giemsa staining, among others), covering cell integrity, nuclear fragmentation and chromatin condensation, mitochondrial activity, and more. Single- and two-factor ANOVA analyses were performed to determine the correlation between the applied drug criteria and cytomorphological changes. Results: In all treatments, several morphological changes were observed (intracellular vacuolization, membrane blebbing, and an interconnected mitochondrial network). The lowest gain in ascites (49.97% compared to the control group) and the longest lifespan (22±9 days) after tumor injection were obtained with single VBL and single DOX injections. Such ascites contained the highest proportion of GCC (83.7±9.2%), the lowest cell count (72.7±31.0 mln/ml), and a strong correlation coefficient between increased mitochondrial activity and the percentage of giant NK/Ly cells. A high proportion of viable GCC (82.1±9.2%) was observed compared to the parental forms (15.4±11.9%), indicating that GCC are more drug resistant than the parental cells. All this indicates that giant cell formation and its role in acquired drug resistance is an expanding field in cancer research. Keywords: ANOVA, cisplatin, doxorubicin, drug resistance, giant cancer cells, NK/Ly lymphoma, paclitaxel, vinblastine
Procedia PDF Downloads 217218 The Relationship between Incidental Emotions, Risk Perceptions and Type of Army Service
Authors: Sharon Garyn-Tal, Shoshana Shahrabani
Abstract:
Military service in general, and in combat units in particular, can be physically and psychologically stressful. Therefore, the type of service may have significant implications for soldiers during and after their military service, including for their emotions, judgments, and risk perceptions. Previous studies have focused on risk propensity and risky behavior among soldiers; however, there is still a lack of knowledge about the impact of the type of army service on risk perceptions. The current study examines the effect of the type of army service (combat versus non-combat) and negative incidental emotions on risk perceptions. In 2014, a survey was conducted among 153 combat and non-combat Israeli soldiers. The survey was distributed in train stations and central bus stations in various places in Israel among soldiers waiting for the train or bus. Participants answered questions about the levels of incidental negative emotions they felt and their risk perceptions (the chances of being hurt by a terror attack, by violent crime, and by a car accident), and provided personal details including type of army service. The data in this research are unique because military service in Israel is compulsory, so the Israeli population serving in the army is wide and diversified. The results indicate that currently serving combat participants were more pessimistic in their risk perceptions (for all types of risks) than the currently serving non-combat participants. Since combat participants have probably experienced severe and distressing situations during their service, they became more pessimistic regarding their probability of being hurt in different situations in life. This result supports the availability heuristic theory and the findings of previous studies indicating that those who directly experience distressing events tend to overestimate danger. The findings also indicate that soldiers who feel higher levels of incidental fear and anger have more pessimistic risk perceptions; in particular, respondents who experienced combat service have pessimistic risk perceptions if they feel higher levels of fear. These results can be explained by the compulsory army service in Israel, which constitutes a focused threat to soldiers' safety during their period of service. Thus, in this stressful environment, negative incidental emotions, even during routine times, correlate with higher risk perceptions. In conclusion, the current study results suggest that combat army service shapes risk perceptions and the way young people control their negative incidental emotions in everyday life. Recognizing the factors affecting risk perceptions among soldiers is important for better understanding the impact of army service on young people. Keywords: army service, combat soldiers, incidental emotions, risk perceptions
Procedia PDF Downloads 234