Search results for: speed hump detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6103

553 Comparison between Experimental and Numerical Studies of Fully Encased Composite Columns

Authors: Md. Soebur Rahman, Mahbuba Begum, Raquib Ahsan

Abstract:

A composite column is a structural member that uses a combination of structural steel shapes, pipes or tubes, with or without reinforcing steel bars, and reinforced concrete to provide adequate load-carrying capacity to sustain either axial compressive loads alone or a combination of axial loads and bending moments. Composite construction takes advantage of the speed of construction, the light weight and strength of steel, and the higher mass, stiffness, damping properties and economy of reinforced concrete. The most usual types of composite columns are concrete-filled steel tubes and partially or fully encased steel profiles. The fully encased composite (FEC) column provides compressive strength, stability, stiffness, improved fireproofing and better corrosion protection. This paper reports experimental and numerical investigations of the behaviour of concrete encased steel composite columns subjected to short-term axial load. In this study, eleven short FEC columns with square cross sections were constructed and tested to examine their load-deflection behaviour. The main variables in the tests were concrete compressive strength, cross-sectional size and percentage of structural steel. A nonlinear 3-D finite element (FE) model has been developed to analyse the inelastic behaviour of the steel, concrete and longitudinal reinforcement, as well as the effect of concrete confinement in the FEC columns. The FE models have been validated against the current experimental study conducted in the laboratory and against published experimental results under concentric load. It has been observed that the FE model is able to predict the experimental behaviour of FEC columns under concentric gravity loads with good accuracy, and good agreement has been achieved between the complete experimental and numerical load-deflection behaviour in this study. 
The capacities of each constituent of the FEC columns, namely the structural steel, concrete and rebars, were also determined from the numerical study. Concrete is observed to provide around 57% of the total axial capacity of the column, whereas the steel I-section contributes the rest of the capacity as well as the ductility of the overall system. The nonlinear FE model developed in this study is also used to explore the effect of concrete strength and percentage of structural steel on the behaviour of FEC columns under concentric loads. The axial capacity of FEC columns has been found to increase significantly with increasing concrete strength.
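The constituent breakdown described above can be sketched with the standard plastic (squash-load) resistance of an encased section, N_pl = 0.85·fck·Ac + fy·As + fyr·Ar. This is a generic textbook formula, not the paper's FE model, and all section properties below are assumed example values, not the paper's specimens:

```python
# Illustrative squash-load breakdown for a fully encased composite (FEC) column.
# Units: stresses in MPa (N/mm^2), areas in mm^2, forces in N.

def squash_load(fck, Ac, fy, As, fyr, Ar):
    """Return total plastic axial resistance and each constituent's share."""
    Pc = 0.85 * fck * Ac      # concrete contribution
    Ps = fy * As              # structural steel I-section contribution
    Pr = fyr * Ar             # longitudinal rebar contribution
    total = Pc + Ps + Pr
    shares = {"concrete": Pc / total, "steel": Ps / total, "rebar": Pr / total}
    return total, shares

# Assumed 200x200 mm section (net concrete area), small I-section, four 12 mm bars
total, shares = squash_load(fck=30.0, Ac=37500.0, fy=350.0, As=2000.0, fyr=420.0, Ar=452.0)
print(round(total / 1e3), "kN", {k: round(v, 2) for k, v in shares.items()})
```

With these assumed numbers the concrete carries roughly half of the axial capacity, in the same spirit as the ~57% share reported in the abstract; the exact split depends on the material strengths and steel percentage.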

Keywords: composite, columns, experimental, finite element, fully encased, strength

Procedia PDF Downloads 282
552 Technical and Economic Potential of Partial Electrification of Railway Lines

Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong

Abstract:

Electrification of railway lines makes it possible to increase the speed, power, capacity and energy efficiency of rolling stock. However, this process of electrification is complex and costly. An electrification project is not just about the design of the catenary; it also includes the installation of structures around the electrification, such as substations, electrical isolation, signalling, telecommunication and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The other 47% of railways use diesel locomotives and represent only 10% of the traffic (tonne-km). For this reason, a new type of electrification, less expensive than the usual one, is needed to enable the modernization of these railways. One solution could be the use of hybrid trains. This technology opens up new opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, the power supply of these hybrid trains could be provided either by the catenary or by an on-board energy storage system (ESS). Thus, the on-board ESS would meet the energy needs of the train along the non-electrified zones, while in electrified zones the catenary would feed the train and recharge the on-board ESS. This paper deals with identifying the technical and economic potential of the partial electrification of railway lines. The study provides different electrification scenarios in which the most expensive places to electrify are instead covered by the on-board ESS. The target is to reduce the cost of new electrification projects, i.e., to reduce the cost of electrification infrastructure while not increasing the cost of the rolling stock. In this study, scenarios are constructed as a function of the electrification cost of each structure. 
The cost of electrification varies considerably because the installation of catenary supports in tunnels, bridges and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the railway. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case: a railway line located in the south of France. The energy consumption and the power demanded at each point of the line for each power supply (catenary or on-board ESS) are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study of the technical and economic potential ends with the identification of the most economically interesting electrification scenario.
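The scenario comparison described above can be sketched as a simple minimisation: for each candidate plan, total cost = catenary/civil cost of the electrified zones + ESS cost sized for the energy needed in the remaining zones. The zone lengths, per-km costs and battery pricing below are invented for illustration only:

```python
# Hedged sketch of partial-electrification scenario costing.
# (name, length_km, catenary_cost_per_km_kEUR, ESS_energy_needed_if_unelectrified_kWh)
ZONES = [
    ("plain",   40, 900,  300),
    ("tunnel",   5, 3000,  60),
    ("viaduct",  8, 2500,  90),
]
ESS_COST_PER_KWH_KEUR = 0.5  # assumed on-board battery cost

def scenario_cost(electrified):
    """Catenary cost of electrified zones plus ESS cost for the rest, in kEUR."""
    catenary = sum(l * c for n, l, c, e in ZONES if n in electrified)
    ess_kwh = sum(e for n, l, c, e in ZONES if n not in electrified)
    return catenary + ess_kwh * ESS_COST_PER_KWH_KEUR

scenarios = {
    "full":            {"plain", "tunnel", "viaduct"},
    "skip_tunnel":     {"plain", "viaduct"},
    "skip_structures": {"plain"},  # leave tunnels and viaducts to the ESS
}
best = min(scenarios, key=lambda s: scenario_cost(scenarios[s]))
for s in scenarios:
    print(s, scenario_cost(scenarios[s]), "kEUR")
print("cheapest:", best)
```

With these made-up numbers, skipping the expensive tunnel and viaduct zones is cheapest, which mirrors the paper's argument that the costliest structures are the first candidates for ESS coverage.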

Keywords: electrification, hybrid, railway, storage

Procedia PDF Downloads 415
551 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin

Authors: Ankush Raina, Ankush Anand

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and it is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower and upper body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteers were apparently healthy, physically active, free of any lower and upper extremity bone injuries for the past one year, and had no medical or orthopaedic conditions that might affect their participation. Ten subjects were purposively assigned to each of three groups: lower body plyometric training (LBPT), upper body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises at moderate intensity: lower body (ankle hops, squat jumps, tuck jumps) and upper body (push-ups, medicine ball chest throws and side throws). The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while the paired samples t-test was used to test the hypotheses. The results revealed that athletes trained using LBPT showed greater reductions in ECG parameters than those in the control group. 
The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in ECG parameters following ten weeks of plyometric training, except in the Q wave, R wave and S wave (QRS) complex. Based on the findings of the study, it was recommended, among others, that coaches include both LBPT and UBPT as part of athletes' overall training programmes, from primary to tertiary institutions, to optimise performance, reduce the risk of cardiovascular disease and promote a healthy lifestyle.

Keywords: boundary lubrication, copper oxide, friction, nano diamond

Procedia PDF Downloads 110
550 The Implementation of Inclusive Education in Collaboration between Teachers of Special Education Classes and Regular Classes in a Preschool

Authors: Chiou-Shiue Ko

Abstract:

As is explicitly stipulated in Article 7 of the Enforcement Rules of the Special Education Act as amended in 1998, "in principle, children with disabilities should be integrated with normal children for preschool education". Since then, all cities and counties have been committed to promoting preschool inclusive education. The Education Department, New Taipei City Government, has been actively recruiting advisory groups of professors to assist in the implementation of inclusive education in preschools since 2001. Since 2011, the author of this study has been guiding Preschool Rainbow to implement inclusive education. Through field observations, meetings, and teaching demonstration seminars, this study explored the process of how inclusive education has been successfully implemented in collaboration with teachers of special education classes and regular classes in Preschool Rainbow. The implementation phases for inclusive education in a single academic year include the following: 1) Preparatory stage. Prior to implementation, teachers in special education and regular classes discuss ways of conducting inclusive education and organize reading clubs to read books related to curriculum modifications that integrate the eight education strategies, early treatment and education, and early childhood education programs to enhance their capacity to implement and compose teaching plans for inclusive education. In addition to the general objectives of inclusive education, the objective of inclusive education for special children is also embedded into the Individualized Education Program (IEP). 2) Implementation stage. Initially, a promotional program for special education is implemented for the children to allow all the children in the preschool to understand their own special qualities and those of special children. 
After the implementation of three weeks of reverse inclusion, the children in the special education classes are put into groups and enter the regular classes twice a week to implement adjustments to their inclusion in the learning area and the curriculum. In 2013, further cooperation was carried out with adjacent hospitals to perform developmental screening activities for the early detection of children with developmental delays. 3) Review and reflection stage. After the implementation of inclusive education, all teachers in the preschool are divided into two groups to record their teaching plans and the lessons learned during implementation. The effectiveness of implementing the objectives of inclusive education is also reviewed. With the collaboration of all teachers, in 2015, Preschool Rainbow won New Taipei City’s “Preschool Light” award as an exceptional model for inclusive education. Its model of implementing inclusive education can be used as a reference for other preschools.

Keywords: collaboration, inclusive education, preschool, teachers, special education classes, regular classes

Procedia PDF Downloads 416
549 Evaluation of the Surveillance System for Rift Valley Fever in Ruminants in Mauritania, 2019

Authors: Mohamed El Kory Yacoub, Ahmed Bezeid El Mamy Beyatt, Djibril Barry, Yanogo Pauline, Nicolas Meda

Abstract:

Introduction: Rift Valley Fever (RVF) is a zoonotic arboviral disease that severely affects ruminants, as well as humans. It causes abortions in pregnant females and deaths in young animals. The disease occurs during periods of heavy rain, which are followed by large numbers of mosquito vectors. The objective of this work is to evaluate the surveillance system for Rift Valley Fever. Methods: We conducted an evaluation of the RVF surveillance system. Data were collected from the analysis of the national database of the Mauritanian Network of Animal Disease Epidemiological Surveillance at the Ministry of Rural Development, covering RVF cases notified from the whole national territory, and from questionnaires and interviews with all persons involved in RVF surveillance at the central level. The quality of the system was assessed by analyzing the quantitative attributes defined by the Centers for Disease Control and Prevention. Results: In 2019, 443 cases of RVF were notified by the surveillance system, of which 36 were positive. Among the notified cases, the 0 to 3-year age group of small ruminants was the most represented, with 49.21% of cases, followed by large ruminants in the 0 to 7-year age group with 33.33%; 11.11% of cases were older than seven years. The completeness of the data varied between 14.2% (age) and 100% (species). Most positive cases were recorded between October and November 2019 in seven different regions. Attribute analysis showed that 87% of the respondents were able to apply the case definition correctly, and 78.8% said they were familiar with the reporting and feedback loop of the RVF data. 90.3% of the respondents found the system easy to use, and 95% said it was easy for them to transmit their data to the next level. Conclusions: The epidemiological surveillance system for Rift Valley Fever in Mauritania is simple and representative. 
However, data quality, stability and responsiveness are average, as the diagnosis of the disease requires laboratory confirmation and the average delay for this confirmation is long (13 days). Consequently, the incompleteness of the recorded data and of the description of cases in terms of time, place and animal, together with the delays between the stages of the surveillance system, can hamper prevention, early detection of epidemics, and the initiation of an adequate response.
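The data-quality attribute above is essentially field-level completeness: the share of notified case records with each field filled in (the abstract reports completeness from 14.2% for age up to 100% for species). A minimal sketch, with fabricated case records rather than the study's data:

```python
# Field completeness over notified case records (illustrative data only).
cases = [
    {"species": "ovine",   "age": 2,    "region": "Trarza"},
    {"species": "bovine",  "age": None, "region": "Brakna"},
    {"species": "caprine", "age": None, "region": None},
    {"species": "ovine",   "age": 1,    "region": "Gorgol"},
]

def completeness(records, field):
    """Percentage of records where `field` is filled in."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return 100.0 * filled / len(records)

for f in ("species", "age", "region"):
    print(f, completeness(cases, f), "%")
```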

Keywords: evaluation, epidemiological surveillance system, rift valley fever, mauritania, ruminants

Procedia PDF Downloads 141
548 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax used in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of ceramic shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, leaving an empty mold that is later filled with the molten metal. It has been verified that PLA models reduce the cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, sculptures created with this technique have a size limit: when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer makes it possible to obtain more detailed and smaller pieces than FDM. Such small models are quite difficult and complex to melt out using the lost-wax technique of ceramic shell casting. As alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of placing the model in a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and fire the mold. 
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute the metal properly. Because microcasting requires expensive materials and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming can use models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for use with 3D printers, both with PLA and with resin, and first tests are being carried out to validate the electroforming of micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 127
547 Molecular Farming: Plants Producing Vaccine and Diagnostic Reagent

Authors: Katerina H. Takova, Ivan N. Minkov, Gergana G. Zahmanova

Abstract:

Molecular farming is the production of recombinant proteins in plants with the aim of using the protein as a purified product, as a crude extract, or directly in planta. Plants are gaining attention as expression systems compared to other platforms due to the cost-effective production of pharmaceutically important proteins, appropriate post-translational modifications, assembly of complex proteins, and the absence of human pathogens, to name a few advantages. In addition, transient expression in plant leaves enables production of recombinant proteins within a few weeks. Hepatitis E virus (HEV) is a causative agent of acute hepatitis. HEV causes epidemics in developing countries and is primarily transmitted through the fecal-oral route. Presently, all efforts to develop a hepatitis E vaccine are focused on the Open Reading Frame 2 (ORF2) capsid protein, as it contains epitopes that can induce neutralizing antibodies. For our purpose, we used the CPMV-based vector pEAQ-HT for transient expression of HEV ORF2 in Nicotiana benthamiana. Molecular analyses (Western blot and ELISA) showed that the HEV ORF2 capsid protein was expressed in plant tissue at high yield, up to 1 g/kg of fresh leaf tissue. Electron microscopy showed that the capsid protein spontaneously assembled into virus-like particles (VLPs), albeit at low abundance; these are highly immunogenic structures suitable for vaccine development. The expressed protein was recognized by both human and swine HEV-positive sera and can be used as a diagnostic reagent for the detection of HEV infection. Production of HEV capsid protein in plants is a promising technology for further HEV vaccine investigations. Here, we report rapid, high-yield transient expression of a recombinant protein in plants, suitable for vaccine production as well as for use as a diagnostic reagent. 
Acknowledgments: The authors' research on HEV is supported by grants from the PlantaSYST project under the H2020 Widening Programme, as well as by the UK Biotechnological and Biological Sciences Research Council (BBSRC) Institute Strategic Programme Grant 'Understanding and Exploiting Plant and Microbial Secondary Metabolism' (BB/J004596/1). The authors thank Prof. George Lomonossoff (JIC, Norwich, UK) for his contribution.

Keywords: hepatitis E virus, plant molecular farming, transient expression, vaccines

Procedia PDF Downloads 141
546 Hypersonic Propulsion Requirements for Sustained Hypersonic Flight for Air Transportation

Authors: James Rate, Apostolos Pesiridis

Abstract:

In this paper, the propulsion requirements for achieving sustained hypersonic flight for commercial air transportation are evaluated. In addition, a design methodology is developed and used to determine the propulsive capabilities of both ramjet and scramjet engines. Twelve configurations are proposed for hypersonic flight using varying combinations of turbojet, turbofan, ramjet and scramjet engines. The optimal configuration was determined based on how well each configuration met the projected requirements for hypersonic commercial transport. The configurations were separated into four sub-configurations, each comprising three unique derivations. The first sub-configuration comprised four afterburning turbojets and either one or two ramjets idealised for Mach 5 cruise; the number of ramjets required depended on the thrust needed to accelerate the vehicle from the speed at which the turbojets cut out up to Mach 5 cruise. The second comprised four afterburning turbojets and either one or two scramjets, similar to the first configuration. The third used four turbojets, one scramjet and one ramjet, the ramjet aiding acceleration from Mach 3 to Mach 5. The fourth configuration was the same as the third but implemented turbofan engines instead of turbojets for the preliminary acceleration of the vehicle. From calculations that determined the fuel consumption at incremental Mach numbers, this paper found that the ideal solution would require four turbojet engines and two scramjet engines. The ideal mission profile was determined to be an 8000 km sortie, based on an average of popular long-haul routes with strong business ties, including Los Angeles to Tokyo, London to New York and Dubai to Beijing. This paper deemed that these routes would benefit from hypersonic transport links based on the previously mentioned factors. 
This paper has found that this configuration would be sufficient for the 8000 km flight to be completed in approximately two and a half hours, and would consume less fuel than Concorde in doing so. However, this propulsion configuration still results in a greater fuel cost than a conventional passenger aircraft. A number of assumptions had to be made for this theoretical approach, but the authors believe that this investigation lays the groundwork for appropriately framing the propulsion requirements for sustained hypersonic flight, and it provides a methodology and a focus for the development of the propulsion systems that commercial hypersonic air transportation would require.
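The two-and-a-half-hour figure above can be sanity-checked with back-of-the-envelope segment arithmetic: subsonic climb, supersonic acceleration, Mach 5 cruise, and descent. The segment lengths, speeds and the assumed speed of sound at cruise altitude are illustrative guesses, not the paper's mission profile:

```python
# Rough flight-time check for an 8000 km hypersonic sortie (assumed segments).
A_CRUISE = 295.0              # speed of sound at cruise altitude, m/s (approx.)

segments = [                  # (distance_km, average_speed_m_per_s)
    (500,  0.9 * 340),        # climb / subsonic leg
    (1000, 3.0 * A_CRUISE),   # acceleration leg, ~Mach 3 average
    (6000, 5.0 * A_CRUISE),   # Mach 5 cruise
    (500,  0.9 * 340),        # descent / approach
]

hours = sum(d * 1000 / v for d, v in segments) / 3600
print(f"total {sum(d for d, _ in segments)} km in {hours:.2f} h")
```

Under these assumptions the total comes out in the low-to-mid two-hour range, consistent with the paper's approximately two-and-a-half-hour claim.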

Keywords: hypersonic, ramjet, propulsion, scramjet, turbojet, turbofan

Procedia PDF Downloads 308
545 In Silico Analysis of Salivary miRNAs to Identify the Diagnostic Biomarkers for Oral Cancer

Authors: Andleeb Zahra, Itrat Rubab, Sumaira Malik, Amina Khan, Muhammad Jawad Khan, M. Qaiser Fatmi

Abstract:

Oral squamous cell carcinoma (OSCC) is one of the most common cancers worldwide. Recent studies have highlighted the role of miRNAs in disease pathology, indicating their potential use in an early diagnostic tool. miRNAs are small, single-stranded, non-coding RNAs that regulate gene expression by destabilising target mRNAs or repressing their translation. miRNAs play important roles in modifying various cellular processes such as cell growth, differentiation, apoptosis, and immune response. Dysregulated expression of miRNAs is known to affect cell growth, and miRNAs may function as tumor suppressors or oncogenes in various cancers. Objectives: The main objectives of this study were to characterize the extracellular miRNAs involved in oral cancer (OC) to assist early detection of cancer, as well as to propose a list of genes that can potentially be used as biomarkers of OC. We used gene expression data from microarrays already available in the literature. Materials and Methods: In the first step, a total of 318 miRNAs involved in oral carcinoma were shortlisted, followed by the prediction of their target genes. Simultaneously, the differentially expressed genes (DEGs) of oral carcinoma from all experiments were identified. The genes common between the lists of experimentally validated DEGs of OC and the target genes of each miRNA were then identified; these common genes are the targets of specific miRNAs involved in OC. Finally, a list of genes was generated that may be used as biomarkers of OC. Results and Conclusion: In the results, we included some cancer pathways to show the change in gene expression under the control of specific miRNAs. Ingenuity Pathway Analysis (IPA) provided a list of major biomarkers, such as CDH2 and CDK7, and functional enrichment analysis identified the role of miRNAs in major pathways affected by cancer, such as the cell adhesion molecules pathway. We observed that at least 25 genes are regulated by the maximum number of miRNAs, and thereby they can be used as biomarkers of OC. 
To better understand the role of miRNAs with respect to their target genes, further experiments are required; our study provides a platform for better understanding the miRNA-OC relationship at the genomic level.
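The intersection step described in the methods (DEGs ∩ predicted miRNA targets, then ranking genes by how many miRNAs hit them) can be sketched as below. The gene and miRNA names are placeholders for illustration, not the study's actual lists:

```python
# Candidate biomarkers: genes that are both differentially expressed and
# predicted targets of OC-associated miRNAs; genes hit by many miRNAs rank highest.
from collections import Counter

degs = {"CDH2", "CDK7", "EGFR", "TP53", "MYC"}      # assumed DEG list
mirna_targets = {                                    # assumed target predictions
    "miR-A": {"TP53", "CDH2", "PTEN"},
    "miR-B": {"CDH2", "EGFR"},
    "miR-C": {"CDK7", "MYC", "CDH2"},
}

hits = Counter()
for mir, targets in mirna_targets.items():
    for gene in targets & degs:      # targets that are also DEGs
        hits[gene] += 1

# Genes regulated by the largest number of miRNAs are proposed as biomarkers.
print(hits.most_common())
```

In this toy example CDH2 is targeted by all three miRNAs and would top the candidate list, mirroring how the study's 25 most-regulated genes were proposed as biomarkers.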

Keywords: biomarkers, gene expression, miRNA, oral carcinoma

Procedia PDF Downloads 362
544 Familiarity with Flood and Engineering Solutions to Control It

Authors: Hamid Fallah

Abstract:

Undoubtedly, flood is known as a natural disaster, and in practice flood is considered the most terrible natural disaster in the world, both in terms of loss of life and financial losses. From 1988 to 1997, about 390,000 people were killed by natural disasters in the world, 58% of them by floods, 26% by earthquakes, and 16% by storms and other disasters. The total damages over these 10 years were about 700 billion dollars, of which 33%, 29% and 28% were related to floods, storms and earthquakes, respectively. In this regard, the worrisome point has been the increasing trend of flood deaths and damages worldwide in recent decades. Population and asset growth in flood plains, changes in hydro-systems and the destructive effects of human activities have been the main reasons for this increase. During rain and snowfall, some of the water is absorbed by the soil and plants, a percentage evaporates, and the rest flows as runoff. Floods occur when the soil and plants cannot absorb the rainfall and, as a result, the natural river channel lacks the capacity to pass the generated runoff. On average, almost 30% of precipitation is converted into runoff, a share that increases with snowmelt. Floods create an area around the river called the flood plain. River floods are often caused by heavy rains, in some cases accompanied by snowmelt. A flood that flows down a river with little or no warning is called a flash flood. Casualties from these rapid floods, which occur in small watersheds, are generally higher than those from large river floods. Coastal areas are also subject to flooding from waves caused by strong storms over the ocean surface or by undersea earthquakes. Floods not only damage property and endanger the lives of humans and animals, but also have other effects. 
Runoff caused by heavy rains causes soil erosion upstream and sedimentation problems downstream. The habitats of fish and other animals are often destroyed by floods. The high speed of the current increases the damage. Long-duration floods stop traffic and prevent drainage and the economic use of land. Bridge supports, river banks, sewage outlets and other structures are damaged, and shipping and hydropower generation are disrupted. The economic losses of floods worldwide are estimated at tens of billions of dollars annually.

Keywords: flood, hydrological engineering, GIS, dam, small hydropower, suitability

Procedia PDF Downloads 54
543 Discovering Event Outliers for Drug as Commercial Products

Authors: Arunas Burinskas, Aurelija Burinskiene

Abstract:

On average, ten percent of drugs (commercial products) are unavailable in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period that is too long. Therefore, one critical issue is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest predicting average sales per sales day, which helps to protect the data from downward or upward distortion. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that high-demand-variability products had outliers. Among the analyzed drugs, which are commercial products: i) high-demand-variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) mid-demand-variability drugs have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though some outlier detection methods exist for drug data cleaning, they have not been used to minimize the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data cleaning method, for outlier adjustment. In the paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers in cancer drug data, with positive results reported. Grubbs' test detects outliers which exceed the boundaries of the normal distribution. The result is a probability that indicates the core data of actual sales. 
The outlier test measures the difference between the sample mean and the most extreme data point relative to the standard deviation. The test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with Grubbs' test. The suggested framework is applicable during shortage events and recovery periods, has practical value, and could be used to minimize the recovery period required after a shortage event occurs.
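The iterative cleaning step above can be sketched with a standard two-sided, single-outlier Grubbs test applied repeatedly: G = max|x − mean| / s is compared against a critical value derived from the t distribution at the chosen confidence level (99% here, i.e. alpha = 0.01). The sales series is invented for illustration; this is the textbook test, not the paper's exact pipeline:

```python
# Iterative Grubbs outlier removal (two-sided, one outlier per pass).
import math
from statistics import mean, stdev
from scipy import stats

def grubbs_critical(n, alpha=0.01):
    """Critical G value for sample size n at significance alpha."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / math.sqrt(n) * math.sqrt(t * t / (n - 2 + t * t))

def remove_outliers(data, alpha=0.01):
    data = list(data)
    while len(data) > 2:
        m, s = mean(data), stdev(data)
        g, x = max((abs(v - m) / s, v) for v in data)
        if g > grubbs_critical(len(data), alpha):
            data.remove(x)   # drop the most extreme value and retest
        else:
            break
    return data

# Assumed daily sales: 0 models a shortage day, 90 a stock-up spike
daily_sales = [12, 14, 11, 13, 12, 15, 13, 0, 14, 12, 90]
print(remove_outliers(daily_sales))
```

Both the shortage-day zero and the spike are flagged in turn, leaving the "core" sales level from which average sales per sales day could be estimated.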

Keywords: drugs, Grubbs' test, outlier, shortage event

Procedia PDF Downloads 127
542 Concordance between Biparametric MRI and Radical Prostatectomy Specimen in the Detection of Clinically Significant Prostate Cancer and Staging

Authors: Rammah Abdlbagi, Egmen Tazcan, Kiriti Tripathi, Vinayagam Sudhakar, Thomas Swallow, Aakash Pai

Abstract:

Introduction and Objectives: MRI has an increasing role in the diagnosis and staging of prostate cancer. Multiparametric MRI comprises multiple sequences, including T2 weighting, diffusion weighting, and dynamic contrast enhancement (DCE). Administration of DCE is expensive, time-consuming, and requires medical supervision due to the risk of anaphylaxis. Biparametric MRI (bpMRI), without DCE, overcomes many of these issues; however, there are conflicting data on its accuracy. Furthermore, data on the concordance between bpMRI lesions and pathology specimens, as well as on the rates of cancer stage upgrading after surgery, are limited in the available literature. This study examines the diagnostic accuracy of bpMRI in the diagnosis of prostate cancer and in the radiological assessment of prostate cancer staging. Specifically, we aimed to evaluate the ability of bpMRI to accurately localise malignant lesions, to better understand its accuracy and its application in MRI-targeted biopsies. Materials and Methods: One hundred and forty patients who underwent bpMRI prior to radical prostatectomy (RP) were retrospectively reviewed at a single institution. Histological grade from the prostate biopsy was compared with surgical specimens from RP. Clinically significant prostate cancer (csPCa) was defined as Gleason grade group ≥2. bpMRI staging was compared with RP histology. Results: The overall sensitivity of bpMRI in diagnosing csPCa, independent of location and staging, was 98.87%. Of the 140 patients, 29 (20.71%) had their prostate biopsy histology upgraded at RP. Sixty-one patients (43.57%) had csPCa noted on RP specimens in areas that were not identified on bpMRI. Fifty-five (39.29%) were upstaged after RP from the original bpMRI staging. Conclusions: While the overall sensitivity of bpMRI in predicting any clinically significant cancer was good, there was notably poor concordance in tumour location between bpMRI and the eventual RP specimen. 
The results suggest that caution should be exercised when using bpMRI for targeted prostate biopsies and validate the continued role of systematic biopsies. Furthermore, a significant number of patients were upstaged at RP from their original staging with bpMRI. Based on these findings, bpMRI results should be interpreted with caution: bpMRI can underestimate TNM stage, which requires careful consideration when planning the treatment strategy.
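The percentages reported above follow directly from the stated patient counts; a quick arithmetic check (illustrative Python, using only figures given in the abstract):

```python
patients = 140

upgraded = 29        # biopsy histology upgraded at radical prostatectomy (RP)
missed_cspca = 61    # csPCa found at RP in areas not identified on bpMRI
upstaged = 55        # TNM stage increased at RP versus bpMRI staging

print(round(upgraded / patients * 100, 2))      # 20.71 %
print(round(missed_cspca / patients * 100, 2))  # 43.57 %
print(round(upstaged / patients * 100, 2))      # 39.29 %
```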

Keywords: biparametric MRI, Ca prostate, staging, post prostatectomy histology

Procedia PDF Downloads 53
541 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources – such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done so using deterministic models that extrapolated historic data on weather patterns and power demand, and that ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels – 1%, 5% and 10% – important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. 
The paper concludes that allowing for uncertainty in expected power output - rather than extrapolating historic data - paints a more realistic picture and reveals important departures from results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid - and assigning it different thresholds - reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
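The CVaR constraint described above can be illustrated with a minimal empirical sketch: given simulated hourly scenarios, the CVaR at level α is the mean of the worst α-fraction of shortfalls. All distributions, capacities, and names below are invented for illustration and are not the paper's actual model:

```python
import numpy as np

def cvar(losses, alpha=0.05):
    """Empirical CVaR: mean of the worst alpha-fraction of losses."""
    losses = np.sort(losses)[::-1]               # descending
    k = max(1, int(np.ceil(alpha * losses.size)))
    return losses[:k].mean()

rng = np.random.default_rng(42)
# hypothetical hourly scenarios (GW): demand and renewable output
demand = rng.normal(60, 8, size=10_000)
wind = rng.gamma(2.0, 10.0, size=10_000)         # skewed wind output
solar = rng.uniform(0, 25, size=10_000)

def unmet(storage_gw, overcapacity):
    """Shortfall after dispatching RE (scaled by overcapacity) and storage."""
    residual = demand - overcapacity * (wind + solar) - storage_gw
    return np.maximum(residual, 0.0)

portfolio_a = cvar(unmet(storage_gw=5, overcapacity=1.0))
portfolio_b = cvar(unmet(storage_gw=15, overcapacity=1.2))
# more storage and RE overcapacity lower the tail risk of an unbalanced grid
```

In the full model such a quantity enters the optimization as a constraint (CVaR ≤ threshold) rather than being compared after the fact.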

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 157
540 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials

Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar

Abstract:

Application of shear thickening fluid (STF) has been proven to increase the impact resistance of textile structures, enabling their use as body armor materials. In the present research, STF was applied on Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance. It was observed that achieving a fair amount of add-on of STF on Kevlar fabric is difficult, as Kevlar fabric comes with a pre-coating of PTFE which hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, where Kevlar fabric is treated with STF once at a single pressure, in the sequential padding method the Kevlar fabrics were treated twice, in sequence, using a combination of two pressures per sample. 200 GSM Kevlar fabrics were used in the present study. STF was prepared by adding nano-silica to PEG at 70% (w/w) concentration. Ethanol was added to the STF at a fixed ratio to reduce viscosity. A high-speed homogenizer was used to make the dispersion. In total, nine STF-treated Kevlar fabric samples were prepared using varying combinations and sequences of three levels of padding pressure (0.5, 1.0 and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance of the samples was also tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to assess impact resistance under realistic conditions, a low-velocity ballistic test at 165 m/s was also performed to confirm the results of the impact resistance test. It was observed that both add-on% and impact energy absorption of Kevlar fabrics increase significantly with the sequential padding process as compared to the untreated as well as single-stage padded fabrics. 
It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. The sequentially padded Kevlar fabric shows an almost 125% increase in ballistic impact energy absorption (40.62 J) as compared to the untreated fabric (18.07 J). The results owe to the fact that treatment at high pressure during the first padding is responsible for uniform distribution of STF within the fabric structure, while padding at the second, lower pressure ensures a high add-on of STF for overall improvement in the impact resistance of the fabric. Therefore, it is concluded that the sequential padding process may help to improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
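The "almost 125%" figure quoted above can be verified directly from the two absorbed-energy values:

```python
untreated_j = 18.07   # ballistic impact energy absorbed, untreated fabric (J)
treated_j = 40.62     # sequentially padded fabric (J)

increase_pct = (treated_j - untreated_j) / untreated_j * 100
print(round(increase_pct, 1))  # 124.8, i.e. "almost 125%"
```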

Keywords: body armor, impact resistance, Kevlar, shear thickening fluid

Procedia PDF Downloads 230
539 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow obtaining, with repeatability, parts with complex shapes, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear in the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. 
The tests were made based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) to optimize the process (better roughness using the cutting forces that do not compromise the material structure and the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
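The characterization and optimization steps above rest on two standard formulas: the Vickers hardness relation HV = 1.8544·F/d² (F in kgf, d the mean indent diagonal in mm) and the Taguchi "smaller-the-better" signal-to-noise ratio commonly used when minimizing responses such as surface roughness or cutting force. A sketch with invented example numbers:

```python
import math

def vickers_hardness(load_kgf, diag_mm):
    """HV = 1.8544 * F / d^2, in kgf/mm^2 (F in kgf, d in mm)."""
    return 1.8544 * load_kgf / diag_mm ** 2

def sn_smaller_is_better(values):
    """Taguchi S/N ratio (dB) for a response to be minimized."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# hypothetical measurements, for illustration only
hv = vickers_hardness(load_kgf=0.5, diag_mm=0.04)  # 579.5 kgf/mm^2
sn = sn_smaller_is_better([0.80, 0.90, 0.85])      # roughness replicates (µm)
```

In a Taguchi analysis, the parameter combination with the highest S/N ratio across replicates is the candidate optimum, with ANOVA apportioning each factor's contribution.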

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 146
538 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes

Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini

Abstract:

Background: There is growing evidence that a cerebrovascular accident (CVA) can be a consequence of COVID-19 infection. Understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of virtual reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed right globus pallidus, thalamus, and internal capsule ischemic stroke. Conventional rehabilitation was started two weeks later, with VR included. This game-based VR technology developed for stroke patients was based on upper extremity exercises and functions for stroke. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency. Mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed based on upper extremity physiotherapy exercises for post-stroke patients to increase the active, voluntary movement of the upper extremity joints and to improve function. The conventional program was initiated with active exercises: shoulder sanding for joint ROMs, shoulder walking, shoulder wheel, and combination movements of the shoulder, elbow, and wrist joints; alternating flexion-extension and pronation-supination movements; and pegboard and Purdue pegboard exercises. Fine-movement training included smart gloves, biofeedback, finger ladder, and writing. The difficulty of the game increased at each stage of the practice with progress in patient performance. Outcome: After 6 weeks of treatment, gait and speech were normal and upper extremity strength had improved to near-normal status. No adverse effects were noted. 
Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with covid-19 related CVA. The safety of newly developed instruments for such cases provides new approaches to improve the therapeutic outcomes and prognosis as well as increased satisfaction rate among patients.

Keywords: covid-19, stroke, virtual reality, rehabilitation

Procedia PDF Downloads 133
537 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India

Authors: Harshan Tee Pee

Abstract:

Climate elements include surface temperature, rainfall patterns, humidity, type and amount of cloudiness, air pressure, and wind speed and direction. Change in one element can have an impact on the regional climate. Scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts of the world while leading to increased rainfall in others. Drought is a slowly advancing, creeping phenomenon whose effects accumulate over a long period of time. Droughts are naturally linked with aridity, but they occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare, and ecosystems. A drought condition occurs at least every three years in India, which is among the most vulnerable drought-prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge, as a result of the disruption of many economic activities. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that the cultivated area as well as the number of cultivating households decreased after the drought, indicating a shift in livelihood: households moved from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops followed from the negative income from these crops in the previous agricultural season. All the landholding categories of households except landlords had negative income in the drought year, and income disparities between households were also higher in that year. 
In the drought year, the cost of cultivation was higher for all landholding categories due to increased irrigation and input costs. In the drought year, agricultural products (50 per cent of the total) were used for household consumption rather than sold in the market. It is evident from the study that a livelihood based on natural resources became less attractive to the people due to the risk involved in it, and people were moving to lower-risk livelihoods for their sustenance.

Keywords: climate change, drought, agriculture economics, disaster impact

Procedia PDF Downloads 106
536 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new picture-based research methods. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, both technological and social (Edo et al. 2013). Low-price digital cameras and iPhones, high information transmission speed, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed the photographing behavior of people. Consequently, there is less resistance to taking and processing photographs for most people in the developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). We have identified the most promising fields for Picture Mining, inductively and through case studies, as: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design. 
Though we have found that Picture Mining should be useful in these fields and areas, we must verify these assumptions. In this study we focus on the field of fashion and design to determine whether picture mining methods are really reliable in this area. To do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion or upload such pictures online, for example to Facebook or Instagram, compared to meals and food, because of the difficulty of taking them. We conclude that we should be more careful in analyzing pictures in the fashion area, for some bias might still exist even though the environment for pictures has changed drastically in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 355
535 Spanish Language Violence Corpus: An Analysis of Offensive Language in Twitter

Authors: Beatriz Botella-Gil, Patricio Martínez-Barco, Lea Canales

Abstract:

The Internet and ICTs are integral to and omnipresent in our daily lives. Technologies have changed the way we see the world and relate to it. The number of companies in the ICT sector increases every year, and there has also been an increase in the work that occurs online, from sending e-mails to the way companies promote themselves. In social life, ICTs have gained momentum. Social networks are useful for keeping in contact with family or friends who live far away. This change in how we manage our relationships using electronic devices and social media has been experienced differently depending on the age of the person. According to currently available data, people are increasingly connected to social media and other forms of online communication. Therefore, it is no surprise that violent content has also made its way into digital media. One important reason for this is the anonymity provided by social media, which creates a sense of impunity in the aggressor. Moreover, it is not uncommon to find derogatory comments attacking a person's physical appearance, hobbies, or beliefs. This is why it is necessary to develop artificial intelligence tools that allow us to keep track of violent comments relating to violent events, so that this type of violent online behavior can be deterred. The objective of our research is to create a guide for detecting and recording violent messages. Our annotation guide begins with a study of the problem of violent messages: first, the characteristics a message should contain for it to be categorized as violent; second, the possibility of establishing different levels of aggressiveness. To build the corpus, we chose the social network Twitter for its ease of obtaining free messages. We chose two recent, highly visible violent cases that occurred in Spain. Both received a high degree of social media coverage and user comments. 
Our corpus has a total of 633 messages, manually tagged according to the characteristics we considered important, such as the verbs used, the presence of exclamations or insults, and the presence of negations. We consider it necessary to create wordlists of terms present in violent messages as indicators of violence, such as lists of negative verbs, insults, and negative phrases. As a final step, we will use machine learning systems to check the data obtained and the effectiveness of our guide.
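A minimal sketch of the wordlist-based indicators described above (the Spanish word lists here are tiny hypothetical placeholders, not the project's actual lexicons):

```python
import re

NEGATIVE_VERBS = {"matar", "golpear", "destruir"}   # hypothetical examples
INSULTS = {"idiota", "imbecil"}                     # hypothetical examples
NEGATIONS = {"no", "nunca", "jamás"}

def violence_features(message):
    """Count simple surface indicators of violence in one tweet."""
    tokens = re.findall(r"[a-záéíóúñü]+", message.lower())
    return {
        "negative_verbs": sum(t in NEGATIVE_VERBS for t in tokens),
        "insults": sum(t in INSULTS for t in tokens),
        "negations": sum(t in NEGATIONS for t in tokens),
        "exclamations": message.count("!"),
    }

feats = violence_features("¡No te soporto, idiota!")
```

Feature dictionaries like this could then feed the machine learning step as a baseline against the manual tags.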

Keywords: human language technologies, language modelling, offensive language detection, violent online content

Procedia PDF Downloads 116
534 Possibilities to Evaluate the Climatic and Meteorological Potential for Viticulture in Poland: The Case Study of the Jagiellonian University Vineyard

Authors: Oskar Sekowski

Abstract:

Current global warming causes changes in the traditional zones of viticulture worldwide. During the 20th century, the average global air temperature increased by 0.89°C. Models of climate change indicate that viticulture, currently concentrated in narrow geographic niches, may move towards the poles, to higher geographic latitudes, changing the traditional viticulture regions. Therefore, there is a need to estimate the climatic conditions and climate change in areas that are not traditionally associated with viticulture, e.g., Poland. The primary objective of this paper is to prepare a methodology to evaluate the climatic and meteorological potential for viticulture in Poland, based on a case study. An additional aim is to evaluate the climatic potential of the mesoregion where a university vineyard is located. Daily data on temperature, precipitation, insolation, and wind speed (1988-2018) from the meteorological station located in Łazy, southern Poland, were used to evaluate 15 climatological parameters and indices connected with viticulture. The next steps of the methodology are based on Geographic Information System (GIS) methods. Topographical factors such as slope gradient and slope exposure were derived from Digital Elevation Models. The spatial distribution of climatological elements was interpolated by ordinary kriging. The values of each factor and index were also ranked and classified. The viticultural potential was determined by integrating two suitability maps, i.e., the topographical and the climatic one, and calculating the average for each pixel. Data analysis shows significant changes in heat accumulation indices, driven by increases in maximum temperature, mostly an increasing number of days with Tmax > 30°C. The climatic conditions of this mesoregion are sufficient for Vitis vinifera viticulture. 
The values of the indicators and of insolation are similar to those in known wine regions located at similar geographical latitudes in Europe. The smallest threat to viticulture in the study area is the occurrence of hail; the greatest is the occurrence of frost in winter. This research provides the basis for evaluating general suitability and climatological potential for viticulture in Poland. To characterize the climatic potential for viticulture, it is necessary to assess the suitability of all climatological and topographical factors that can influence viticulture. The methodology used in this case study shows places where it is possible to establish vineyards. It may also be helpful for wine-makers selecting grape varieties.
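The final integration step, averaging the topographical and climatic suitability maps pixel by pixel, can be sketched with two tiny invented rasters (scores 1-5; the real inputs would be kriged climatic layers and DEM-derived slope/exposure layers):

```python
import numpy as np

# hypothetical 3x3 suitability rasters, scored 1 (poor) to 5 (excellent)
climatic = np.array([[5, 4, 2],
                     [3, 5, 1],
                     [4, 4, 3]])
topographic = np.array([[4, 4, 1],
                        [5, 3, 2],
                        [2, 5, 3]])

# viticultural potential = per-pixel average of the two suitability maps
potential = (climatic + topographic) / 2.0
best = np.unravel_index(np.argmax(potential), potential.shape)
print(potential[best])  # 4.5 at the most suitable pixel
```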

Keywords: climatologic potential, climatic classification, Poland, viticulture

Procedia PDF Downloads 92
533 Analyses of Defects in Flexible Silicon Photovoltaic Modules via Thermal Imaging and Electroluminescence

Authors: S. Maleczek, K. Drabczyk, L. Bogdan, A. Iwan

Abstract:

Industrial applications require solar panels constructed from silicon solar cells to deliver high efficiency. One of the main problems in solar panels is mechanical and structural defects of various kinds, which decrease the generated power. Various techniques are used to analyse defects in solar cells; thermal imaging is a fast and simple method for locating them. The main goal of this work was to analyse defects in constructed flexible silicon photovoltaic modules via thermal imaging and electroluminescence. This work was realized for the GEKON project (No. GEKON2/O4/268473/23/2016), sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. Thermal behavior was observed using a thermographic camera (VIGOcam v50, VIGO System S.A., Poland) with a conventional DC source. Electroluminescence was observed by the Steinbeis Center Photovoltaics (Stuttgart, Germany), equipped with a camera containing a Si-CCD, 16 Mpix Kodak KAF-16803-type detector. The camera has a typical spectral response in the range 350-1100 nm, with a maximum QE of 60% at 550 nm. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) and used to create photovoltaic modules with a size of 160 × 70 cm (containing about 80 single solar cells). Flexible silicon photovoltaic modules on polyamide or polyester fabric were constructed and investigated, taking into consideration anomalies on the surface of the modules. Thermal imaging provided evidence of visible voltage-activated conduction. In electroluminescence images, two regions are noticeable: darker regions, where the solar cell is inactive, and brighter regions corresponding to correctly working photovoltaic cells. 
The electroluminescence method is non-destructive and gives greater image resolution, thereby allowing a more precise evaluation of microcracks in the solar cell after the lamination process. Our study showed good correlation between defects observed by thermal imaging and by electroluminescence. Finally, we can conclude that the thermographic examination of large-scale photovoltaic modules allows fast, simple, and inexpensive localization of defects in single solar cells and modules. Moreover, the thermographic camera was also useful for detecting the electrical interconnections between single solar cells.

Keywords: electro-luminescence, flexible devices, silicon solar cells, thermal imaging

Procedia PDF Downloads 306
532 Part Variation Simulations: An Industrial Case Study with an Experimental Validation

Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru

Abstract:

Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is shrinkage and warpage of the molded parts. All these geometrical variations may have an adverse effect on the quality of the product, its functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect product performance. With the advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process; for our case study, we used the Moldflow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations arising from input parameter variation during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables. 
It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of functional dimensions and geometrical variations. This method will allow us in the future to develop accurate, realistic virtual prototypes based on trusted simulated process variation and, therefore, to increase product quality and potentially decrease the time to market.
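A full factorial DoE over the input parameters named above simply enumerates every combination of the chosen levels; with two levels per factor for four factors this gives 2^4 = 16 runs (the level values below are invented placeholders, not the study's settings):

```python
from itertools import product

# hypothetical two-level settings for four molding inputs
factors = {
    "packing_pressure_mpa": (60, 80),
    "mold_temp_c": (40, 80),
    "melt_temp_c": (220, 260),
    "injection_speed_mm_s": (50, 100),
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 16 runs: a 2^4 full factorial design
```

Each run is then simulated (or molded), and the measured functional dimensions feed the sensitivity analysis.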

Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation

Procedia PDF Downloads 168
531 Effects of Vertimax Training on Agility, Quickness and Acceleration

Authors: Dede Basturk, Metin Kaya, Halil Taskin, Nurtekin Erkmen

Abstract:

In total, 29 recreationally active students of the Selçuk University Physical Training and Sports School participated voluntarily in this study, which was carried out in order to examine the effects of Vertimax training on agility, quickness and acceleration. Three groups took part in this study: a Vertimax training group (N=10), an ordinary training group (N=10) and a control group (N=9). Measurements were carried out in the performance laboratory of the Selçuk University Physical Training and Sports School. A training program for quickness and agility was followed by the subjects 3 days a week (Monday, Wednesday, Friday) for 8 weeks. Subjects in the Vertimax training group and the ordinary training group participated in the training program for quickness and agility. Measurements were applied as pre-test and post-test. Subjects of the Vertimax training group followed the training program with the Vertimax device and subjects of the ordinary training group followed the training program without it. The control group, although recreationally active, did not participate in any program. Four gate photocells were used for measurement, with distances measured in m. Furthermore, a single gate photocell and cones were used for the agility test. Measurements started with 15 minutes of warm-up. Acceleration, quickness and agility tests were applied to the subjects. Three measurements were made for each subject at 3-minute resting intervals, and the best of the three was recorded. The 5 m quickness pre-test value of the Vertimax training group was 1.11±0.06 s and the post-test value was 1.06±0.08 s (P<0.05). The 5 m quickness pre-test value of the ordinary training group was 1.11±0.06 s and the post-test value was 1.07±0.07 s (P<0.05). The 5 m quickness pre-test value of the control group was 1.13±0.08 s and the post-test value was 1.10±0.07 s (P>0.05). 
Upon examination of the 10 m acceleration values before and after the training: the pre-test value of the Vertimax training group was 1.82±0.07 s and the post-test value was 1.76±0.83 s (P>0.05); the pre-test value of the ordinary training group was 1.83±0.05 s and the post-test value was 1.78±0.08 s (P>0.05); the pre-test value of the control group was 1.87±0.11 s and the post-test value was 1.83±0.09 s (P>0.05). Upon examination of the 15 m acceleration values before and after the training: the pre-test value of the Vertimax training group was 2.52±0.10 s and the post-test value was 2.46±0.11 s (P>0.05); the pre-test value of the ordinary training group was 2.52±0.05 s and the post-test value was 2.48±0.06 s (P>0.05); the pre-test value of the control group was 2.55±0.11 s and the post-test value was 2.54±0.08 s (P>0.05). Upon examination of agility performance before and after the training: the agility pre-test value of the Vertimax training group was 9.50±0.47 s and the post-test value was 9.66±0.47 s (P>0.05); the pre-test value of the ordinary training group was 9.99±0.05 s and the post-test value was 9.86±0.40 s (P>0.05); the pre-test value of the control group was 9.74±0.45 s and the post-test value was 9.92±0.49 s (P>0.05). Consequently, it was observed that quickness and acceleration improved significantly following the 8 weeks of the Vertimax training program, while agility did not improve significantly. 
The training practices used in this study may therefore be suitable for situations that require sudden movements and reaching maximum speed in a short time. However, this training practice does not appear to contribute to the development of movements that require sudden changes of direction. More productive and innovative training outcomes may be achieved by using various forms of Vertimax training.

Keywords: vertimax, training, quickness, agility, acceleration

Procedia PDF Downloads 480
530 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. 
By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
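Although the study's comparison runs full CNN training on CIFAR-10, the update rules of the optimizers under test can be sketched in isolation. The following minimal NumPy sketch (illustrative, not the authors' code; the toy quadratic objective and hyperparameters are assumptions) applies SGD, RMSprop and Adam to minimize f(w) = w², making their differing step behavior visible:

```python
import numpy as np

def optimize(update, steps=200):
    """Minimize f(w) = w^2 starting from w = 5 with a given update rule."""
    w, state = 5.0, {}
    for t in range(1, steps + 1):
        g = 2.0 * w                      # gradient of w^2
        w = update(w, g, state, t)
    return abs(w)

def sgd(w, g, state, t, lr=0.1):
    # Plain gradient step.
    return w - lr * g

def rmsprop(w, g, state, t, lr=0.1, rho=0.9, eps=1e-8):
    # Scale the step by a running average of squared gradients.
    state['v'] = rho * state.get('v', 0.0) + (1 - rho) * g * g
    return w - lr * g / (np.sqrt(state['v']) + eps)

def adam(w, g, state, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # First and second moment estimates with bias correction.
    state['m'] = b1 * state.get('m', 0.0) + (1 - b1) * g
    state['v'] = b2 * state.get('v', 0.0) + (1 - b2) * g * g
    m_hat = state['m'] / (1 - b1 ** t)
    v_hat = state['v'] / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

for name, rule in [('SGD', sgd), ('RMSprop', rmsprop), ('Adam', adam)]:
    print(f"{name}: final |w| = {optimize(rule):.2e}")
```

On this toy problem, plain SGD contracts w geometrically, while RMSprop and Adam take roughly constant-magnitude steps near the optimum; on real CNNs the ranking depends on the dataset and the activation function, which is precisely what the study measures.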

Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 109
529 The Importance of Anthropometric Indices for Assessing the Physical Development and Physical Fitness of Young Athletes

Authors: Akbarova Gulnozakhon

Abstract:

Relevance. Physical exercise can prolong the function of the growth zones of long tubular bones, delaying the fusion of the epiphyses and diaphyses and thus increasing body growth. At the same time, intensive strength exercises can accelerate the ossification of bone growth zones and slow growth in length. An influence of physical exercise on the process of biological maturation has also been noted: gymnastics, which requires intense speed and strength loads, delays puberty. On the other hand, the relatively slow puberty of gymnasts has been attributed to the selection of girls with a particular somatotype into this sport. It has been found that the later onset of menstruation in female athletes does not negatively affect maturation or fertility (the ability to procreate), and a normalizing influence of sports on the puberty of girls has been observed. The purpose of the study. Our goal is to study the effect of physical activity of varying intensity on the formation of secondary sexual characteristics and the hormonal status of adolescent girls. Every biological process in an organism is not stationary but fluctuates with a certain frequency; by duration, these include, for example, circadian cycles and infradian cycles, a typical example of the latter being the menstrual cycle. Materials and methods, results. Menstrual function disorders in athletes were detected with a questionnaire containing several sections and sub-sections recording passport data, anthropometric indicators (taking anthropometric indices into account) and information about the menstrual cycle. Of 135 female athletes aged 13 to 16 years engaged in various sports, menstrual function disorders (primary or secondary amenorrhea, irregular menstrual cycles) were noted in 86.7% of gymnasts and in 57.1% of swimmers. The general condition also changes during the menstrual cycle.
In a large percentage of cases, athletes report increased irritability in the premenstrual (45%) and menstrual (36%) phases, and in these phases girls note increased fatigue in 46.5% and 58% of cases, respectively. In girls, secondary sexual characteristics continue to form during puberty, and the clearest indicator of its onset is the age of the first menstruation (menarche). Conclusions. 1. Physical exercise has a positive effect on all major systems of the body and thus promotes health. 2. Alongside its beneficial effect on health, physical exercise can be harmful if the requirements of sports training are not observed.

Keywords: girls health, anthropometric, physical development, reproductive health

Procedia PDF Downloads 97
528 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’, brought about by the new wave of reforms, namely ‘LPG’, in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation supports governments with better-quality decisions, optimized and rational use of resources, easier cost-benefit analysis, and the elimination of redundancy and corruption, with the help of ICT-based application interfaces. ICT-based applications and technologies have enormous potential for effecting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence and cloud computing help governments with the detection of illegal mining, tackling deforestation and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es of e-governance (effective, efficient, easy, empower and equity) and the six Rs of sustainable development (reduce, reuse, recycle, recover, redesign and remanufacture). If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it has confronted governments with a new set of challenges, like the digital divide, e-illiteracy and the technological divide, and with problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking and phishing. Therefore, it is essential to bring in the right mixture of technological and humanistic interventions to address these issues.
This is because technology lacks an emotional quotient, and administration does not work like technology; both remain ineffective unless a blend of technology and a humane face is brought into the administration. The paper empirically analyzes the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies drawn from two diverse fields of administration, and presents a future framework for the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 88
527 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)

Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier

Abstract:

Influenza viruses cause flu epidemics every year and serious pandemics at larger time intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual reimmunization. For a correct vaccination recommendation, the circulating influenza strains have to be detected promptly and precisely and characterized with respect to their antigenic properties. During the 2016/17 flu season, a wrong vaccination recommendation was given because of the long interval between identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic the following winter. Owing to such recurring vaccine mismatches, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes as part of this process chain is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat smears) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for determining the antigen profiles of circulating virus strains. These tests are also difficult to standardize and reproduce, which limits the significance of their results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The viruses are differentiated by a set of specifically designed peptidic recognition molecules that interact differently with the different influenza virus subtypes. The differentiation of influenza subtypes is performed by pattern recognition guided by machine learning algorithms, without any animal experiments.
Synthetic peptides are immobilized in a multiplex format on various platforms (e.g., 96-well microtiter plates, microarrays). Afterwards, the viruses are incubated and analyzed by comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2 and H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. Thereby, the platform could be capable of replacing the current antigenic characterization of influenza strains using ferret hyperimmune sera.
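The pattern-recognition step described above can be sketched in simplified form. In the sketch below, every peptide panel value, the subtype reference patterns, and the nearest-centroid rule are hypothetical illustrations; the actual platform learns its decision rule with machine learning from measured binding signals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference binding patterns: mean signal of each subtype
# across a panel of 8 immobilized peptides (arbitrary units).
reference = {
    "H1N1": np.array([0.9, 0.1, 0.7, 0.2, 0.8, 0.1, 0.3, 0.6]),
    "H3N2": np.array([0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.6, 0.1]),
    "H5N1": np.array([0.5, 0.5, 0.9, 0.1, 0.1, 0.9, 0.2, 0.8]),
}

def classify(signal):
    """Assign the subtype whose reference pattern is closest (Euclidean)."""
    return min(reference, key=lambda s: np.linalg.norm(signal - reference[s]))

# Simulate a noisy measurement of an H3N2 isolate and classify it.
sample = reference["H3N2"] + rng.normal(0.0, 0.05, size=8)
print(classify(sample))
```

In practice the reference patterns would be estimated from labeled virus panels, and a trained classifier would replace the fixed nearest-centroid rule, but the principle is the same: each subtype leaves a distinct fingerprint across the peptide set.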

Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance

Procedia PDF Downloads 143
526 Investigating the Antibacterial Properties and Omega-3 Levels of Evening Primrose Plant Against Multi-Drug Resistant Bacteria

Authors: A. H. Taghdisi, M. Mirmohammadi, S. Kamali

Abstract:

Evening primrose (Oenothera biennis L.) is a biennial herbaceous plant and one of the most important species of medicinal plants in the world. Due to the production of unsaturated fatty acids such as linoleic acid and alpha-linolenic acid in its seeds and roots, and of compounds such as kaempferol in its leaves, evening primrose has important medicinal uses such as reducing premenstrual problems, accelerating wound healing, inhibiting platelet aggregation, relieving cardiovascular diseases and treating viral infections. The sap of the plant is used to treat warts, and the plant itself has been used as a charm against mental and spiritual diseases and poisonous animals. Its leaves show significant antibacterial activity against Staphylococcus aureus. It is also used in the treatment of poisoning, especially toxication caused by the consumption of alcoholic beverages, and in the treatment of arteriosclerosis and diseases caused by liver cell insufficiency. Low germination rate and slow growth are problems for the growth and propagation of evening primrose. In the present study, extracts were obtained from four parts (flowers, stems, seeds and leaves) of the evening primrose plant using a Soxhlet apparatus. To measure the antibacterial properties against MDR bacteria, microbial methods, including dilution, plate culture on nutrient agar medium and agar disc diffusion, were performed with Staphylococcus aureus and Escherichia coli on all four extracts. The maximum antibacterial activity was obtained with the dilution method in all extracts. In the plate culture method, antibacterial activity was observed for all extracts in the nutrient agar medium. The maximum diameter of the inhibition zone in the agar disc diffusion method was obtained with the leaf extract. Statistical analysis of the microbial results was performed with a one-way ANOVA test (SPSS).
Comparison by gas chromatography of the omega-3 content of Iranian and foreign commercial oil extracts with the extracts from the evening primrose samples showed that the stem extract contained the most omega-3 (oleic acid) and, compared with the aforementioned oil extracts, had the highest omega-3 content overall. The omega-3 content of the Iranian oil extracts was also much higher than that of the foreign oil extracts, although it should be noted that the foreign oil extracts had a more complete omega-3 composition than the Iranian ones.

Keywords: antibacterial activity, MDR bacteria, evening primrose, omega-3

Procedia PDF Downloads 92
525 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of a DNN by modifying its inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate input images into different categories; an adversarial attack slightly alters an image to move it over a decision boundary, causing the DNN to misclassify it. The FGSM attack obtains the gradient of the loss with respect to the image and updates the image once, based on this gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also a targeted attack, which is designed to make the network classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only on clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we utilize FGSM training as a defense method, the classification accuracy greatly improves, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve classification accuracy under adversarial attacks, we can use the stronger PGD training method, which improves accuracy over FGSM training by 2.7% under FGSM attacks and 18.4% under PGD attacks. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. Overall, PGD attacks and defenses are significantly more effective than their FGSM counterparts.
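The one-step FGSM update and the iterated, projected PGD update described above can be sketched on a toy linear classifier standing in for the DNN. The weights, input, epsilon and step size below are illustrative assumptions, not the paper's MNIST setup:

```python
import numpy as np

# Toy linear classifier p(y=1|x) = sigmoid(w.x + b) standing in for the DNN.
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y_true, eps):
    """One-step FGSM: move x by eps along the sign of the loss gradient.
    For cross-entropy with a linear model, d(loss)/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

def pgd(x, y_true, eps, alpha=0.25, steps=10):
    """PGD: repeated small FGSM steps, projected back into the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y_true, alpha)        # small gradient-sign step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

x = np.array([1.0, 0.5])        # clean input, classified as class 1 (w.x = 1.5)
print(predict(x))
x_adv = fgsm(x, y_true=1, eps=1.0)
print(predict(x_adv))           # the perturbation flips the prediction
```

Adversarial (FGSM or PGD) training then simply folds such perturbed examples into the training loop in place of, or alongside, the clean ones.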

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 186
524 Optimization Based Design of Decelerating Duct for Pumpjets

Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan

Abstract:

Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. They are used so often because they offer higher relative efficiency at high speeds and better cavitation and acoustic performance than their rivals. A pumpjet is composed of a rotor, a stator and a duct, and there are two pumpjet configurations, depending on the desired hydrodynamic characteristics: with an accelerating duct or with a decelerating duct. A pumpjet with an accelerating duct is used on cargo ships, where it works at low speeds and high loading conditions. This type of pumpjet maximizes thrust by reducing the pressure of the fluid through the duct and ejecting the fluid with high momentum. For decelerating-duct pumpjets, on the other hand, the main consideration is to prevent the occurrence of cavitation by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating-duct systems are used on noise-sensitive vehicles where acoustic performance is vital. Duct design therefore becomes a crucial step in pumpjet design. In this study, the aim is to optimize the duct geometry of a decelerating-duct pumpjet for a high-speed underwater vehicle using proper optimization tools. The target of this optimization process is a duct design that maximizes the fluid pressure around the rotor region, to prevent cavitation, while minimizing the drag force. Two main optimization techniques could be utilized for this process: parameter-based optimization and gradient-based optimization. While a parameter-based algorithm offers larger changes in the geometry of interest, which allows the user to approach the desired geometry, a gradient-based algorithm deals with minor local changes in the geometry.
In parameter-based optimization, the geometry is parameterized first. Then, by defining upper and lower limits for these parameters, a design space is created. Finally, with a proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercially coded parameter-based optimization algorithm is used. To parameterize the geometry, the duct is represented by B-spline curves and their control points. These control points have limits on their x and y coordinates; by respecting these limits, the design space is generated.
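The parameter-based workflow described above (parameterize, bound, sample the design space, evaluate) can be sketched in miniature. In the sketch below, the five control-point bounds and the analytic objective standing in for the CFD evaluation are illustrative assumptions, and simple random search stands in for the commercial optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameterization: y-coordinates (radii) of 5 B-spline control
# points describing the duct's inner profile, with x-stations held fixed.
# The upper and lower limits define the design space.
y_lower = np.full(5, 0.40)
y_upper = np.full(5, 0.60)

def objective(y_ctrl):
    """Placeholder standing in for the real CFD evaluation: penalize deviation
    of the mid-duct radius from a pressure-friendly target (cavitation proxy)
    plus profile waviness (a crude drag proxy)."""
    pressure_term = (y_ctrl[2] - 0.5) ** 2
    drag_term = np.sum(np.diff(y_ctrl, 2) ** 2)   # squared second differences
    return pressure_term + drag_term

# Parameter-based optimization by random search over the bounded design space.
best_y, best_f = None, np.inf
for _ in range(500):
    y = rng.uniform(y_lower, y_upper)             # candidate inside the limits
    f = objective(y)
    if f < best_f:
        best_y, best_f = y, f

print(best_f)
```

The real study replaces the candidate generator with the commercial algorithm and the placeholder objective with flow analyses, but the structure (bounded parameters defining a design space that is searched for the best-scoring geometry) is the same.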

Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization

Procedia PDF Downloads 195