Search results for: subjective bias detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4703

773 Identification and Isolation of E. coli O157:H7 From Water and Wastewater of Shahrood and Neka Cities by PCR Technique

Authors: Aliasghar Golmohammadian, Sona Rostampour Yasouri

Abstract:

One of the most important intestinal pathogenic strains is E. coli O157:H7. This pathogenic bacterium is transmitted to humans through water and food. E. coli O157:H7 is the main cause of hemorrhagic colitis (HC), hemolytic uremic syndrome (HUS), thrombotic thrombocytopenic purpura (TTP) and, in some cases, death. Since E. coli O157:H7 can be transmitted through the consumption of different foods, including vegetables, agricultural products, and fresh dairy products, this study aims to identify and isolate E. coli O157:H7 from water and wastewater by the PCR technique. One hundred twenty samples of water and wastewater were collected in sterile Falcon tubes from the cities of Shahrood and Neka. After appropriate centrifugation and cultivation on the selective medium Sorbitol MacConkey Agar (SMAC) and other diagnostic media for E. coli O157:H7, the samples were checked for colony formation, and the plates were examined macroscopically and microscopically. The necessary phenotypic tests were then performed on the colonies and, after DNA extraction, PCR was carried out with specific primers for the rfbE and stx2 genes. Five samples (4.2%) of the 120 examined were determined positive by PCR, as shown by the bands for the targeted genes on agarose gel electrophoresis. PCR is a fast and accurate method for identifying E. coli O157:H7. Since E. coli is a resilient bacterium that survives in water and food for weeks to months, the PCR technique enables rapid detection of contaminated water. Moreover, it helps the community control and prevent the transfer of the bacterium to clean and underground water, agricultural products, and even dairy products.

Keywords: E. coli O157:H7, PCR, water, wastewater

Procedia PDF Downloads 58
772 Virtual Process Hazard Analysis (PHA) of a Nuclear Power Plant (NPP) Using the Failure Mode and Effects Analysis (FMEA) Technique

Authors: Lormaine Anne A. Branzuela, Elysa V. Largo, Monet Concepcion M. Detras, Neil C. Concibido

Abstract:

Electricity demand is still increasing, and the Philippine government is currently investigating the feasibility of operating the Bataan Nuclear Power Plant (BNPP) to address the country’s energy problem. However, the lack of process safety studies on BNPP focused on the effects of hazardous substances on the integrity of its structures, equipment, and other components has made operationalization of the plant questionable to the public. The three major nuclear power plant incidents - TMI-2, Chernobyl, and Fukushima - have made many people hesitant to include nuclear energy in the energy matrix. This study focused on the safety evaluation of the possible operation of a nuclear power plant installed with a Pressurized Water Reactor (PWR), similar to BNPP. Failure Mode and Effects Analysis (FMEA) is one of the Process Hazard Analysis (PHA) techniques used to identify equipment failure modes and minimize their consequences. Using the FMEA technique, this study recognized 116 different failure modes in total. Computation and ranking of the risk priority number (RPN) and criticality rating (CR) showed that failure of the reactor coolant pump due to earthquakes is the most critical failure mode. This hazard scenario could lead to a nuclear meltdown and radioactive release, as identified by the FMEA team. Safeguards and risk reduction strategies to lower the RPN and CR were recommended such that the effects are minimized, the likelihood of occurrence is reduced, and failure detection is improved.
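The ranking step described above can be sketched in a few lines: in classical FMEA, the Risk Priority Number is the product of severity, occurrence, and detectability ratings (each typically on a 1-10 scale). The failure modes and scores below are illustrative placeholders, not the study's actual data:

```python
# Minimal FMEA risk-ranking sketch. Each failure mode is scored 1-10 for
# severity (S), occurrence (O), and detectability (D); the Risk Priority
# Number is their product, and modes are ranked from highest RPN down.
failure_modes = [
    # (description, S, O, D) -- hypothetical values for illustration
    ("Reactor coolant pump failure (seismic)", 10, 4, 7),
    ("Steam generator tube leak",               8, 3, 4),
    ("Feedwater valve stuck closed",            6, 5, 5),
]

def rpn(severity, occurrence, detectability):
    """Risk Priority Number = S * O * D."""
    return severity * occurrence * detectability

ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
```

Risk reduction then targets whichever factor is cheapest to move: lowering occurrence (safeguards) or detectability scores (better monitoring) directly lowers the RPN.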

Keywords: PHA, FMEA, nuclear power plant, Bataan Nuclear Power Plant

Procedia PDF Downloads 123
771 Complex Management of Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy

Authors: Abdullah A. Al Qurashi, Hattan A. Hassani, Bader K. Alaslap

Abstract:

Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy (ARVD/C) is an uncommon, inheritable cardiac disorder characterized by the progressive substitution of cardiac myocytes by fibro-fatty tissues. This pathologic substitution predisposes patients to ventricular arrhythmias and right ventricular failure. The underlying genetic defect predominantly involves genes encoding for desmosome proteins, particularly plakophilin-2 (PKP2). These aberrations lead to impaired cell adhesion, heightening the susceptibility to fibrofatty scarring under conditions of mechanical stress. Primarily, ARVD/C affects the right ventricle, but it can also compromise the left ventricle, potentially leading to biventricular heart failure. Clinical presentations can vary, spanning from asymptomatic individuals to those experiencing palpitations, syncopal episodes, and, in severe instances, sudden cardiac death. The establishment of a diagnostic criterion specifically tailored for ARVD/C significantly aids in its accurate diagnosis. Nevertheless, the task of early diagnosis is complicated by the disease's frequently asymptomatic initial stages, and the overall rarity of ARVD/C cases reported globally. In some cases, as exemplified by the adult female patient in this report, the disease may advance to terminal stages, rendering therapies like Ventricular Tachycardia (VT) ablation ineffective. This case underlines the necessity for increased awareness and understanding of ARVD/C to aid in its early detection and management. Through such efforts, we aim to decrease morbidity and mortality associated with this challenging cardiac disorder.

Keywords: arrhythmogenic right ventricular dysplasia, cardiac disease, interventional cardiology, cardiac electrophysiology

Procedia PDF Downloads 57
770 An Alternative to Problem-Based Learning in a Post-Graduate Healthcare Professional Programme

Authors: Brogan Guest, Amy Donaldson-Perrott

Abstract:

The Master of Physician Associate Studies (MPAS) programme at St George’s, University of London (SGUL), is an intensive two-year course that trains students to become physician associates (PAs). PAs are generalist healthcare providers who work in primary and secondary care across the UK. PA programmes face the difficult task of preparing students to become safe medical providers in two short years. Our goal is to teach students to develop clinical reasoning early in their studies, and historically this has been done predominantly through problem-based learning (PBL). We have had increasing concern about student engagement in PBL and difficulty recruiting facilitators to maintain the low student-to-facilitator ratio it requires. To address this issue, we created ‘Clinical Application of Anatomy and Physiology (CAAP)’. These peer-led, interactive, problem-based, small-group sessions were designed to develop students’ clinical reasoning skills. The sessions were designed using the concept of Team-Based Learning (TBL). Students were divided into small groups, and each group completed a pre-session quiz consisting of difficult questions devised to assess students’ application of medical knowledge. The quiz was completed in small groups without access to external resources. After the quiz, students worked through a series of open-ended clinical tasks using all available resources. They worked at their own pace, and the sessions were peer-led rather than facilitator-driven. For a group of 35 students, two facilitators observed the sessions, which used an infinite-space whiteboard software. Each group member was encouraged to participate actively and work together to complete the 15-20 tasks. Each session ran for two hours and concluded with a post-session quiz identical to the pre-session quiz.
We obtained subjective feedback from students on their experience with CAAP and evaluated the objective benefit of the sessions through the quiz results. Qualitative feedback was generally positive, with students feeling the sessions increased engagement, clinical understanding, and confidence. They found the small-group aspect beneficial and the technology easy to use and intuitive. They also liked building a resource for their future revision, something unique to CAAP compared with PBL, which our students participate in weekly. Preliminary quiz results showed improvement from pre- to post-session; however, further statistical analysis will occur once all sessions are complete (the final session runs in December 2022) to determine significance. As a post-graduate healthcare professional programme, we have a strong focus on self-directed learning. Whilst PBL has been a mainstay of our curriculum since its inception, there are limitations and concerns about its future in view of student engagement and facilitator availability. Whilst CAAP is not TBL, it draws on the benefits of peer-led, small-group work with pre- and post-session team-based quizzes. The pilot of these sessions has shown that students are engaged by CAAP and can make significant progress in clinical reasoning in a short amount of time, and that this can be achieved with a high student-to-facilitator ratio.

Keywords: problem based learning, team based learning, active learning, peer-to-peer teaching, engagement

Procedia PDF Downloads 77
769 Clinical Signs of Neonatal Calves in Experimental Colisepticemia

Authors: Samad Lotfollahzadeh

Abstract:

Escherichia coli (E. coli) is the bacterium most often isolated from the blood circulation of septicemic calves. Given the prevalence of septicemia in animals and its economic importance in veterinary practice, a better understanding of the changes in clinical signs following disease may contribute to early detection of the disorder. The present study was carried out to detect changes in clinical signs in calves with sepsis induced by E. coli. Colisepticemia was induced in 10 twenty-day-old healthy Holstein-Friesian calves by intravenous injection of 1.5 × 10⁹ colony-forming units (cfu) of the O111:H8 strain of E. coli. Clinical signs including rectal temperature, heart rate, respiratory rate, shock, appetite, sucking reflex, feces consistency, general behavior, dehydration, and standing ability were recorded in the experimental calves during the 24 hours after induction of colisepticemia. Blood cultures were also taken from the calves four times during the experiment. Repeated-measures ANOVA was used to assess changes in the calves’ clinical signs, and values of P ≤ 0.05 were considered statistically significant. Mean values of rectal temperature and heart rate, as well as median values of respiratory rate, appetite, suckling reflex, standing ability, and feces consistency, increased significantly during the study (P < 0.05). The median shock score did not increase significantly (P > 0.05). The results show that the total clinical-sign score in calves with experimental colisepticemia increased significantly, although the scores of some clinical signs, such as shock, did not change significantly.

Keywords: calves, clinical signs scoring, E. coli O111:H8, experimental colisepticemia

Procedia PDF Downloads 370
768 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that initialization of the shape model is not sufficiently close to the target, especially when dealing with abnormal shapes in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of the model-based segmentation. Firstly, a robust and efficient detector based on Hough forest is proposed to localize cardiac feature points, and such points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images that are mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experiment results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method is able to achieve more accurate and efficient segmentation results and is applicable to unusual shapes of heart with cardiac diseases, such as left atrial enlargement.

Keywords: hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 334
767 Calculation of the Normalized Difference Vegetation Index and the Spectral Signature of Coffee Crops: Benefits of Image Filtering on Mixed Crops

Authors: Catalina Albornoz, Giacomo Barbieri

Abstract:

Crop monitoring has been shown to reduce vulnerability to spreading pests and pathologies in crops. Remote sensing with Unmanned Aerial Vehicles (UAVs) has made crop monitoring more precise, cost-efficient, and accessible. Nowadays, remote monitoring involves calculating maps of vegetation indices using software that takes either true-color (RGB) or multispectral images as input. These maps are then used to segment the crop into management zones. Finally, the spectral signature of a crop (the reflected radiation as a function of wavelength) can be used as an input for decision-making and crop characterization. The calculation of vegetation indices using software such as Pix4D is highly precise for monoculture plantations. However, this paper shows that using such software on mixed crops may lead to errors resulting in an incorrect segmentation of the field. The authors propose to filter out all elements other than the main crop before calculating vegetation indices and the spectral signature. A filter based on the Sobel edge-detection method is applied to a coffee crop. Results show that segmentation into management zones changes with respect to the traditional, unfiltered case. In particular, the values of the spectral signature change by up to 17% per spectral band. Future work will quantify the benefits of filtering by comparing in situ measurements with the vegetation indices obtained through remote sensing.

Keywords: coffee, filtering, mixed crop, precision agriculture, remote sensing, spectral signature

Procedia PDF Downloads 381
766 Preparation of Frozen Bivalent Babesial (Babesia bovis and Babesia bigemina) Vaccine from Field Isolates and Evaluation of Its Efficacy in Calves

Authors: Muhammad Fiaz Qamar, Ahmad Faraz, Muhammad Arfan Zaman, Kazim Ali, Waleed Akram

Abstract:

Babesiosis is regarded as the most important arthropod-transmitted disease of cattle. In Pakistan, its prevalence is up to 29% in the cattle and buffalo populations of different regions. Cattle develop a long-lasting, durable immunity after infection with B. bovis, B. bigemina, or Babesia divergens, and this is exploited in a few countries to immunize cattle with an anti-babesiosis vaccine. Producing a frozen vaccine allows complete testing after production of each batch; however, once thawed, its shelf life is reduced, and frozen vaccines are more difficult to transport and more expensive to produce than chilled vaccines. The potential for contamination of blood-derived vaccines makes pre-production and post-production quality control necessary. For production of the trial master seed of a whole-blood frozen bivalent Babesia (Babesia bovis and Babesia bigemina) vaccine, 100 blood samples from cattle suspected to be Babesia-positive were collected and processed for isolation, microscopic detection, and confirmation by PCR. Vaccine passages were carried out in live calves to reduce parasitemia. After 8 passages, Babesia parasitemia was reduced from 80% to 15%. Blood from an infected donor calf was collected by jugular cannulation using preservative-free lithium heparin as an anticoagulant (5 International Units (IU) heparin/ml blood). In the laboratory, the parasite-containing blood was mixed in equal volumes with 3 M glycerol in PBS supplemented with 5 mM glucose (final glycerol concentration 1.5 M) at 37°C. The mixture was equilibrated at 37°C for 30 minutes and dispensed into the required containers (e.g., 5 ml cryovials).

Keywords: distribution, Babesia, primer sequences, PCV

Procedia PDF Downloads 97
765 Radio-Frequency Technologies for Sensing and Imaging

Authors: Cam Nguyen

Abstract:

Rapid, accurate, and safe sensing and imaging of physical quantities or structures finds many applications and is of significant interest to society. Sensing and imaging using radio-frequency (RF) techniques, in particular, has gone through significant development and subsequently established itself as a unique territory in the sensing world. RF sensing and imaging has played a critical role in providing sensing and imaging abilities beyond human capabilities, benefiting both civilian and military applications - for example, from sensing abnormal conditions underneath structural surfaces to detection and classification of concealed items, hidden activities, and buried objects. We present the development of several sensing and imaging systems implementing RF technologies such as ultra-wideband (UWB) impulse, synthetic-pulse, and interferometry, all fabricated completely as RF integrated circuits. The UWB impulse system operates over multiple pulse durations from 450 to 1170 ps with 5.5-GHz RF bandwidth. It performs well in tests on various samples, demonstrating its usefulness for subsurface sensing. The synthetic-pulse system operating from 0.6 to 5.6 GHz can accurately assess subsurface structures. The synthetic-pulse system operating from 29.72 to 37.7 GHz demonstrates abilities for various surface and near-surface sensing tasks such as profile mapping, liquid-level monitoring, and anti-personnel mine locating. The interferometric system operating at 35.6 GHz demonstrates multi-functional capability for measurement of displacements and slow velocities. These RF sensors are attractive and useful for various surface and subsurface sensing applications. This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
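As a rough illustration of why the 5.5-GHz RF bandwidth matters for subsurface sensing: the range resolution of a pulsed radar is commonly approximated as c/(2B), where B is the bandwidth. The sketch below assumes free-space propagation; inside a dielectric medium the resolution improves by the square root of the relative permittivity:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(bandwidth_hz, rel_permittivity=1.0):
    """Approximate radar range resolution dR = c / (2 * B * sqrt(eps_r))."""
    return C / (2.0 * bandwidth_hz * rel_permittivity ** 0.5)

# Free-space resolution for the 5.5-GHz-bandwidth UWB impulse system:
print(range_resolution(5.5e9))        # roughly 0.027 m, i.e. ~2.7 cm
# In dry soil (eps_r ~ 4, an illustrative value) it would be about half that:
print(range_resolution(5.5e9, 4.0))
```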

Keywords: RF sensors, radars, surface sensing, subsurface sensing

Procedia PDF Downloads 308
764 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of the L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops, and keeping an optimum stock of spare parts during the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The rapid evolution of data science in recent years makes it possible to analyze the key and most critical machine parameters in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility is assessed of using Specific Energy versus data science applied to parameters such as torque, penetration, and contact force to predict the behavior and status of the cutting tools. The results obtained with both techniques are analyzed and verified against the wear and field conditions observed in the excavation in order to determine their predictive capacity. In conclusion, the application of digital tools and calculation algorithms for analyzing wear of cutting-head elements, compared with purely empirical methods, allows early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
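The abstract does not define its Specific Energy parameter; one common definition for rotary excavation, due to Teale, sums the thrust work and torque work per unit of excavated volume: SE = F/A + 2πNT/(A·v). The sketch below is a plausible implementation under that assumption; the parameter names, units, and example values are ours, not the paper's:

```python
import math

def specific_energy(thrust_kn, torque_knm, rpm, advance_rate_mm_min, diameter_m):
    """Specific energy in MJ/m^3, after Teale: SE = F/A + 2*pi*N*T/(A*v).

    F: total thrust [kN], T: cutterhead torque [kN*m], N: rotation speed
    (rev/min, converted to rev/s), v: advance rate (mm/min, converted to
    m/s), A: excavated face area [m^2]. kN/m^2 is kPa, i.e. kJ/m^3.
    """
    area = math.pi * diameter_m ** 2 / 4.0
    v = advance_rate_mm_min / 1000.0 / 60.0   # m/s
    n = rpm / 60.0                            # rev/s
    se_kj = thrust_kn / area + 2.0 * math.pi * n * torque_knm / (area * v)
    return se_kj / 1000.0                     # MJ/m^3

# Hypothetical EPB shield operating point:
print(specific_energy(thrust_kn=10_000, torque_knm=2_000, rpm=2,
                      advance_rate_mm_min=30, diameter_m=6.2))
```

A rising trend of SE at constant ground conditions is the kind of signal the paper correlates with cutting-tool wear.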

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 43
763 The Economic Burden of Breast Cancer on Women in Nigeria: Implication for Socio-Economic Development

Authors: Tolulope Allo, Mofoluwake P. Ajayi, Adenike E. Idowu, Emmanuel O. Amoo, Fadeke Esther Olu-Owolabi

Abstract:

Breast cancer, which in the past was more prevalent in Europe and America, is today gradually being mirrored across the world, with a greater economic burden on low- and middle-income countries (LMICs). Breast cancer is the most common cancer among women globally, and current studies show that a woman dies with a diagnosis of breast cancer every thirteen minutes. The economic cost of breast cancer is overwhelming, particularly for developing economies: while it causes billions of dollars in losses of national income, it pushes millions of people below the poverty line. This study examined the economic burden of breast cancer on Nigerian women, its impact on their standard of living, and its effects on Nigeria’s socio-economic development. The study adopts a qualitative research approach, using in-depth interviews to elicit information from respondents with cancer experience from the southern part of Nigeria. Respondents were women of reproductive age (15-49 years) who had experienced and survived cancer, as well as those currently receiving treatment. Excerpts from the interviews revealed that the cost of treatment is one of the major factors contributing to late presentation of breast cancer among women, as many of them cannot afford to pay for their own treatment. The study also revealed that many women prefer to explore other options, such as herbal treatments and spiritual consultations, which are less expensive and more affordable. The study therefore concludes that breast cancer diagnosis and treatment should be subsidized by the government in order to facilitate access and affordability, thereby promoting early detection and reducing the economic burden of treatment on women.

Keywords: breast cancer, development, economic burden, women

Procedia PDF Downloads 353
762 A Bayesian Parameter Identification Method for Thermorheological Complex Materials

Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider

Abstract:

Polymers have gained increasing interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for within the theory of linear viscoelasticity. In this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a single time-temperature superposition principle (TTSP); a methodology for identifying thermorheologically complex constitutive behavior is therefore needed. Dynamical Mechanical Thermal Analysis (DMTA) in tensile and shear mode as well as Differential Scanning Calorimetry (DSC) tests are carried out on the interlayer material ethylene-vinyl acetate (EVA). A novel Bayesian framework for the master-curve construction process as well as for the detection and parameter identification of the TTSPs, along with their associated Prony series, is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series in a Bayesian context is shown. We successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs, and the uncertainties arising in this process are well quantified. We find that EVA needs two TTSPs with two associated generalized Maxwell models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
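For readers unfamiliar with the ingredients named above: a generalized Maxwell model expresses the relaxation modulus as a Prony series, G(t) = G∞ + Σ gᵢ·exp(−t/τᵢ), and a TTSP shift factor a_T (here the WLF form) maps measurements at one temperature onto a master curve at a reference temperature. The sketch below is a generic forward evaluation under those standard definitions, not the paper's identified EVA parameters; the so-called universal WLF constants and the Prony terms are placeholders:

```python
import math

def wlf_shift(temp_c, t_ref_c, c1=17.44, c2=51.6):
    """WLF log10 shift factor: log10(a_T) = -c1*(T-Tref)/(c2+T-Tref).
    c1, c2 are the 'universal' WLF constants, used only as placeholders."""
    dt = temp_c - t_ref_c
    return -c1 * dt / (c2 + dt)

def prony_modulus(t, g_inf, terms, temp_c=20.0, t_ref_c=20.0):
    """Relaxation modulus G(t) = g_inf + sum g_i * exp(-t_red / tau_i),
    evaluated at reduced time t_red = t / a_T."""
    a_t = 10.0 ** wlf_shift(temp_c, t_ref_c)
    t_red = t / a_t
    return g_inf + sum(g * math.exp(-t_red / tau) for g, tau in terms)

terms = [(5.0, 1e-2), (2.0, 1e1)]   # (g_i [MPa], tau_i [s]) -- illustrative
print(prony_modulus(0.0, 0.5, terms))   # instantaneous modulus: 0.5 + 5 + 2
```

In the Bayesian setting of the paper, the gᵢ, τᵢ, and shift-function parameters become random variables whose posterior is inferred from the DMTA data, which is what yields the uncertainty quantification of the Prony series.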

Keywords: Bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complexity

Procedia PDF Downloads 259
761 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited to natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
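The objective being maximized, cumulative probability of successful target detection, has a simple closed form for a single target under independent glimpses, which the toy sketch below illustrates; the full MIP optimizes this quantity over all agents' paths and is not reproduced here:

```python
from math import prod

def cumulative_detection_prob(glimpse_probs):
    """Probability that a target is detected at least once over a sequence
    of independent looks with per-look detection probabilities p_i:
        P = 1 - prod(1 - p_i)
    Each extra pass over a cell yields a diminishing marginal gain."""
    return 1.0 - prod(1.0 - p for p in glimpse_probs)

# Three passes over the same cell, each with a 30% chance of detection:
print(cumulative_detection_prob([0.3, 0.3, 0.3]))   # 1 - 0.7**3 = 0.657
```

The diminishing returns of repeated visits are what make the path-planning trade-off (revisit a promising cell vs. cover a new one) nontrivial and worth formulating as an optimization problem.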

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 366
760 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of those indices across an area of interest. Vulnerability studies have a number of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and no single framework or methodology has proven to serve best in a given environment; indicators vary greatly with the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting drivers of vulnerability in space. There is thus an urgent need to develop the methodology of vulnerability studies further towards a common framework, which is one aim of this paper. We introduce a social vulnerability approach which, compared with bio-physical or sectoral vulnerability approaches, is relatively well developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area.
The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the set of interrelated variables to a smaller number of less-correlated components that are likewise added to form a composite index. We test these two approaches to index construction, as well as two different metrics of input variables, on New York City and compare the outcomes for the five boroughs. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY may need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e., the high-density areas of Manhattan, central Brooklyn, central Queens, and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards, as is conceivable, for example, during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of recent planning practice in NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analysis points towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
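The two index-construction approaches can be sketched side by side. The snippet below uses synthetic data standing in for standardized census indicators; the indicator names, weights, and the choice of two retained components are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows = spatial units (e.g. census tracts), columns = indicators
# (e.g. % elderly, % in poverty, % minority) -- synthetic placeholders.
X = rng.normal(size=(100, 3))
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each indicator

# Approach 1 -- additive: weighted sum of standardized indicators.
weights = np.array([0.3, 0.4, 0.3])        # expert-judgment weights (assumed)
additive_index = Z @ weights

# Approach 2 -- PCA: project onto the leading principal components
# of the indicator covariance matrix, then sum the component scores.
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
components = eigvecs[:, order[:2]]         # retain two components
pca_index = (Z @ components).sum(axis=1)
```

Mapping `additive_index` and `pca_index` over the same units is exactly the comparison the paper performs: where the two maps disagree, the choice of aggregation method, not the underlying data, is driving the vulnerability picture.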

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 388
759 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector

Authors: Mayesha Tahsin, A.S. Mollah

Abstract:

Recently, a 1.5 × 1.5 inch NaI(Tl) detector-based gamma-ray spectroscopy system was installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm thickness. An important consideration in any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample being measured. Natural background gamma radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with the present system is that it is not suitable for measurements of radioactivity with a large sample container such as a Petri dish or Marinelli beaker geometry. When any laboratory installs a new detector or/and a new shield, it must first carry out quality and performance tests for the detector and shield. This paper describes a new portable lead shielding system that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding is calculated using the shielding equation I = I₀e^(−µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected to accommodate the large sample container. The detector will be surrounded by a 4π-geometry, low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to remove the characteristic X-rays from the lead shield.
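The attenuation equation above is straightforward to evaluate. The sketch below applies it to the existing 22 mm lead shield; the attenuation coefficient is an illustrative round number for lead at a mid-range gamma energy, not a value taken from the paper:

```python
import math

def transmitted_intensity(i0, mu_per_cm, thickness_cm):
    """Narrow-beam attenuation: I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu_per_cm * thickness_cm)

# Fraction of gammas passing 2.2 cm of lead, taking mu ~= 1.2 cm^-1
# (an illustrative linear attenuation coefficient for Pb near 662 keV):
fraction = transmitted_intensity(1.0, 1.2, 2.2)
print(f"{fraction:.3%}")
```

In practice µ is energy-dependent, so a shield sized this way for one line (e.g. Cs-137) attenuates higher-energy background lines less strongly; tabulated coefficients should be looked up per energy.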

Keywords: shield, NaI (Tl) detector, gamma radiation, intensity, linear attenuation coefficient

Procedia PDF Downloads 147
758 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 151
757 SEM Detection of Folate Receptor in a Murine Breast Cancer Model Using Secondary Antibody-Conjugated, Gold-Coated Magnetite Nanoparticles

Authors: Yasser A. Ahmed, Juleen M Dickson, Evan S. Krystofiak, Julie A. Oliver

Abstract:

Cancer cells urgently need folate to support their rapid division. Folate receptors (FR) are over-expressed on a wide range of tumor cells, including breast cancer cells. FR are distributed over the entire surface of cancer cells but are polarized to the apical surface of normal cells. Targeting cancer cells via specific surface molecules such as folate receptors may be one strategy for killing cancer cells without hurting neighboring normal cells. The aim of the current study was to test a method for SEM detection of FR in a murine breast cancer cell model (4T1 cells) using secondary antibody conjugated to gold or gold-coated magnetite nanoparticles. 4T1 cells were suspended in RPMI medium with FR antibody and incubated with secondary antibody for fluorescence microscopy. The cells were cultured on 30 mm Thermanox coverslips for 18 hours, labeled with FR antibody, then incubated with secondary antibody conjugated to gold or gold-coated magnetite nanoparticles and processed for scanning electron microscopy (SEM) analysis. The fluorescence microscopy study showed strong punctate FR expression on the 4T1 cell membrane. With SEM, the labeling with gold or gold-coated magnetite conjugates showed a similar pattern. Specific labeling occurred in nanoparticle clusters, which are clearly visualized in backscattered electron images. The 4T1 tumor cell model may be useful for the development of FR-targeted tumor therapy using gold-coated magnetite nanoparticles.

Keywords: cancer cell, nanoparticles, cell culture, SEM

Procedia PDF Downloads 728
756 Evaluation of Firearm Injury Syndromic Surveillance in Utah

Authors: E. Bennion, A. Acharya, S. Barnes, D. Ferrell, S. Luckett-Cole, G. Mower, J. Nelson, Y. Nguyen

Abstract:

Objective: This study aimed to evaluate the validity of a firearm injury query in the Early Notification of Community-based Epidemics syndromic surveillance system. Syndromic surveillance data are used at the Utah Department of Health for early detection of and rapid response to unusually high rates of violence and injury, among other health outcomes. The query of interest was defined by the Centers for Disease Control and Prevention and used chief complaint and discharge diagnosis codes to capture initial emergency department encounters for firearm injury of all intents. Design: Two epidemiologists manually reviewed electronic health records of emergency department visits captured by the query from April-May 2020, compared results, and sent conflicting determinations to two arbiters. Results: Of the 85 unique records captured, 67 were deemed probable, 19 were ruled out, and two were undetermined, resulting in a positive predictive value of 75.3%. Common reasons for false positives included non-initial encounters and misleading keywords. Conclusion: Improving the validity of syndromic surveillance data would better inform outbreak response decisions made by state and local health departments. The firearm injury definition could be refined to exclude non-initial encounters by negating words such as “last month,” “last week,” and “aftercare”; and to exclude non-firearm injury by negating words such as “pellet gun,” “air gun,” “nail gun,” “bullet bike,” and “exit wound” when a firearm is not mentioned.
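The keyword-negation refinement suggested in the conclusion can be sketched as a simple text filter. The term lists below are hypothetical abbreviations of the idea, not the actual CDC query definition, which is considerably more elaborate.

```python
# Hypothetical keyword lists illustrating the proposed refinement:
# exclude non-firearm mechanisms and non-initial-encounter language
# before matching firearm terms.
FIREARM_TERMS = ["gunshot", "gsw", "firearm", "shot with gun"]
EXCLUDE_TERMS = ["pellet gun", "air gun", "nail gun", "bullet bike",
                 "last month", "last week", "aftercare"]

def is_probable_firearm_injury(chief_complaint: str) -> bool:
    """Return True if the free-text chief complaint looks like an
    initial firearm-injury encounter under the toy rules above."""
    text = chief_complaint.lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    return any(term in text for term in FIREARM_TERMS)

print(is_probable_firearm_injury("GSW to left leg"))            # initial encounter
print(is_probable_firearm_injury("injured by nail gun"))        # excluded mechanism
print(is_probable_firearm_injury("gunshot wound aftercare"))    # non-initial encounter
```

A production query would also handle negated phrases ("denies gunshot") and diagnosis codes, which this sketch omits.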

Keywords: evaluation, health information system, firearm injury, syndromic surveillance

Procedia PDF Downloads 164
755 Cultural Cognition and Voting: Understanding Values and Perceived Risks in the Colombian Population

Authors: Andrea N. Alarcon, Julian D. Castro, Gloria C. Rojas, Paola A. Vaca, Santiago Ortiz, Gustavo Martinez, Pablo D. Lemoine

Abstract:

Recently, electoral results across many countries have proven inconsistent with rational decision theory, which states that individuals make decisions by maximizing benefits and reducing risks. An alternative explanation has emerged: fear- and rage-driven voting has proven highly effective for political persuasion and mobilization. This phenomenon has been evident in the 2016 elections in the United States, the 2006 elections in Mexico, the 1998 elections in Venezuela, and the 2004 elections in Bolivia. In Colombia, it occurred recently in the 2016 plebiscite for peace and the 2018 presidential elections. The aim of this study is to explain this phenomenon using cultural cognition theory, which refers to the psychological predisposition individuals have to believe that their own and their peers' behavior is correct and, therefore, beneficial to the entire society. Cultural cognition refers to the tendency of individuals to fit perceived risks and factual beliefs into group-shared values; the Cultural Cognition Worldview Scales (CCWS) measure cultural perceptions along two dimensions: individualism-communitarianism and hierarchy-egalitarianism. The former refers to attitudes towards a social ordering in which individuals are expected to guarantee their own wellbeing without society's or government's intervention, while the latter refers to attitudes towards social dominance based on conspicuous and static characteristics (sex, ethnicity, or social class). A probabilistic national sample was obtained from different polls by the consulting and public opinion company Centro Nacional de Consultoría. Sociodemographic data were obtained along with CCWS scores; a subjective measure of left-right ideological placement and vote intention for the 2019 mayoral elections were also included in the questionnaires.
Finally, the question “In your opinion, what is the greatest risk Colombia is facing right now?” was included to identify perceived risks in the population. Preliminary results show that Colombians are distributed largely among hierarchical communitarians and egalitarian individualists (30.9% and 31.7%, respectively), and to a lesser extent among hierarchical individualists and egalitarian communitarians (19% and 18.4%, respectively). Males tended to be more hierarchical (p < .000) and communitarian (p = .009) than females. ANOVAs revealed statistically significant differences between groups (quadrants) for level of schooling, left-right ideological orientation, and stratum (p < .000 for all), and proportion tests revealed statistically significant differences between age groups (p < .001). Differences and distributions for vote intention and perceived risks are still being processed, and results are yet to be analyzed. Results show that Colombians are differentially distributed among quadrants with regard to sociodemographic data and left-right ideological orientation. These preliminary results indicate that this study may shed some light on why Colombians vote the way they do, and future qualitative data will show the fears emerging from the identified values in the CCWS and their relation to vote intention.

Keywords: communitarianism, cultural cognition, egalitarianism, hierarchy, individualism, perceived risks

Procedia PDF Downloads 141
754 The Analysis of Noise Harmfulness in Public Utility Facilities

Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch

Abstract:

The main purpose of the study is to perform measurements and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing, and the number of young people in these statistics is the most alarming. The majority of scientific research in the field of hearing protection and noise prevention concerns industrial and road traffic noise as the source of health problems. As a result, corresponding standards and regulations defining noise level limits are enforced. However, there is another field uncovered by profound research: leisure time. Public utility facilities such as clubs, shopping malls, sport facilities or concert halls all generate high-level noise, yet remain outside proper legal control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above, and even for traffic, railway and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often the responsibility of people with little knowledge of hearing protection, unaware of the risk noise pollution poses. Exposure to high sound levels in clubs, cinemas, at concerts and sports events may result in progressive hearing loss, especially among young people, the main target group of such facilities and events. The first step to change this situation and raise general awareness is to perform reliable measurements whose results will emphasize the significance of the problem. This project presents the results of more than a hundred measurements, performed in most types of public utility facilities in Poland.
As the most suitable measuring instruments for such research, personal noise dosimeters were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description covering the type of the sound source, the size and furnishing of the room, and a subjective sound level evaluation. In the absence of a direct reference point for the interpretation of the data, the limits specified in EU Directive 2003/10/EC were used for comparison. They set the maximum sound level values for workers in relation to the length of their working time. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people downplay the problem, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without any action, in the foreseeable future the majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on society as a whole.
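The comparison against Directive 2003/10/EC can be made concrete with the 3 dB exchange rate underlying its LEX,8h exposure limit value of 87 dB(A): every 3 dB above the limit halves the permissible daily exposure time. A minimal sketch; the 100 dB(A) example level is illustrative, not a measured value from this project.

```python
def permissible_hours(leq_dba: float, limit_dba: float = 87.0) -> float:
    """Permissible daily exposure time (hours) for a given equivalent
    sound level, using the 3 dB exchange rate behind the LEX,8h
    exposure limit of Directive 2003/10/EC (8 h at the limit level)."""
    return 8.0 * 2.0 ** ((limit_dba - leq_dba) / 3.0)

# A club night at an illustrative 100 dB(A):
print(f"Permissible exposure at 100 dB(A): {permissible_hours(100.0):.2f} h")
```

At 100 dB(A) the permissible time drops to well under half an hour, which is the kind of gap between leisure-time exposure and occupational limits the measurements above highlight.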

Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities

Procedia PDF Downloads 219
753 Fatigue Crack Growth Rate Measurement by Means of Classic Method and Acoustic Emission

Authors: V. Mentl, V. Koula, P. Mazal, J. Volák

Abstract:

Nowadays, acoustic emission is a widely recognized method of material damage investigation, mainly for the observation and evaluation of crack initiation and growth. This is highly important in structures, e.g. pressure vessels and large steam turbine rotors, applied in both classic and nuclear power plants. Nevertheless, acoustic emission signals must be correlated with the real crack progress in order to evaluate cracks and their growth by this non-destructive technique alone in real situations, and to reach reliable results when assessing the structures' safety and reliability and when evaluating the remaining lifetime. The main aim of this study was to propose a methodology for evaluating the early manifestations of fatigue cracks and their growth and thus to quantify the material damage by acoustic emission parameters. Specimens made of several steels used in the power producing industry were subjected to fatigue loading in the low- and high-cycle regimes. This study presents results of the crack growth rate measurement obtained by the classic compliance change method and by acoustic emission signal analysis. The experiments were realized in cooperation between laboratories of Brno University of Technology and the University of West Bohemia in Pilsen within the solution of the project of the Czech Ministry of Industry and Commerce: "A diagnostic complex for the detection of pressure media and material defects in pressure components of nuclear and classic power plants" and the project "New Technologies for Mechanical Engineering".

Keywords: fatigue, crack growth rate, acoustic emission, material damage

Procedia PDF Downloads 368
752 Urban Land Use Type Analysis Based on Land Subsidence Areas Using X-Band Satellite Image of Jakarta Metropolitan City, Indonesia

Authors: Ratih Fitria Putri, Josaphat Tetuko Sri Sumantyo, Hiroaki Kuze

Abstract:

Jakarta Metropolitan City is located on the northwest coast of West Java province, with a geographical location between 106º33’00”-107º00’00”E longitude and 5º48’30”-6º24’00”S latitude. The Jakarta urban area has suffered from land subsidence in several land use types, such as trading, industrial, and settlement areas. Land subsidence hazard is one of the consequences of urban development in Jakarta. This hazard is caused by intensive human activities in groundwater extraction and land use mismanagement. Geologically, the Jakarta urban area is mostly dominated by alluvial fan sediment. The objective of this research is to analyze Jakarta urban land use types in land subsidence zone areas. Producing safer land use and settlements in the land subsidence areas is very important, and spatial distributions of detected land subsidence are a necessary tool for land use management planning. For this purpose, the Differential Synthetic Aperture Radar Interferometry (DInSAR) method is used. DInSAR is complementary to ground-based methods such as leveling and global positioning system (GPS) measurements, yielding information over a wide coverage area even when the area is inaccessible. The data were fine-tuned using X-band satellite image data from 2010 to 2013 and land use mapping data. Our land use analysis shows that land subsidence occurred in the northern part of Jakarta Metropolitan City at rates varying from 7.5 to 17.5 cm/year, mainly in industrial and settlement land use areas.
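For context, the standard DInSAR relation converts unwrapped differential phase to line-of-sight displacement as d = −(λ/4π)Δφ. The sketch below assumes a typical X-band wavelength of ~3.1 cm; the abstract does not give processing details, so this is background, not the authors' pipeline.

```python
import math

def los_displacement_cm(delta_phi_rad: float,
                        wavelength_cm: float = 3.1) -> float:
    """Line-of-sight displacement from unwrapped differential
    interferometric phase: d = -(lambda / (4*pi)) * delta_phi.
    wavelength_cm ~ 3.1 cm is typical for X-band SAR sensors."""
    return -(wavelength_cm / (4.0 * math.pi)) * delta_phi_rad

# One full fringe (2*pi of differential phase) corresponds to
# half a wavelength of line-of-sight motion:
print(f"{abs(los_displacement_cm(2.0 * math.pi)):.2f} cm per fringe")
```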

Keywords: land use analysis, land subsidence mapping, urban area, X-band satellite image

Procedia PDF Downloads 270
751 Assessment of Airtightness Through a Standardized Procedure in a Nearly-Zero Energy Demand House

Authors: Mar Cañada Soriano, Rafael Royo-Pastor, Carolina Aparicio-Fernández, Jose-Luis Vivancos

Abstract:

The lack of insulation, along with the existence of air leakages, has a meaningful impact on the energy performance of buildings. Both lead to increases in energy demand through additional heating and/or cooling loads, and they cause thermal discomfort. In order to quantify these uncontrolled air currents, pressurization and depressurization tests can be performed. Among them, the Blower Door test is a standardized procedure to determine the airtightness of a space, characterizing the rate of air leakage through the envelope surface by calculating an air flow rate indicator. Low-energy buildings complying with the Passive House design criteria are required to achieve high levels of airtightness. Due to the invisible nature of air leakages, additional tools are often needed to identify where the infiltrations take place. Among them, infrared thermography is a valuable technique for this purpose, since it enables their detection. The aim of this study is to assess the airtightness of a typical Mediterranean dwelling house located in the Valencian orchard area (Spain) and restored under the Passive House standard, using the Blower Door test for this purpose. Moreover, the building energy performance modelling tools TRNSYS (TRaNsient System Simulation program) and TRNFlow (TRaNsient Flow) have been used to determine its energy performance, and the infiltrations were identified by means of infrared thermography. The low levels of infiltration obtained suggest that this house may comply with the Passive House standard.
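The air flow rate indicator mentioned above is commonly expressed as the air change rate at 50 Pa, n₅₀ = V̇₅₀/V, which the Passive House standard limits to 0.6 h⁻¹. A minimal sketch with illustrative numbers, not the measured values of this house:

```python
def n50(q50_m3_per_h: float, internal_volume_m3: float) -> float:
    """Air change rate at 50 Pa: measured leakage air flow at 50 Pa
    divided by the internal (net) volume of the dwelling."""
    return q50_m3_per_h / internal_volume_m3

# Illustrative figures: a 300 m3 dwelling with a Blower Door flow
# of 150 m3/h at 50 Pa.
rate = n50(150.0, 300.0)
print(f"n50 = {rate:.2f} 1/h; Passive House criterion (<= 0.6): {rate <= 0.6}")
```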

Keywords: airtightness, blower door, trnflow, infrared thermography

Procedia PDF Downloads 118
750 16S rRNA-Based Metagenomic Analysis of Palm Sap Samples From Bangladesh

Authors: Ágota Ábrahám, Md Nurul Islam, Karimane Zeghbib, Gábor Kemenesi, Sazeda Akter

Abstract:

Collecting palm sap as a food source is an everyday practice in some parts of the world. However, the consumption of palm juice has been associated with regular infections and epidemics in parts of Bangladesh. This is attributed to fruit-eating bats and other vertebrates or invertebrates native to the area contaminating the food with their body secretions during the collection process. The frequent intake of palm juice, whether as a processed food product or in its unprocessed form, is a common phenomenon in large areas. The range of pathogens capable of infecting humans through this practice is not yet fully understood. Additionally, the high sugar content of the liquid makes it an ideal culture medium for certain bacteria, which can easily propagate and potentially harm consumers. Rapid diagnostics, especially in remote locations, could mitigate health risks associated with palm juice consumption. The primary objective of this research is the rapid genomic detection and risk assessment of bacteria that may cause infections in humans through the consumption of palm juice. Utilizing state-of-the-art third-generation Nanopore metagenomic sequencing technology based on the 16S rRNA gene, we identified bacteria primarily involved in fermentation processes. The swift metagenomic analysis, coupled with the widespread availability and portability of Nanopore products (including real-time analysis options), proves advantageous for detecting harmful pathogens in food sources without relying on extensive industry resources and testing.

Keywords: raw date palm sap, NGS, metabarcoding, food safety

Procedia PDF Downloads 46
749 Understanding Beginning Writers' Narrative Writing with a Multidimensional Assessment Approach

Authors: Huijing Wen, Daibao Guo

Abstract:

Writing is thought to be the most complex facet of language arts. Assessing writing is difficult and subjective, and few scientifically validated assessments exist. Research has proposed evaluating writing using a multidimensional approach, including both qualitative and quantitative measures of handwriting, spelling, and prose. Given that narrative writing has historically been a staple of literacy instruction in primary grades and is one of the three major genres the Common Core State Standards require students to acquire starting in kindergarten, it is essential for teachers to understand how to measure beginning writers' writing development and sources of writing difficulties through narrative writing. Guided by theoretical models of early written expression and using empirical data, this study examines ways teachers can enact a comprehensive approach to understanding beginning writers' narrative writing through three writing rubrics developed for a Curriculum-Based Measurement (CBM). The goal is to help classroom teachers structure a framework for assessing early writing in primary classrooms. Participants in this study included 380 first-grade students from 50 classrooms in 13 schools in three school districts in a Mid-Atlantic state. Three writing tests were used to assess first graders' writing skills in relation to both transcription (i.e., handwriting fluency and spelling tests) and translational skills (i.e., a narrative prompt). First graders were asked to respond to a narrative prompt in 20 minutes. Grounded in theoretical models of early written expression and empirical evidence of key contributors to early writing, all written responses to the narrative prompt were coded three ways for different dimensions of writing: length, quality, and genre elements. To measure the quality of the narrative writing, a traditional holistic rating rubric was developed by the researchers based on the CCSS and the general traits of good writing.
Students' genre knowledge was measured using a separate analytic rubric for narrative writing. Findings showed that first graders had emerging and limited transcriptional and translational skills, with a nascent knowledge of genre conventions. The findings of the study provided support for the Not-So-Simple View of Writing in that fluent written expression, measured by length, and other important linguistic resources, measured by the overall quality and genre knowledge rubrics, are fundamental to early writing development. Our study echoed previous research findings on children's narrative development. The study has practical classroom application as it informs writing instruction and assessment. It offers practical guidelines for classroom instruction by providing teachers with a better understanding of first graders' narrative writing skills and knowledge of genre conventions. Understanding students' narrative writing gives teachers more insight into the specific strategies students might use during writing and their understanding of good narrative writing. Additionally, it is important for teachers to differentiate writing instruction given the individual differences shown by our multiple writing measures. Overall, the study sheds light on beginning writers' narrative writing, indicating the complexity of early writing development.

Keywords: writing assessment, early writing, beginning writers, transcriptional skills, translational skills, primary grades, simple view of writing, writing rubrics, curriculum-based measurement

Procedia PDF Downloads 69
748 Nanoscale Photo-Orientation of Azo-Dyes in Glassy Environments Using Polarized Optical Near-Field

Authors: S. S. Kharintsev, E. A. Chernykh, S. K. Saikin, A. I. Fishman, S. G. Kazarian

Abstract:

Recent advances in improving information storage performance are inseparably linked with the circumvention of fundamental constraints such as the superparamagnetic limit in heat-assisted magnetic recording, charge loss tolerance in solid-state memory, and the Abbe diffraction limit in optical storage. A substantial breakthrough in the development of nonvolatile storage devices with dimensional scaling has been achieved due to phase-change chalcogenide memory, which nowadays meets market needs to the greatest advantage. Further progress is aimed at the development of versatile nonvolatile high-speed memory combining the potentials of random access memory and archival storage. The well-established properties of light at the nanoscale empower us to record optical information with ultrahigh density, scaled down to a single molecule as the size of a pit. Indeed, diffraction-limited optics is able to record as much information as ~1 Gb/in². Nonlinear optical effects, for example two-photon fluorescence recording, allow one to decrease the extent of the pit even more, resulting in recording densities up to ~100 Gb/in². Going beyond the diffraction limit, thanks to the sub-wavelength confinement of light, pushes the pit size down to a single chromophore, which is, on average, ~1 nm in length. Thus, the memory capacity can be increased up to the theoretical limit of 1 Pb/in². Moreover, the field confinement provides faster recording and readout operations due to the enhanced light-matter interaction. This, in turn, leads to the miniaturization of optical devices and a decrease of the energy supply down to ~1 μW/cm². Intrinsic features of light such as multiple modes, mixed polarization, and angular momentum, in addition to the underlying optical and holographic tools for writing/reading, enrich the storage and encryption of optical information.
In particular, the finite extent of the near-field penetration, falling into a range of 50-100 nm, makes it possible to perform 3D volume (layer-to-layer) recording/readout of optical information. In this study, we demonstrate comprehensive evidence of the isotropic-to-homeotropic phase transition of an azobenzene-functionalized polymer thin film exposed to light and a dc electric field, using near-field optical microscopy and scanning capacitance microscopy. We unravel the near-field Raman dichroism of sub-10 nm thick epoxy-based side-chain azo-polymer films with polarization-controlled tip-enhanced Raman scattering. In our study, the orientation of azo-chromophores is controlled with a voltage-biased gold tip rather than with light polarization. Isotropic in-plane and homeotropic out-of-plane arrangements of azo-chromophores in a glassy environment can be distinguished with transverse and longitudinal optical near-fields. We demonstrate that both phases are unambiguously visualized by 2D mapping of their local dielectric properties with scanning capacitance microscopy. The stability of the polar homeotropic phase is strongly sensitive to the thickness of the thin film. We analyze the α-transition of the azo-polymer by detecting a temperature-dependent phase jump of an AFM cantilever when passing through the glass transition temperature. Overall, we anticipate further improvements in optical storage performance approaching the single-molecule level.

Keywords: optical memory, azo-dye, near-field, tip-enhanced Raman scattering

Procedia PDF Downloads 173
747 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the Magnetocardiography (MCG) signal, which is buried in noise, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement is divided into three parts. First, high-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
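As a baseline for comparison, classic first-order TV denoising with a majorization-minimization iteration can be sketched as follows. This is the traditional method the paper improves upon (the proposed algorithm replaces the first-order difference matrix with a second-order one and adds adaptive constraints), and the signal here is synthetic, not MCG data.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50, eps=1e-8):
    """1D total-variation denoising: minimize
    0.5*||y - x||^2 + lam*||D x||_1 by majorization-minimization,
    where D is the first-order difference matrix."""
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), axis=0)     # (n-1, n) first-difference matrix
    DDT = D @ D.T
    Dy = D @ y
    x = y.copy()
    for _ in range(n_iter):
        w = np.abs(D @ x) + eps        # majorizer weights |D x_k|
        z = np.linalg.solve(np.diag(w) / lam + DDT, Dy)
        x = y - D.T @ z                # MM update
    return x

# Piecewise-constant test signal with additive noise
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.full(50, 2.0)])
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = tv_denoise_mm(noisy, lam=1.0)
print("noisy MSE:   ", float(np.mean((noisy - clean) ** 2)))
print("denoised MSE:", float(np.mean((denoised - clean) ** 2)))
```

The dense solve is fine for a sketch; a practical implementation would exploit the banded structure of the system matrix.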

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 149
746 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at a station level and to examine the association between subway ridership and bus demand, incorporating the commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all have an impact on subway ridership. However, not only the endogenous relationship between bus and subway demand should be considered, but also the characteristics of commercial business within a subway station's sphere of influence and the integrated transit network topology. Regarding a statistical approach to estimating subway ridership at a station level, the endogeneity and heteroscedasticity issues that might arise in a subway ridership prediction model should therefore be taken into account. This study focused on both discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and developing a more precise subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours in one-hour time panels. The data for subway and bus ridership were collected from Seoul Smart Card data for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. Independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones.
As a result, it was found that bus ridership and subway ridership are endogenous to each other, with significantly positive coefficients, meaning that one transit mode can increase the other mode's ridership. In other words, subway and bus have a mutual relationship rather than a competitive one. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper comprise six types: medical, educational, recreational, financial, food service, and shopping. From the model results, a higher rate of medical, financial, shopping, and food service facilities leads to an increment of subway ridership at a station, while recreational and educational facilities are associated with lower subway ridership. Complex network theory was applied to estimate integrated network topology measures covering the entire Seoul transit network system, within a framework for assessing their impact on subway ridership. The centrality measures were found to be significant and showed a positive sign, indicating that higher centrality leads to more subway ridership at a station level. The results of model accuracy tests on out-of-sample data showed that the 3SLS model has a smaller mean square error than OLS, confirming that the 3SLS approach is plausible for estimating subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2017R1C1B2010175).
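The endogeneity correction at the heart of the model can be illustrated with a single-equation two-stage sketch on synthetic data; the paper's 3SLS additionally estimates the bus and subway equations jointly with a cross-equation covariance step. All variable names and coefficients below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic data: subway ridership depends on bus ridership, which is
# endogenous through a common shock u; an exogenous instrument z
# (think: bus service supply) shifts bus demand only.
z = rng.normal(size=n)                         # instrument
u = rng.normal(size=n)                         # common shock -> endogeneity
bus = 1.0 + 2.0 * z + u + rng.normal(size=n)
subway = 3.0 + 1.5 * bus + 2.0 * u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_ols = ols(np.column_stack([np.ones(n), bus]), subway)

# Stage 1: project the endogenous regressor on the instrument.
# Stage 2: regress subway ridership on the fitted values.
Z = np.column_stack([np.ones(n), z])
bus_hat = Z @ ols(Z, bus)
beta_2sls = ols(np.column_stack([np.ones(n), bus_hat]), subway)

print(f"OLS slope:  {beta_ols[1]:.2f} (biased upward by the common shock)")
print(f"2SLS slope: {beta_2sls[1]:.2f} (close to the true 1.5)")
```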

Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology

Procedia PDF Downloads 138
745 Detection of Trends and Break Points in Climatic Indices: The Case of Umbria Region in Italy

Authors: A. Flammini, R. Morbidelli, C. Saltalippi

Abstract:

The increase of air surface temperature at the global scale is a fact, with values around 0.85 ºC since the late nineteenth century, as is a significant change in the main features of the rainfall regime. Nevertheless, the detected climatic changes are not equally distributed over the world but exhibit specific characteristics in different regions. Therefore, studying the evolution of climatic indices in different geographical areas with a predefined standard approach becomes very useful for analyzing the existence of climatic trends and comparing results. In this work, a methodology to investigate climatic change and its effects on a wide set of climatic indices is proposed and applied at the regional scale to the case study of a Mediterranean area, the Umbria region in Italy. From the data of the available temperature stations, nine temperature indices have been obtained, and the existence of trends has been checked by applying the non-parametric Mann-Kendall test, while the non-parametric Pettitt test and the parametric Standard Normal Homogeneity Test (SNHT) have been applied to detect the presence of break points. In addition, to characterize the rainfall regime, data from 11 rainfall stations have been used, and a trend analysis has been performed on cumulative annual rainfall depth, daily rainfall, rainy days, and dry period length. The results show a general increase in all temperature indices, although the trend pattern depends on the index and station, and a general decrease in cumulative annual rainfall and average daily rainfall, with a temporal distribution of rainfall over the year that differs from the past.
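The two non-parametric tests named in the abstract have compact closed-form statistics. The sketch below is a minimal illustration on synthetic data, not the authors' code: the series lengths, trend slope, and step size are hypothetical, chosen only to show how the Mann-Kendall test flags a monotonic trend and the Pettitt test locates a break point.

```python
import math
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z, two-sided p). Ties ignored."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance of S, no ties
    z = 0.0 if s == 0 else (s - np.sign(s)) / math.sqrt(var_s)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, z, p

def pettitt(x):
    """Pettitt change-point test: returns (break position, K, approximate p)."""
    x = np.asarray(x, float)
    n = len(x)
    # U_t = sum of sign(x_j - x_i) over pairs split by position t
    U = np.array([np.sign(x[t:, None] - x[None, :t]).sum() for t in range(1, n)])
    k = int(np.argmax(np.abs(U)))
    K = float(abs(U[k]))
    p = 2.0 * math.exp(-6.0 * K ** 2 / (n ** 3 + n ** 2))
    return k + 1, K, p

rng = np.random.default_rng(7)
years = np.arange(1960, 2020)

# Synthetic warming temperature index: +0.02 units/year plus noise
temp_index = 0.02 * (years - 1960) + rng.normal(0.0, 0.3, size=years.size)
s, z, p = mann_kendall(temp_index)
print(f"Mann-Kendall: S={s:.0f}, Z={z:.2f}, p={p:.4f}")  # positive Z: upward trend

# Synthetic annual rainfall (mm) with a downward step after year 30
rain = np.concatenate([rng.normal(800.0, 40.0, 30), rng.normal(700.0, 40.0, 30)])
t_break, K, p_break = pettitt(rain)
print(f"Pettitt: break after position {t_break}, K={K:.0f}, p={p_break:.2e}")
```

The same two functions can be run station by station over each climatic index, which is essentially the prefixed standard workflow the abstract advocates for comparing regions.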

Keywords: climatic change, temperature, rainfall regime, trend analysis

Procedia PDF Downloads 111
744 Invasion of Pectinatella magnifica in Freshwater Resources of the Czech Republic

Authors: J. Pazourek, K. Šmejkal, P. Kollár, J. Rajchard, J. Šinko, Z. Balounová, E. Vlková, H. Salmonová

Abstract:

Pectinatella magnifica (Leidy, 1851) is an invasive freshwater animal that lives in colonies. A colony of Pectinatella magnifica (a gelatinous blob) can be up to several feet in diameter, and under favorable conditions it exhibits an extreme growth rate. Recently, European countries along the Elbe, Oder, Danube, Rhine, and Vltava rivers have confirmed the invasion of Pectinatella magnifica, including freshwater reservoirs in South Bohemia (Czech Republic). Our project (Czech Science Foundation, GAČR P503/12/0337) is focused on the biology and chemistry of Pectinatella magnifica. We have monitored the organism's occurrence in selected South Bohemian ponds and sandpits over the last years, collecting information about the physical properties of the surrounding water and sampling the colonies for various analyses (classification, maps of secondary metabolites, toxicity tests). Because the gelatinous matrix also hosts algae, bacteria, and cyanobacteria (co-habitants) during the colony's lifetime, in this contribution we also applied a high-performance liquid chromatography (HPLC) method for the determination of potentially present cyanobacterial toxins (microcystin-LR, microcystin-RR, nodularin). Results from the last three years of monitoring show that these toxins are under the limit of detection (LOD), so they do not yet represent a danger. The final goal of our study is to assess the toxicity risks related to freshwater resources invaded by Pectinatella magnifica and to understand the process of invasion, which may enable its control.
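Reporting a toxin as "under the limit of detection" presupposes an LOD estimated from the HPLC calibration curve; a common convention (e.g., the ICH 3.3·σ/S rule) derives it from the residual scatter of a linear fit. The sketch below uses entirely hypothetical microcystin-LR calibration numbers, not the project's measurements, to show the calculation.

```python
import numpy as np

# Hypothetical HPLC calibration: standard concentrations (ug/L) vs. peak areas
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([105.0, 198.0, 502.0, 1001.0, 1997.0])

slope, intercept = np.polyfit(conc, area, 1)          # linear calibration fit
resid = area - (slope * conc + intercept)
sigma = np.sqrt(resid @ resid / (len(conc) - 2))      # residual std (n-2 dof)

lod = 3.3 * sigma / slope    # limit of detection (ICH 3.3*sigma/S convention)
loq = 10.0 * sigma / slope   # limit of quantification

def report(peak_area):
    """Convert a sample peak area to a reported concentration string."""
    c = (peak_area - intercept) / slope
    return f"{c:.2f} ug/L" if c >= lod else "< LOD"

print(f"LOD = {lod:.3f} ug/L, LOQ = {loq:.3f} ug/L")
print(report(5.0))  # a tiny peak reports as "< LOD"
```

Concentrations that fall between LOD and LOQ are detectable but not reliably quantifiable; a monitoring program like the one described would typically flag those separately rather than report a numeric value.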

Keywords: cyanobacteria, fresh water resources, Pectinatella magnifica invasion, toxicity monitoring

Procedia PDF Downloads 236