Search results for: real estate price
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6330

4530 Fatigue Truck Modification Factor for Design Truck (CL-625)

Authors: Mohamad Najari, Gilbert Grondin, Marwan El-Rich

Abstract:

Design trucks in standard codes are selected based on the amount of damage they cause on structures (specifically bridges) and roads, so as to represent real traffic loads. A limited number of trucks are run on a bridge one at a time and the damage to the bridge is recorded for each truck. One design truck is also run on the same bridge “n” times, where “n” is the number of trucks used previously, to calculate the damage of the design truck on the same bridge. To make these damages equal, a reduction factor is needed for that specific design truck in the codes. As a limited number of trucks cannot exactly represent real traffic over the life of the structure, these reduction factors are not accurately calculated and should be modified accordingly. Starting in July 2004, vehicle load data were collected at six weigh-in-motion (WIM) sites owned by Alberta Transportation for eight consecutive years. This database includes more than 200 million trucks. Having these data gives the opportunity to compare the effect of any standard fatigue truck's weight and the real traffic load on the fatigue life of bridges, which leads to a modification of the fatigue truck factor in the code. To calculate the damage for each truck, the truck is run on the bridge, the moment history of the detail under study is recorded, stress range cycles are counted, and then damage is calculated using available S-N curves. A 2000-line FORTRAN code has been developed to perform the analysis and calculate the damage of the trucks in the database for all eight fatigue categories according to the Canadian Institute of Steel Construction (CSA S-16). Stress cycles are counted using the rainflow counting method. The modification factors for the design truck (CL-625) are calculated for two different bridge configurations and ten span lengths varying from 1 m to 200 m. The two considered bridge configurations are a single-span bridge and a four-span bridge. This was found to be sufficient and representative for a simply supported span, positive moment in end spans of bridges with two or more spans, positive moment in interior spans of three or more spans, and the negative moment at an interior support of multi-span bridges. The moment history of the mid-span is recorded for the single-span bridge and, for the four-span bridge, the exterior positive moment, interior positive moment, and support negative moment are recorded. The influence lines are expressed by a polynomial expression obtained from a regression analysis of the influence lines obtained from SAP2000. It is found that for the design truck (CL-625) the fatigue truck factor varies from 0.35 to 0.55 depending on span length and bridge configuration. Detailed results will be presented in upcoming papers. This code can be used for any design truck available in standard codes.
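
To make the damage comparison concrete, here is a minimal Python sketch of the Miner's-rule calculation the abstract describes, assuming rainflow cycle counts have already been extracted and using a generic S-N constant in place of the actual CSA S-16 category values; all numbers are illustrative, not the authors' FORTRAN implementation.

```python
# Minimal sketch: Miner's-rule damage and a fatigue-truck modification
# factor. The S-N constant A and slope m = 3 follow the generic form
# N = A / S^m for steel details; real category constants must be substituted.

def sn_cycles_to_failure(stress_range_mpa, A=3.93e12, m=3.0):
    """Cycles to failure N for stress range S from N = A / S^m."""
    return A / stress_range_mpa ** m

def miner_damage(cycles):
    """cycles: iterable of (stress_range_mpa, count) pairs, e.g. produced
    by rainflow counting of the stress history of one truck pass."""
    return sum(count / sn_cycles_to_failure(s) for s, count in cycles)

# Hypothetical cycle counts for n real trucks and one design-truck pass:
real_traffic = [[(55.0, 1.0), (20.0, 2.0)], [(48.0, 1.0)]]   # n = 2 trucks
design_truck = [(80.0, 1.0)]                                  # CL-625 pass

n = len(real_traffic)
d_real = sum(miner_damage(c) for c in real_traffic)
d_design = n * miner_damage(design_truck)    # design truck run n times
print("modification factor =", round(d_real / d_design, 3))
```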

Keywords: bridge, fatigue, fatigue design truck, rain flow analysis, FORTRAN

Procedia PDF Downloads 521
4529 Study of the Effect of Sewing on Non-Woven Textile Waste at Dry and Composite Scales

Authors: Wafa Baccouch, Adel Ghith, Xavier Legrand, Faten Fayala

Abstract:

Textile waste recycling has become a necessity considering the growing amount of waste generated each year and the ecological problems that landfilling and burning can cause. Textile waste can be recycled into many different forms according to its composition and its final utilization. Using this waste as reinforcement for composite panels is a new recycling area that is being studied. Compared to virgin fabrics, recycled ones present the disadvantage of having lower structural characteristics, even though they are eco-friendly and low cost. The objective of this work is to transform textile waste into composite material with good characteristics and low price. In this study, we used sewing as a method to improve the characteristics of recycled textile waste in order to use it as reinforcement for composite material. Non-woven textile waste was provided by a local textile recycling industry. Performance was evaluated using a tensile testing machine, based on the testing direction for both reinforcements and composite panels: machine and transverse directions. Tensile tests were conducted on sewn and non-sewn fabrics, which were then used as reinforcements for composite panels via the epoxy resin infusion method. The rule of mixtures was used to predict composite characteristics, which were then compared to experimental ones.
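
For readers unfamiliar with the prediction step, a minimal sketch of the rule of mixtures follows, with assumed stiffness values rather than the paper's measured ones:

```python
# Minimal sketch (assumed values): rule-of-mixtures prediction of the
# longitudinal modulus of a fibre-reinforced panel, E_c = Vf*Ef + (1-Vf)*Em.
def rule_of_mixtures(e_fibre, e_matrix, v_fibre):
    return v_fibre * e_fibre + (1.0 - v_fibre) * e_matrix

e_c = rule_of_mixtures(e_fibre=70e9, e_matrix=3.5e9, v_fibre=0.30)  # Pa
print(f"predicted modulus: {e_c / 1e9:.1f} GPa")
```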

Keywords: composite material, epoxy resin, non woven waste, recycling, sewing, textile

Procedia PDF Downloads 588
4528 Bacteriological Safety of Sachet Drinking Water Sold in Benin City, Nigeria

Authors: Stephen Olusanmi Akintayo

Abstract:

Access to safe drinking water remains a major challenge in Nigeria, and where water is available, its quality is often in doubt. An alternative to the inadequate supply of clean drinking water has been found in treated drinking water packaged in electrically heat-sealed nylon, commonly referred to as “sachet water”. “Sachet water” is common in Nigeria, as its selling price is within the reach of the low socio-economic class and setting up a production unit does not require huge capital input. The bacteriological quality of selected “sachet water” stored at room temperature over a period of 56 days was determined to evaluate the safety of the sachet drinking water. A test for coliform bacteria was performed, and the result showed no coliform bacteria, indicating the absence of fecal contamination throughout the 56 days. Heterotrophic plate count (HPC) was done at 14-day intervals, and the samples showed HPC between 0 cfu/mL and 64 cfu/mL. The highest count was observed on day 1. The count decreased between days 1 and 28, while no growth was observed between days 42 and 56. The decrease in HPC suggested the presence of residual disinfectant in the water. The organisms isolated were identified as Staphylococcus epidermidis and S. aureus. The presence of these microorganisms in sachet water is indicative of contamination during processing and handling.

Keywords: coliform, heterotrophic plate count, sachet water, Staphylococcus aureus, Staphylococcus epidermidis

Procedia PDF Downloads 342
4527 The Effectiveness of Cathodic Protection on Microbiologically Influenced Corrosion Control

Authors: S. Taghavi Kalajahi, A. Koerdt, T. Lund Skovhus

Abstract:

Cathodic protection (CP) is an electrochemical method to control and manage corrosion in different industries and environments. CP is widely used, especially in buried and submerged environments, both of which are susceptible to microbiologically influenced corrosion (MIC). Most standards recommend performing CP at -800 mV; however, if the MIC threat is high or sulfate-reducing bacteria (SRB) are present, the recommendation is to use more negative potentials for adequate protection of the metal. Owing to the lack of knowledge and research on the effectiveness of CP against MIC, to the authors' best knowledge there is no information on how the MIC threat should be assessed or how much more negative the potential should be to enable adequate protection without overprotection (which carries a hydrogen embrittlement risk). Recently, the development and falling price of molecular microbial methods (MMMs) have opened the door to more effective investigations of corrosion in the presence of microorganisms, alongside other electrochemical methods and surface analysis. In this work, using MMMs, the gene expression of SRB biofilms under different CP potentials will be investigated. Specific genes, such as those for pH buffering, metal oxidizing, etc., will be compared at different potentials, enabling determination of the precise potential that effectively protects the metal from SRB. This work is the initial step toward standardizing the recommended potential under MIC conditions, resulting in better protection for infrastructure.

Keywords: cathodic protection, microbiologically influenced corrosion, molecular microbial methods, sulfate reducing bacteria

Procedia PDF Downloads 94
4526 Food Security in Nigeria: An Examination of Food Availability and Accessibility in Nigeria

Authors: Okolo Chimaobi Valentine, Obidigbo Chizoba

Abstract:

As a basic physiological need, a threat to sufficient food production is a threat to human survival. Food security has been an issue of global concern. This paper looks at food security in Nigeria by assessing the availability of food and the accessibility of the available food. The paper employed a multiple linear regression technique and graphic trends of growth rates of relevant variables to show the food security situation in Nigeria. Results of the tests revealed that the population growth rate was higher than the growth rate of food availability in Nigeria for the earlier period of the study. Commercial bank credit to the agricultural sector, foreign exchange utilization for food and the Agricultural Credit Guarantee Scheme Fund (ACGSF) contributed significantly to food availability in Nigeria. Food prices grew at a faster rate than the average income level, making it difficult to access sufficient food. This implies that prior to 2012 there was insufficient food to feed the Nigerian populace. However, continued credit to the food and agricultural sector will ensure sustained and sufficient production of food in Nigeria. Microfinance banks should make sufficient credit available to smallholder farmers. The government should further control and subsidize the rising price of food to make it more accessible to the people.
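
A minimal sketch of the kind of multiple linear regression used here, on synthetic data with illustrative variable names (the paper's actual series are not reproduced):

```python
# Minimal sketch (synthetic data): food availability regressed on bank
# credit, FX utilization for food, and ACGSF funding via ordinary least
# squares. All values and units are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                   # yearly/quarterly observations
credit = rng.uniform(1, 10, n)           # bank credit to agriculture
fx_food = rng.uniform(0.5, 5, n)         # FX utilization for food
acgsf = rng.uniform(0.1, 2, n)           # ACGSF funding
food = 2.0 + 0.8*credit + 0.5*fx_food + 1.2*acgsf + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), credit, fx_food, acgsf])
beta, *_ = np.linalg.lstsq(X, food, rcond=None)
print("intercept and coefficients:", np.round(beta, 2))
```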

Keywords: food, accessibility, availability, security

Procedia PDF Downloads 377
4525 The Impacts of Gentrification in Transit-Oriented Development on Mode Choice and Equity

Authors: Steve Apell

Abstract:

Transit-oriented development (TOD) is a popular intervention for local governments endeavoring to reduce auto-dependency and the adverse effects of sprawl. At the same time, American households, such as those of the millennial generation, are shifting their residential preferences from the suburbs to the central city. These changes have intensified demand for TOD housing, which generates high rents. This leads to displacement of low-income, transit-dependent households by more affluent middle-class families. Critics argue that the effectiveness of TOD might be compromised as newer, more affluent residents drive more and use transit less. However, there has not been a comprehensive study to test this hypothesis. Using census data (1990-2012) from six metropolitan areas, this research investigated whether block groups within a one-mile radius of TOD are gentrifying. Our findings reveal that the price of housing and the number of college graduates increased more in TODs than in the metropolitan area as a whole. Similarly, the percentage of immigrants increased in TOD, while those of blacks and whites declined. Most importantly, TOD residents generally commuted less by car, while transit use increased in some metropolitan areas. TOD in the south of the United States registered higher housing costs and less transit use. These findings have significant implications for the future of equitable and sustainable transportation policy.

Keywords: commuting, equity, gentrification, mode choice, transit oriented development

Procedia PDF Downloads 370
4524 The Complexities of Designing a Learning Programme in Higher Education with the End-User in Mind

Authors: Andre Bechuke

Abstract:

The quality of every learning programme in Higher Education (HE) is dependent on the planning, design, and development of the curriculum decisions. These curriculum development decisions are highly influenced by knowledge of the end-user, who is not always just the student. When curriculum experts plan, design and develop learning programmes, they always have the end-users in mind throughout the process. Without proper knowledge of the end-user(s), the design and development of a learning programme might be flawed. Curriculum experts often struggle to determine who the real end-user is. As such, it is even more challenging to establish what needs to be known about the end-user that should inform the planning, design, and development of a learning programme. This research sought to suggest approaches to guide curriculum experts in identifying the end-user(s), taking into consideration the pressure and influence that other agencies, structures and stakeholders (industry, students, government, university context, lecturers, international communities, professional regulatory bodies) have on the design of a learning programme and the graduates of the programmes. Considering the influence of these stakeholders, which is also very important, the task of deciding who the real end-user of the learning programme is becomes very challenging. This study makes use of criteria 1 and 18 of the Council on Higher Education criteria for programme accreditation to guide the process of identifying the end-users when developing a learning programme. Criterion 1 suggests that designers must ensure that the programme is consonant with the institution's mission, forms part of institutional planning and resource allocation, meets national requirements and the needs of students and other stakeholders, and is intellectually credible. According to criterion 18, in designing a learning programme, steps must be taken to enhance the employability of students and alleviate shortages of expertise in relevant fields. In conclusion, there is hardly ever one group of end-users to be considered when developing a learning programme, and the notion that students are the end-users is not true, especially when graduates are unable to use the qualification for employment.

Keywords: council on higher education, curriculum design and development, higher education, learning programme

Procedia PDF Downloads 83
4523 Target-Triggered DNA Motors and their Applications to Biosensing

Authors: Hongquan Zhang

Abstract:

Inspired by endogenous protein motors, researchers have constructed various synthetic DNA motors based on the specificity and predictability of Watson-Crick base pairing. However, the application of DNA motors to signal amplification and biosensing is limited because of low mobility and difficulty in real-time monitoring of the walking process. The objective of our work was to construct a new type of DNA motor, termed target-triggered DNA motors, that can walk for hundreds of steps in response to a single target binding event. To improve the mobility and processivity of DNA motors, we used gold nanoparticles (AuNPs) as scaffolds to build high-density, three-dimensional tracks. Hundreds of track strands are conjugated to a single AuNP. To enable DNA motors to respond to specific protein and nucleic acid targets, we adapted binding-induced DNA assembly into the design of the target-triggered DNA motors. In response to the binding of specific target molecules, DNA motors are activated to autonomously walk along the AuNP, powered by a nicking endonuclease or DNAzyme-catalyzed cleavage of track strands. Each moving step restores the fluorescence of a dye molecule, enabling monitoring of the operation of DNA motors in real time. The motors can translate a single binding event into the generation of hundreds of oligonucleotides from a single nanoparticle. The motors have been applied to amplify the detection of proteins and nucleic acids in test tubes and live cells. The motors were able to detect low pM concentrations of specific protein and nucleic acid targets in homogeneous solutions without the need for separation. Target-triggered DNA motors are significant for broadening applications of DNA motors to molecular sensing, cell imaging, molecular interaction monitoring, and controlled delivery and release of therapeutics.

Keywords: biosensing, DNA motors, gold nanoparticles, signal amplification

Procedia PDF Downloads 85
4522 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy

Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll

Abstract:

ᴅ-serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer's disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release are still unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine utilizing a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. For the fabrication of a biosensor with good selectivity, we use a permselective poly(meta-phenylenediamine) film to ensure only the target molecule is reacted, according to the size exclusion principle. In this work, we investigated the effect of the electrodeposition conditions on the biosensor's response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor's response time. This allowed for real-time sensing of ᴅ-serine in bulk solution, and also provided a means to map the efflux of ᴅ-serine in real time. This was done using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information regarding the rate of ᴅ-serine flux from the model substrate, as well as the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be useful to investigate the primary site and mechanism of ᴅ-serine release in other biological samples.

Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy

Procedia PDF Downloads 228
4521 An Influence of Marketing Mix on Hotel Booking Decision: Japanese Senior Traveler Case

Authors: Kingkan Pongsiri

Abstract:

The study of the marketing mix influencing hotel booking decisions of Japanese senior travelers aims to examine the individual factors involved in hotel reservation decision-making by elderly Japanese travelers, and then to examine the other factors that influence their booking decisions. This is quantitative research; a total of 420 completed questionnaires were collected via non-probability sampling techniques. The study found that the majority of respondents were female (224 people, 53.3 percent); 197 respondents (46.9 percent) were aged between 66 and 70 years; and 212 (50.5 percent) were married. The majority of respondents held a bachelor's degree (326 persons, 77.6 percent), and half (210 people, 50 percent) had a monthly income between 1,501 and 2,000 USD. Most respondents (299 people, 71.2 percent) stayed for a short period of 1-14 days. The Japanese senior tourists were apparently most sensitive to product/service factors, followed by price, marketing promotion, and people, respectively. Two factors were identified as moderately influential: place (distribution channels) and physical evidence.

Keywords: Japanese senior traveler, marketing mix, senior tourist, hotel booking

Procedia PDF Downloads 300
4520 Strategic Development of Urban Environmental Management Based on Good Governance: Case Study of Waste Management in Tehran

Authors: Farhad Sadri, Ali Farhadi, Nasim Shalamzari

Abstract:

Waste management is a principle of urban and environmental governance. Waste management in the Tehran metropolis requires good strategies for better governance. Using good urban governance principles together with eight main indexes provides an appropriate basis for this aim. A suitable tool in this field is the SWOT method, which allows comparison of opportunities, threats, weaknesses, and strengths using IFE and EFE matrices. The results of these matrices, 2.533 and 2.403 respectively, show that the waste management system of the Tehran metropolis has performed weakly with regard to internal factors and has not performed well in using opportunities and dealing with threats. In this research, the prioritization and real value of each of the 24 waste management strategies for the Tehran metropolis were surveyed with respect to good governance, derived from Quantitative Strategic Planning Management (QSPM); the Kolmogorov-Smirnov test (statistic 1.549, significance level 0.073) was used to check the normality of the final values and all strategy utilities, and analysis of variance (ANOVA) was calculated for all SWOT strategies. Duncan's test results for the four strategy groups (WT, ST, WO, and SO) show no significant difference. In addition to mean comparison by Duncan's method, the least significant difference (LSD) test was used at the 5% probability level, and finally, seven strategies and the final model of Tehran metropolitan waste management strategy were defined. Increasing public confidence through budget transparency, developing and improving the legal structure (rule-oriented and law governance), greater responsibility regarding requirements of private sectors, increasing recycling rates, and real, effective participation of people and NGOs to improve waste management (contribution) are the main strategies achieved based on good urban governance management principles.
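
For reference, an IFE/EFE-style weighted score is simply the sum of weight times rating over the factors; a minimal sketch with invented weights and ratings (not the study's data) follows:

```python
# Minimal sketch (illustrative weights/ratings): an IFE/EFE-style weighted
# score, the sum of weight x rating over internal (or external) factors.
factors = [  # (factor, weight summing to 1.0, rating 1-4)
    ("public participation", 0.30, 2),
    ("budget transparency",  0.25, 3),
    ("recycling rate",       0.25, 2),
    ("legal structure",      0.20, 3),
]
score = sum(w * r for _, w, r in factors)
print(f"weighted score: {score:.3f}")   # 2.450; cf. IFE 2.533 in the study
```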

Keywords: waste, strategy, environmental management, urban good governance, SWOT

Procedia PDF Downloads 323
4519 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students' comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that positive comprehension becomes possible when new information becomes part of a student's subjective experience and when linkages between the attributes of notions and various ways of presenting them can be established. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically, the challenge of students' comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student's subjective experience (emotional and value-based, contextual, procedural, communicative) during the educational process; (3) establishing links between different ways of presenting mathematical information; (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. The research's primary goal was to identify how the proposed methods and technology influence understanding of the material used in teaching mathematics. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric functions', was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher's guidance, they carried out problems designed to establish linkages between a notion's characteristics, to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate test results. The Pearson criterion was used to reveal the statistical significance of results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. Also, it was revealed (using Student's t-test) that students in the experimental group reliably solved more problems (p = 0.0001) than those in the control group. The results obtained allow us to conclude that increased comprehension and assimilation of study material took place as a result of applying the implemented methods and techniques.
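
A minimal sketch of the Pearson chi-square comparison on pass/fail counts, with hypothetical counts only (the paper reports p < 0.001 but not the raw table):

```python
# Minimal sketch (made-up counts): Pearson chi-square test on pass/fail
# results for the experimental (n=142) vs control (n=114) groups.
from scipy.stats import chi2_contingency

table = [[110, 32],   # experimental: passed, failed (hypothetical)
         [60, 54]]    # control:      passed, failed (hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```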

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 178
4518 Evaluation of the Cytotoxicity and Cellular Uptake of a Cyclodextrin-Based Drug Delivery System for Cancer Therapy

Authors: Caroline Mendes, Mary McNamara, Orla Howe

Abstract:

Drug delivery systems are proposed for use in cancer treatment to specifically target cancer cells and deliver a therapeutic dose without affecting normal cells. For that purpose, the use of folate receptors (FR) can be considered a key strategy, since they are commonly over-expressed in cancer cells. In this study, cyclodextrins (CD) have been used as vehicles to target FR and deliver the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within their cavities. Here, β-CD has been modified using folic acid so as to specifically target the FR. Thus, this drug delivery system consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7, A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments also showed that levels of FR protein in CaCo-2 cells are high, while levels in SKOV-3, HeLa and MCF-7 cells are moderate. A549 and BEAS-2B cells express low levels of FR protein. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays. 48-hour treatment with the free drug and the complex resulted in IC50 values of 93.9 µM ± 15.2 and 56.0 µM ± 4.0 for CaCo-2 for free MTX and CDEnFA:MTX respectively, 118.2 µM ± 16.8 and 97.8 µM ± 12.3 for MCF-7, 36.4 µM ± 6.9 and 75.0 µM ± 10.5 for A549, and 132.6 µM ± 16.1 and 288.1 µM ± 26.3 for BEAS-2B. These results demonstrate that free MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, they demonstrate that the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and efficiency of the drug. Cell imaging by confocal microscopy has allowed visualisation of FR targeting in cancer cells, as well as identification of the internalisation pathway of the drug. Hence, the cellular uptake and internalisation process of this drug delivery system is being addressed.
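
IC50 values such as those above are typically estimated by fitting a dose-response curve to MTT viability data; a minimal sketch on synthetic data, assuming a four-parameter logistic model (the paper does not state its fitting procedure):

```python
# Minimal sketch (synthetic data): fitting a four-parameter logistic
# dose-response curve to viability data to estimate IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([1, 3, 10, 30, 100, 300])    # µM
viab = np.array([98, 95, 80, 55, 30, 12])    # % viability (made up)
popt, _ = curve_fit(four_pl, dose, viab, p0=[100, 0, 50, 1])
print(f"estimated IC50 = {popt[2]:.1f} µM")
```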

Keywords: cancer treatment, cyclodextrins, drug delivery, folate receptors, reduced folate carriers

Procedia PDF Downloads 311
4517 Trauma in the Unconsoled: A Crisis of the Self

Authors: Assil Ghariri

Abstract:

This article studies the process of rewriting the self through memory in Kazuo Ishiguro's novel The Unconsoled (1995). It deals with the journey that the protagonist, Mr. Ryder, takes through the unconscious in search of his real self, in which trauma stands as an obstacle. The article uses Carl Jung's theory of archetypes. Trauma, in this article, is discussed as one of the true obstacles of the unconscious that prevent people from realizing the truth about their selves.

Keywords: Carl Jung, Kazuo Ishiguro, memory, trauma

Procedia PDF Downloads 404
4516 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students

Authors: Lily Ranjbar, Haori Yang

Abstract:

Online education has been around for several decades, but its importance became evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through delivering content and oversight processes, it has limitations in developing hands-on laboratory skills, especially in the STEM field. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses, especially in STEM. Also, many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education. It can engage students and improve their learning outcomes. In addition, online education and virtual labs have gained substantial popularity in engineering and science education. Therefore, developing virtual labs is vital for institutions to deliver high-class education to their students, including their online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the company SpectralLabs, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics program. It was essential to use a system that could simulate nuclear modules that accurately replicate the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. This was all accomplished using a Realistic, Adaptive, Interactive Learning System (RAILS). RAILS is a comprehensive software simulation-based learning system for use in training. It comprises a web-based learning management system located on a central server, as well as a 3D simulation package that is downloaded locally to user machines. Users will find that the graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to coexist, interact and engage with a real STEM lab in all its dimensions. They enable students to feel like they are in a real lab environment and to see the same system they would in a lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces provide students full functionality for data collection, changing the experimental setup, and live data collection with real-time updates for each experiment. Students can manually perform all experimental setups and parameter changes in this lab. Experimental results can then be tracked and analyzed in an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The advanced virtual radiation detection and measurement laboratory developed in this study enabled the NSE school to offer a fully online MHP program. This flexibility of course modality helped attract more non-traditional students, including international students. It is a valuable educational tool, as students can walk around the virtual lab, make mistakes, and learn from them. They have an unlimited amount of time to repeat and engage in experiments. This lab will also help speed up training in nuclear science and engineering.

Keywords: advanced radiation detection and measurement, virtual laboratory, realistic adaptive interactive learning system (rails), online education in stem fields, student engagement, stem online education, stem laboratory, online engineering education

Procedia PDF Downloads 92
4515 Planning Railway Assets Renewal with a Multiobjective Approach

Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida

Abstract:

Transportation infrastructure systems are fundamental in modern society and the economy. However, they need modernizing, maintaining, and reinforcing interventions which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructures (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) the total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives.
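
As a toy illustration of one facet of such a model, the following sketch minimizes only the yearly investment peak (objective (i)) as a linear program with fractional work allocation; the authors' full three-objective formulation and real data are not reproduced:

```python
# Minimal sketch (toy data, single objective): spread section renewal
# costs across years to minimize the yearly investment peak P. Work on
# section s in year y is a fraction x[s, y]; each section must be fully
# renewed over the horizon.
import numpy as np
from scipy.optimize import linprog

costs = np.array([10.0, 6.0, 8.0, 4.0])   # renewal cost per section
S, Y = len(costs), 3                       # sections, planning years
n = S * Y                                  # x variables; P sits at index n

c = np.zeros(n + 1); c[n] = 1.0            # objective: minimize P
A_ub = np.zeros((Y, n + 1))                # yearly spend - P <= 0
for y in range(Y):
    for s in range(S):
        A_ub[y, s * Y + y] = costs[s]
    A_ub[y, n] = -1.0
b_ub = np.zeros(Y)
A_eq = np.zeros((S, n + 1))                # sum_y x[s, y] = 1 per section
for s in range(S):
    A_eq[s, s * Y:(s + 1) * Y] = 1.0
b_eq = np.ones(S)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n + [(0, None)])
print(f"minimal yearly peak: {res.x[-1]:.2f}")   # total cost 28 over 3 years
```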

Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling

Procedia PDF Downloads 146
4514 An Association between Stock Index and Macro Economic Variables in Bangladesh

Authors: Shamil Mardi Al Islam, Zaima Ahmed

Abstract:

The aim of this article is to explore whether certain macroeconomic variables, such as the industrial index, inflation, broad money, the exchange rate and the deposit rate as a proxy for interest rate, are interlinked with the Dhaka stock price index (DSEX), specifically after the introduction of the new index by the Dhaka Stock Exchange (DSE) in January 2013. The Bangladesh stock market has experienced rapid growth since its inception. It might not be a very well-developed capital market compared to its neighboring counterparts, but it has been a strong avenue for investment and resource mobilization. The data set consists of monthly observations for the period from January 2013 to June 2018. Findings from the cointegration analysis suggest that the DSEX and the macroeconomic variables have a significant long-run relationship. Variance decomposition based on the estimated VAR indicates that money supply explains a significant portion of the variation of the stock index, whereas inflation is found to have the least impact. The industrial index is found to have a low impact compared to the exchange rate and deposit rate. Policies should therefore aim to increase industrial production in order to enhance stock market performance. Further, a reasonable money supply should be ensured by the authorities to stimulate stock market performance.
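
A minimal sketch of a VAR fit and forecast-error variance decomposition of the kind reported here, on synthetic series (the statsmodels API is used; the paper's data and exact specification are not reproduced):

```python
# Minimal sketch (synthetic monthly series): a VAR and its forecast-error
# variance decomposition (FEVD), used to gauge how much stock-index
# variation is explained by money supply and other variables.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 66                                   # monthly obs, Jan 2013 - Jun 2018
m2 = np.cumsum(rng.normal(0.5, 1, n))    # broad money (random walk + drift)
dsex = 0.6 * m2 + rng.normal(0, 1, n)    # index loosely driven by money
data = pd.DataFrame({"dsex": dsex, "m2": m2}).diff().dropna()

res = VAR(data).fit(maxlags=4, ic="aic")
res.fevd(10).summary()                   # 10-step variance decomposition
```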

Keywords: deposit rate, DSEX, industrial index, VAR

Procedia PDF Downloads 165
4513 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion

Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty

Abstract:

Reinforcing dental composites with micro-sized fibres can significantly improve their physio-mechanical properties. The short fibres can be oriented randomly within dental composites, thus providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre reinforced composites, which enhance anisotropic properties. Thus, short-fibre reinforced dental composites are becoming popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photo-polymerisation. The shrinkage in the structure may lead to marginal gap formation, causing secondary caries, thus ultimately inducing failure of the restoration. Traditional methods of evaluating polymerisation shrinkage, using strain gauges, density-based measurements, a dilatometer, or the bonded-disk method, focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can easily be embedded into small samples of dental composites. Furthermore, an FBG array in the system can map the real-time shrinkage strain at different regions of the composite. The evaluation of real-time shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been used successfully to measure polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very limited work exists to establish the validity of FBG-based sensors for evaluating volumetric shrinkage of composites reinforced with randomly oriented fibres. The present study aims to fill this research gap and is focused on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% of silane-treated BaAlSiO₂ particulate fillers, or by adding 5% of micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% of silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with the traditional density-based method for evaluating volumetric shrinkage. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will correlate directly with the measured degree of conversion, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, the photonics approach could help assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly fibre reinforced dental composites as well.
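
For context, an FBG converts a Bragg-wavelength shift into strain through the standard relation ε = Δλ/(λ_B(1 − p_e)); a minimal sketch with typical silica-fibre constants, ignoring temperature cross-sensitivity (the paper's own calibration is not reproduced):

```python
# Minimal sketch (typical constants): converting an FBG Bragg-wavelength
# shift into strain via eps = dlambda / (lambda_B * (1 - p_e)), where
# p_e ~ 0.22 is the effective photo-elastic coefficient of silica fibre.
def fbg_strain(delta_lambda_nm, bragg_nm=1550.0, p_e=0.22):
    return delta_lambda_nm / (bragg_nm * (1.0 - p_e))   # dimensionless

shift = -0.12   # nm, compressive shift during polymerisation (illustrative)
print(f"shrinkage strain: {fbg_strain(shift) * 1e6:.0f} microstrain")
```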

Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors

Procedia PDF Downloads 155
4512 Sensing of Cancer DNA Using Resonance Frequency

Authors: Sungsoo Na, Chanho Park

Abstract:

Lung cancer is one of the most common severe diseases leading to human death. Lung cancer can be divided into two types, small-cell lung cancer (SCLC) and non-SCLC (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLCs. Therefore, EGFR inhibitor drugs such as gefitinib and erlotinib have been used as lung cancer treatments. However, the treatments showed a low response rate (10-20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients with resistance to EGFR inhibitor drugs are usually positive for KRAS mutation. Therefore, assessment of EGFR and KRAS mutations is essential for targeted therapies of NSCLC patients. In order to overcome the limitations of conventional therapies, overall EGFR and KRAS mutations have to be monitored. In this work, only the detection of EGFR will be presented. A variety of techniques has been presented for the detection of EGFR mutations. The standard method for detecting EGFR mutation in ctDNA relies on real-time polymerase chain reaction (PCR). The real-time PCR method provides highly sensitive detection performance. However, as the amplification steps increase, cost and complexity increase as well. Other types of technology, such as BEAMing, next-generation sequencing (NGS), electrochemical sensors and silicon nanowire field-effect transistors, have been presented. However, those technologies have limitations of low sensitivity, high cost and complexity of data analysis. In this report, we propose a label-free and highly sensitive detection method for lung cancer using a quartz crystal microbalance-based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.
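
The QCM sensing principle rests on the Sauerbrey relation between resonance-frequency shift and adsorbed mass; a minimal sketch with standard quartz constants and a hypothetical DNA loading follows (this is background, not the authors' implementation):

```python
# Minimal sketch (standard constants): the Sauerbrey relation linking a QCM
# frequency shift to adsorbed mass per unit area:
#   df = -(2 f0^2 / sqrt(rho_q * mu_q)) * dm_per_area
import math

RHO_Q = 2648.0      # quartz density, kg/m^3
MU_Q = 2.947e10     # quartz shear modulus, Pa

def sauerbrey_df(mass_per_area_kg_m2, f0_hz=5e6):
    return -2.0 * f0_hz**2 * mass_per_area_kg_m2 / math.sqrt(RHO_Q * MU_Q)

# hypothetical DNA hybridization loading of 10 ng/cm^2 = 1e-7 kg/m^2
print(f"frequency shift: {sauerbrey_df(1e-7):.2f} Hz")
```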

Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer

Procedia PDF Downloads 233
4511 Impact of Charging PHEV at Different Penetration Levels on Power System Network

Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat

Abstract:

The Plug-in Hybrid Electric Vehicle (PHEV) has gained immense popularity in recent years. PHEVs offer numerous advantages compared to conventional internal-combustion engine (ICE) vehicles. Millions of PHEVs are estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., feeder, line and transformer overloads and voltage drops. Nevertheless, an improper PHEV data model used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street due to its requirements for physical infrastructure readiness and technology advancement. As a first step, it is best to study the impact of charging PHEVs based on real vehicle travel data from the National Household Travel Survey (NHTS) and at the present charging rate. Due to the current lack of charging stations on the street, charging PHEVs at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of charging PHEVs on power system networks, considering huge numbers of PHEV samples with their travel data patterns. A Vehicles Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system that represents a portion of the American Electric Power System (Midwestern US). A normalization technique is used to correspond to real-time loads at all buses. Results from the study indicate that charging PHEVs using opportunity charging will have significant impacts on power system networks, especially when bigger battery capacities (kWh) are used and at higher penetration levels.
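
A minimal sketch of how a vehicles charging load profile can be aggregated from travel data, using synthetic arrival times, an hour-resolution approximation and an assumed 3.3 kW home charger (the paper's NHTS-based model is not reproduced):

```python
# Minimal sketch (synthetic travel data): an uncoordinated home-charging
# load profile; each vehicle starts charging on arrival and draws a
# constant rate until its battery is full. Hour-resolution approximation.
import numpy as np

rng = np.random.default_rng(2)
n_ev = 1000
arrival_h = rng.normal(18, 2, n_ev) % 24       # arrive home around 18:00
energy_kwh = rng.uniform(4, 16, n_ev)          # energy needed per vehicle
rate_kw = 3.3                                  # assumed home charger rating

profile = np.zeros(24)                         # aggregate kW per hour
for t0, e in zip(arrival_h, energy_kwh):
    hours, h = e / rate_kw, t0
    while hours > 0:
        frac = min(1.0, hours)                 # fraction of this hour charging
        profile[int(h) % 24] += rate_kw * frac
        hours -= frac
        h += 1
print("peak load: %.0f kW at hour %d" % (profile.max(), profile.argmax()))
```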

Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile

Procedia PDF Downloads 288
4510 Controlling Drone Flight Missions through Natural Language Processors Using Artificial Intelligence

Authors: Sylvester Akpah, Selasi Vondee

Abstract:

Drones, also known as Unmanned Aerial Vehicles (UAVs), have attracted increasing attention in recent years due to their ubiquitous nature and boundless applications in the areas of communication, surveying, aerial photography, weather forecasting, medical delivery and surveillance, amongst others. Operated remotely in real time or pre-programmed, drones can fly autonomously or on pre-defined routes. The application of these aerial vehicles has successfully penetrated the world due to technological evolution, and thus many more businesses are utilizing their capabilities. Unfortunately, while drones are replete with the benefits stated above, they are riddled with some problems, mainly attributed to the complexities in learning how to master drone flights, collision avoidance and enterprise security. Additional challenges, such as the analysis of flight data recorded by sensors attached to the drone, may take time and require expert help to analyse and understand. This paper presents an autonomous drone control system using a chatbot. The system allows for easy control of drones using conversations, with the aid of Natural Language Processing, to reduce the workload needed to set up, deploy, control, and monitor drone flight missions. The results obtained at the end of the study revealed that the drone connected to the chatbot was able to initiate flight missions with just text and voice commands, enable conversation and give real-time feedback from data and requests made to the chatbot. The results further revealed that the system was able to process natural language and produced human-like conversational abilities using Artificial Intelligence (Natural Language Understanding). It is recommended that radio signal adapters be used instead of wireless connections to increase the range of communication with the aerial vehicle.
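
As a rough illustration of the command-interpretation step, a rule-based intent parser is sketched below with a hypothetical command set; the actual system uses an AI-based Natural Language Understanding component rather than regular expressions:

```python
# Minimal sketch (hypothetical commands): a rule-based intent parser of the
# kind a chatbot front end might use to turn utterances into drone actions.
import re

PATTERNS = {
    "takeoff": re.compile(r"\b(take ?off|launch|lift off)\b", re.I),
    "land":    re.compile(r"\b(land|come down)\b", re.I),
    "goto":    re.compile(r"\bfly to (?P<place>\w+)\b", re.I),
}

def parse_command(utterance: str) -> dict:
    for intent, pat in PATTERNS.items():
        m = pat.search(utterance)
        if m:
            return {"intent": intent, **m.groupdict()}
    return {"intent": "unknown"}

print(parse_command("Please take off now"))    # {'intent': 'takeoff'}
print(parse_command("fly to waypoint7"))       # {'intent': 'goto', ...}
```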

Keywords: artificial intelligence, chatbot, natural language processing, unmanned aerial vehicle

Procedia PDF Downloads 143
4509 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called 'Virtual Vibration Testing' offers solutions, among others, to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article first presents a virtual vibration test model, with a main focus on the shaker model, and afterwards presents the experimental determination of its parameters. The classical way of modeling a shaker is to consider the shaker as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degree-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced into global parameters which are estimated through experiments. Different experiments are carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis is also carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, the article concludes with an experimental validation of the model.
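
A minimal sketch of such an electromechanical state-space model, reduced to one mechanical degree of freedom with illustrative parameter values (the paper's two/three-DOF model and identified parameters are not reproduced):

```python
# Minimal sketch (illustrative parameters): an electrodynamic shaker as a
# state-space model x' = Ax + Bu with states [displacement, velocity,
# coil current] and input u = amplifier voltage. Real shakers add a second
# mechanical DOF; the structure is the same.
import numpy as np

m, k, c = 0.5, 2.0e4, 15.0      # moving mass kg, suspension N/m, damping
R, L, Bl = 2.0, 1.0e-3, 12.0    # coil resistance, inductance, force factor

A = np.array([[0.0,    1.0,    0.0],
              [-k/m,  -c/m,    Bl/m],     # F = Bl*i drives the armature
              [0.0,   -Bl/L,  -R/L]])     # back-EMF couples velocity back
B = np.array([[0.0], [0.0], [1.0/L]])

# forward-Euler response to a 10 V step (coarse but illustrative)
dt, x = 1e-5, np.zeros((3, 1))
for _ in range(20000):                    # simulate 0.2 s
    x = x + dt * (A @ x + B * 10.0)
print("steady displacement ~ %.4f mm" % (x[0, 0] * 1e3))   # F/k = 3 mm
```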

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 271
4508 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement

Authors: Magdi Elmessiry, Adel Elmessiry

Abstract:

The fashion industry represents a significant portion of the global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks, destroying the fashion industry's hard work and investment. While the copycats are eventually found and stopped, the damage has already been done: sales are missed and direct and indirect jobs are lost. Infringers thrive on two main facts: the time it takes to discover them and the lack of tracking technologies that can help the consumer distinguish them. Blockchain technology is a new emerging technology that provides a distributed, encrypted, immutable and fault-resistant ledger. Blockchain presents a ripe technology to resolve the infringement epidemic facing the fashion industry. The significance of the study is that a new approach, leveraging state-of-the-art blockchain technology coupled with artificial intelligence, is used to create a framework addressing the fashion infringement problem. It transforms the current focus on legal enforcement, which is difficult at best, to consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product to empower the customer with a near real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main findings of this study are that implementing this approach can delay fake products' penetration of the original product's market, thus allowing the original product time to take advantage of the market. The shift in fake adoption results in reduced returns, which impedes the copycat market and moves the emphasis to original product innovation.
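
To illustrate the immutability property the framework relies on, a minimal hash-chaining sketch follows; this is illustrative only, not the authors' implementation:

```python
# Minimal sketch: chaining hashed product records so that tampering with an
# earlier record invalidates every later hash, the immutability property a
# blockchain ledger provides.
import hashlib, json, time

def make_block(product: dict, prev_hash: str) -> dict:
    body = {"product": product, "prev": prev_hash, "ts": time.time()}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block({"sku": "JKT-001", "maker": "BrandX"}, prev_hash="0" * 64)
nxt = make_block({"sku": "JKT-002", "maker": "BrandX"}, genesis["hash"])
print(nxt["hash"][:16], "links back to", genesis["hash"][:16])
```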

Keywords: fashion, infringement, blockchain, artificial intelligence, textiles supply chain

Procedia PDF Downloads 261
4507 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to activate a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This allows pathologists to clearly see where there are issues and bottlenecks in the process. Graphs also indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples displays more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This allows any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
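
A minimal sketch of the traffic-light classification, with assumed queue thresholds (the authors plan a JavaScript implementation; Python is used here purely for illustration):

```python
# Minimal sketch (thresholds are assumptions): the traffic-light coding
# used to flag flow through each workflow stage.
def stage_colour(samples_waiting: int, normal_max: int = 20,
                 slow_max: int = 50) -> str:
    if samples_waiting <= normal_max:
        return "green"    # normal flow
    if samples_waiting <= slow_max:
        return "orange"   # slow flow
    return "red"          # critical flow

for stage, queue in {"reception": 12, "testing": 35, "validation": 80}.items():
    print(stage, "->", stage_colour(queue))
```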

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 286
4506 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide one or more bias-correction steps based on known biological factors, such as GC content, and apply them to each sample separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM treats Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by the simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM thus automatically performs an empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm that alternates between the estimation of X and β. For speed, XAEM uses quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. On simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
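
As a rough illustration of the alternating scheme, the sketch below fits a bilinear model Y ≈ Xβ by alternating least-squares updates of β and X, with the update of X pooled across samples — the step that multi-sample data makes possible. This is a simplified stand-in for the AEM algorithm, not the XAEM implementation; the matrix shapes, nonnegativity clipping, and function name are illustrative assumptions.

```python
import numpy as np

def alternating_bilinear(Y, n_isoforms, n_iter=100, seed=0):
    """Y: (n_features, n_samples) observed counts; returns estimates of X and B.

    Simplified alternating least squares for Y ~ X @ B with both factors
    unknown (up to a scaling ambiguity), as an illustration of the idea.
    """
    rng = np.random.default_rng(seed)
    n_features, n_samples = Y.shape
    X = rng.random((n_features, n_isoforms))   # unknown design matrix
    for _ in range(n_iter):
        # Update B given X: one least-squares fit covering all samples.
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        B = np.clip(B, 0, None)                # expression is nonnegative
        # Update X given B: pooled across all samples simultaneously.
        Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)
        X = np.clip(Xt.T, 0, None)
    return X, B
```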

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 143
4505 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional cone beam CT (CBCT) has become a widespread routine clinical imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the radiation dose accumulated through the repetitive use of CBCT during intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. It is therefore desirable to find optimized source-detector trajectories with a reduced number of projections, which could lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations that maximize the performance of the reconstructed image at particular regions of interest. To this end, we developed a box phantom containing several small polytetrafluoroethylene target spheres at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D point spread function (PSF) as a measure to evaluate the performance of the reconstructed image, quantifying the spatial spread of the local PSF associated with each target in terms of its full width at half maximum (FWHM). A lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are well known, as prior knowledge of the patient anatomy (e.g. a preoperative CT) is usually available for interventional imaging. We therefore use a CT scan of the box phantom as this prior knowledge and treat it as the digital phantom in our simulations to find the optimal trajectory for a specific target. The optimal trajectory obtained in the simulation phase can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry for both the simulations and the real data acquisition. Our experimental results, based on both simulated and real data, show that the proposed optimization scheme can find optimized trajectories that localize the targets with a minimal number of projections. The proposed optimized trajectories localize the targets as well as a standard circular trajectory while using only one third of the projections. In conclusion, we demonstrate that a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
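
The figure of merit is simple to compute. Below is a minimal sketch, assuming a single-peaked 1D intensity profile through a reconstructed target sphere, of how the FWHM of a local PSF can be measured with linear interpolation at the half-maximum crossings; the function name and sampling parameter are illustrative, not taken from the authors' code.

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """FWHM of a single-peaked 1D profile; spacing is e.g. mm per sample."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                      # remove background offset
    half = p.max() / 2.0
    above = np.where(p >= half)[0]       # indices at or above half maximum
    if above.size < 2:
        return 0.0
    left, right = above[0], above[-1]

    def cross(i):
        # interpolate the half-max crossing between samples i and i + 1
        return i + (half - p[i]) / (p[i + 1] - p[i])

    x_left = cross(left - 1) if left > 0 else float(left)
    x_right = cross(right) if right < p.size - 1 else float(right)
    return (x_right - x_left) * spacing
```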

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 132
4504 INCIPIT-CRIS: A Research Information System Combining Linked Data Ontologies and Persistent Identifiers

Authors: David Nogueiras Blanco, Amir Alwash, Arnaud Gaudinat, René Schneider

Abstract:

At a time when access to and sharing of information are crucial in the world of research, technologies such as persistent identifiers (PIDs), current research information systems (CRIS), and ontologies may create platforms for information sharing, provided they respond to the need to disambiguate their data by assuring interoperability within and between systems. INCIPIT-CRIS is a continuation of the former INCIPIT project, whose goal was to set up an infrastructure for the low-cost attribution of PIDs with high granularity based on Archival Resource Keys (ARKs). INCIPIT-CRIS is its logical consequence: a research information management system developed from scratch. The system is built on and around the Schema.org ontology, with a further articulation of the use of ARKs, and rests upon the infrastructure previously implemented in INCIPIT in order to enhance the persistence of URIs. INCIPIT-CRIS thus aims to be the hinge between previously separate aspects, namely CRIS, ontologies, and PIDs. Combining an ontology such as Schema.org with unique persistent identifiers such as ARKs allows disambiguation problems to be resolved, information to be shared through a dedicated platform, and the system to interoperate with others by representing the entirety of the data as RDF triples. This paper presents the implemented solution as well as its demonstration in a real-life setting. We describe the underlying ideas and inspirations while going through the logic and the different functionalities implemented, and their links with ARKs and Schema.org. Finally, we discuss the tests performed with our project partner, the Swiss Institute of Bioinformatics (SIB), using large, real-world data sets.
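
To make the combination concrete, the sketch below uses the rdflib library to describe a single research entity with Schema.org terms, keyed by an ARK-style URI, and to serialize it as RDF triples. The ARK shown and the property choices are invented for illustration and are not from the INCIPIT-CRIS code base.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Describe a researcher with Schema.org terms, identified by an ARK.
SDO = Namespace("https://schema.org/")
g = Graph()

person = URIRef("https://n2t.net/ark:/99999/fk4example")  # hypothetical ARK
g.add((person, RDF.type, SDO.Person))
g.add((person, SDO.name, Literal("Jane Doe")))            # placeholder name
g.add((person, SDO.affiliation, Literal("Swiss Institute of Bioinformatics")))

# The entire dataset can be exposed as RDF triples, here in Turtle syntax.
print(g.serialize(format="turtle"))
```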

Keywords: current research information systems, linked data, ontologies, persistent identifier, schema.org, semantic web

Procedia PDF Downloads 137
4503 Academic Success, Problem-Based Learning and the Middleman: The Community Voice

Authors: Isabel Medina, Mario Duran

Abstract:

Although problem-based learning (PBL) provides students with multiple opportunities for rigorous instructional experiences in which they are challenged to address problems in the community, there are still gaps in connecting community leaders to the PBL process. At a South Texas high school, community participation serves as an integral component of that process. PBL has recently gained momentum due to the growth of global communities that value collaboration and critical thinking. As an instructional approach, PBL engages high school students in meaningful learning experiences and focuses on connecting them to real-world situations that require effective peer collaboration. For PBL leaders, providing students with a meaningful process is as important as the final PBL outcome. To achieve this goal, the STEM high school strategically created a space for community involvement to be woven into the PBL fabric. This study examines the impact community members had on PBL students attending a STEM high school in South Texas, where community members represent a support system that works through the PBL process to ensure students receive real-life mentoring from business and industry leaders situated in the community. A phenomenological study using a semi-structured approach was conducted to collect data on students’ perceptions of community involvement within the PBL process at one South Texas high school. In our proposed presentation, we will discuss how community involvement in the PBL process shaped the academic experience of students at the STEM high school. We also address the concern PBL critics raise about the lack of direct instruction by showing how the school uses community members to support the academic experience of its students.

Keywords: phenomenological, STEM education, student engagement, community involvement

Procedia PDF Downloads 92
4502 Optimization and Automation of Functional Testing with White-Box Testing Method

Authors: Reyhaneh Soltanshah, Hamid R. Zarandi

Abstract:

Software testing is necessary for industries that rely on computer systems to operate efficiently, despite the time and money it consumes. In embedded system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damages, and software tests increase the price of the final product. The aim of this article is to provide a method that reduces the time and cost of tests based on the program structure. First, a complete review of eleven white-box test methods was carried out, based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119. The proposed algorithm was designed using these two versions of the 29119 standard, removing white-box testing methods that are expensive or provide little coverage. White-box test methods were applied to each function according to the 29119 standard, and the proposed algorithm was then implemented on the same functions. To speed up the implementation of the proposed method, the Unity framework was used with some changes; being open source and able to implement white-box test methods, it is well suited to embedded software testing. The test items obtained from the two approaches were evaluated using a mathematical ratio, which across the software under study reduced the test cost by between 50% and 80% while reaching the desired result with the minimum number of test items.
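
As a small, language-agnostic illustration of the structural criterion behind white-box testing, the sketch below derives one test item per branch of a toy function. The function and its thresholds are invented; the paper itself applies the ISO/IEC/IEEE 29119 methods with the C-based Unity framework rather than Python.

```python
# Toy function with three branch outcomes; white-box (branch-coverage)
# testing requires at least one test item per outcome.

def classify_reading(value, limit):
    if value < 0:
        return "invalid"        # branch 1: negative input
    if value > limit:
        return "over-limit"     # branch 2: exceeds the limit
    return "ok"                 # branch 3: within range

def test_classify_reading():
    # Minimal test set: each branch is exercised exactly once.
    assert classify_reading(-1, 10) == "invalid"
    assert classify_reading(11, 10) == "over-limit"
    assert classify_reading(5, 10) == "ok"

test_classify_reading()
print("all branches covered")
```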

Keywords: embedded software, reduce costs, software testing, white-box testing

Procedia PDF Downloads 58
4501 Predicting Relative Performance of Sector Exchange Traded Funds Using Machine Learning

Authors: Jun Wang, Ge Zhang

Abstract:

Machine learning is used in many areas today. It excels at reviewing large volumes of data and identifying patterns and trends that might not be apparent to a human. Given the huge potential benefit and the amount of data available in the financial market, it is not surprising to see machine learning applied to various financial products. While the future prices of financial securities are extremely difficult to forecast, we study them from a different angle: instead of trying to forecast future prices, we apply machine learning algorithms to predict the direction of future price movement, in particular, whether a sector Exchange Traded Fund (ETF) will outperform or underperform the market in the next week or the next month. We apply several machine learning algorithms to this prediction task: Linear Discriminant Analysis (LDA), k-Nearest Neighbors (KNN), Decision Tree (DT), Gaussian Naive Bayes (GNB), and Neural Networks (NN). We show that these algorithms, most notably GNB and NN, have some out-of-sample predictive power in forecasting out-performance and under-performance. We also explore whether the predictions from these algorithms can be used to beat the buy-and-hold strategy on the S&P 500 index. The trading strategy built on the out-performance predictions does not perform very well, but the trading strategy built on the under-performance predictions earns higher out-of-sample returns than simply holding the S&P 500 index.
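
The classification setup can be sketched with scikit-learn as follows, using random placeholder features and labels in place of the real ETF and S&P 500 history; the feature definitions, sample sizes, and hyperparameters here are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Placeholder data: features could be lagged relative returns of a sector
# ETF; the label marks whether the ETF beats the index in the next week.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (rng.random(500) > 0.5).astype(int)   # 1 = outperforms next week

# Keep the train/test split chronological to mimic out-of-sample testing.
X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(max_depth=4),
    "GNB": GaussianNB(),
    "NN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "out-of-sample accuracy:", model.score(X_test, y_test))
```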

Keywords: machine learning, ETF prediction, dynamic trading, asset allocation

Procedia PDF Downloads 101