Search results for: maneuvering target
2305 Platform Integration for High-Throughput Functional Screening Applications
Authors: Karolis Leonavičius, Dalius Kučiauskas, Dangiras Lukošius, Arnoldas Jasiūnas, Kostas Zdanys, Rokas Stanislovas, Emilis Gegevičius, Žana Kapustina, Juozas Nainys
Abstract:
Screening throughput is a common bottleneck in many research areas, including functional genomics, drug discovery, and directed evolution. High-throughput screening techniques can be classified into two main categories: (i) affinity-based screening and (ii) functional screening. The first one relies on binding assays that provide information about the affinity of a test molecule for a target binding site. Binding assays are relatively easy to establish; however, they reveal no functional activity. In contrast, functional assays show an effect triggered by the interaction of a ligand at a target binding site. Functional assays might be based on a broad range of readouts, such as cell proliferation, reporter gene expression, downstream signaling, and other effects that are a consequence of ligand binding. Screening of large cell or gene libraries based on direct activity rather than binding affinity is now a preferred strategy in many areas of research as functional assays more closely resemble the context where entities of interest are anticipated to act. Droplet sorting is the basis of high-throughput functional biological screening, yet its applicability is limited due to the technical complexity of integrating high-performance droplet analysis and manipulation systems. As a solution, the Droplet Genomics Styx platform enables custom droplet sorting workflows, which are necessary for the development of early-stage or complex biological therapeutics or industrially important biocatalysts. The poster will focus on the technical design considerations of Styx in the context of its application spectra.
Keywords: functional screening, droplet microfluidics, droplet sorting, dielectrophoresis
Procedia PDF Downloads 135
2304 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, sensor errors, and actuator errors. Mission constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed regarding the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators impose on the outputs of HIL experiments. HIL tests are investigated for kinematic and dynamic testing, using robotic arms and floating robots, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
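The constrained MPC formulation described in this abstract can be illustrated with a minimal receding-horizon sketch. The model below (a 1-D double integrator as a stand-in for relative translational dynamics), the weights, horizon, and input bound are illustrative assumptions, not the authors' setup; box input constraints make the QP a bounded least-squares problem:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Illustrative 1-D double-integrator dynamics (position, velocity), dt = 1 s;
# a toy stand-in for in-plane relative motion, not the paper's model.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 10           # prediction horizon
u_max = 0.3      # assumed input (acceleration) bound

# Stacked prediction matrices: X = Phi x0 + Gamma U over the horizon.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[2 * i:2 * i + 2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()

sqrtQ = np.kron(np.eye(N), np.sqrt(np.diag([1.0, 1.0])))  # state weight
sqrtR = 0.1 * np.eye(N)                                   # input weight

def mpc_step(x0):
    """One receding-horizon step: box-constrained QP as bounded least squares."""
    C = np.vstack([sqrtQ @ Gamma, sqrtR])
    d = np.concatenate([-(sqrtQ @ Phi @ x0), np.zeros(N)])
    sol = lsq_linear(C, d, bounds=(-u_max, u_max))
    return sol.x[0]          # apply only the first input of the plan

x = np.array([5.0, 0.0])     # start 5 units away, at rest
for _ in range(40):
    u = mpc_step(x)
    x = A @ x + B.ravel() * u
```

The receding-horizon loop re-solves the bounded problem at every step and applies only the first input, which is the defining feature of MPC as opposed to one-shot trajectory optimization.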
Procedia PDF Downloads 110
2303 Assessment of Student Attitudes to Higher Education Service Measures: The Development of a Framework for Private Higher Education Institutions in Malaysia
Authors: Farrah Anne Robert, Robert McClelland, Seng Kiat Kok
Abstract:
Higher education service quality is widely regarded as a key factor in the long-term success of a higher education institution in attracting and retaining students. This research attempted to establish the impact of service quality on recruiting and retaining students in private higher education institutions (PHEIs). 501 local and international students responded to a 49-item educational service measure questionnaire from PHEIs in Kuala Lumpur and Selangor, two states in Malaysia which together account for 60% of private colleges in the country. Results from this research revealed that, inter alia, facilities, employability, management and administration services, academic staff competence, curriculum and overall student experience were key driving factors in attracting and retaining students. Lack of “campus-like building” facilities and lecturers' effectiveness in delivering lectures were key concerns in the provision of service quality by PHEIs in Malaysia. Over the last decade, the Government of Malaysia has set a target of 200,000 international students studying in Malaysia at PHEIs, a target that PHEIs have failed to achieve. This research suggests that the service quality issues identified above are hampering efforts to recruit and retain both local and international students. The researcher recommends that further and more detailed research be carried out on these factors and their impact on recruitment and retention. PHEI administrators can benefit from this research by conducting an evaluation of the service measures delivered in their institutions and taking corrective measures. Prospective students can benefit from this study by including the “service quality delivery” of PHEIs among their choice factors when deciding to enroll in a particular PHEI.
Keywords: higher education, recruitment, retention, service quality
Procedia PDF Downloads 378
2302 Optimization for Guide RNA and CRISPR/Cas9 System Nanoparticle Mediated Delivery into Plant Cell for Genome Editing
Authors: Andrey V. Khromov, Antonida V. Makhotenko, Ekaterina A. Snigir, Svetlana S. Makarova, Natalia O. Kalinina, Valentin V. Makarov, Mikhail E. Taliansky
Abstract:
Due to its simplicity, CRISPR/Cas9 has become widely used and is capable of inducing mutations in the genes of organisms of various kingdoms. The aim of this work was to develop applications for the efficient modification of the DNA coding sequences of the phytoene desaturase (PDS), coilin and vacuolar invertase genes of Solanum tuberosum, and to develop a new, efficient nanoparticle-carrier technology to deliver the CRISPR/Cas9 system for editing the plant genome. For each of the genes (coilin, PDS and vacuolar invertase), five single guide RNAs (sgRNAs) were synthesized. To determine the most suitable nanoplatform, two types of NP platforms were used: magnetic NPs (MNPs) and gold NPs (AuNPs). To test penetration efficiency, they were functionalized with fluorescent agents (FITC-labeled BSA and GFP) as well as Cy3-labeled small RNA. To measure the efficiency, fluorescence and confocal microscopy were used. It was shown that the best of these options were AuNPs, both in the case of proteins and in the case of RNA. The next step was to check the possibility of delivering components of the CRISPR/Cas9 system to plant cells for editing target genes. AuNPs were functionalized with a ribonucleoprotein complex consisting of Cas9 and sgRNAs corresponding to the target genes, and delivered biolistically into axillary buds and apical meristems of potato plants. After treatment with the best NP carrier, potato meristems were grown to adult plants. DNA isolated from these plants was subjected to preliminary fragment analysis to screen out non-transformed samples, and then to NGS. The present work was carried out with financial support from the Russian Science Foundation (grant No. 16-16-04019).
Keywords: biobombardment, coilin, CRISPR/Cas9, nanoparticles, NPs, PDS, sgRNA, vacuolar invertase
Procedia PDF Downloads 316
2301 Evaluation of a Chitin Synthesis Inhibitor Novaluron in the Shrimp Palaemon Adspersus: Impact on Ecdysteroids and Chitin Contents
Authors: Hinda Berghiche, Hamida Benradia, Noureddine Soltani
Abstract:
Pesticides are widely used in crop production and are known to cause major contamination of ecosystems, especially aquatic environments. The leaching of large amounts of pollutants derived from agricultural activities (fertilizers, pesticides) can contaminate rivers that discharge into lakes and estuarine and coastal environments, affecting organisms such as crustacean species. In this context, there is a search for new, selective insecticides with minimal toxic effects on the environment and human health, such as insect growth regulators (IGRs). The current study aimed to examine the impact of novaluron (CE 20%), a benzoylphenylurea-derivative insecticide potent against mosquito larvae, on the non-target shrimp Palaemon adspersus (Decapoda, Palaemonidae). The compound was tested at two concentrations (0.91 mg/L and 4.30 mg/L), corresponding respectively to the LC50 and LC90 determined against fourth-instar larvae of Culiseta longiareolata (Diptera, Culicidae). The molting hormone titer was determined in the haemolymph by an enzyme immunoassay, while chitin was measured in the peripheral integument at different stages of the molting cycle. Under normal conditions, the haemolymphatic ecdysteroid concentrations increased during the molting cycle to reach a peak at stage D. In the treated series, we note the absence of this peak at stage D and an increase at stages B, C and D as compared to the controls. Concerning the chitin amounts, we observe an increase from stage A to stage C followed by a decrease at stage D. Exposure of shrimps to novaluron resulted in a significant decrease of values at all molting stages, with a dose-response effect. Thus, the insecticide can exert secondary effects on this non-target arthropod species.
Keywords: toxicology, novaluron, crustacean, Palaemon adspersus, ecdysteroids, cuticle, chitin
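LC50 and LC90 values like those quoted above are typically estimated by fitting a concentration-mortality model to bioassay data. A minimal sketch, using a two-parameter log-logistic model and entirely made-up concentration-mortality numbers (not the study's raw data):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, lc50, slope):
    """Mortality fraction vs. concentration, two-parameter log-logistic model."""
    return 1.0 / (1.0 + (lc50 / c) ** slope)

# Illustrative concentration (mg/L) vs. mortality fraction data; assumed values.
conc = np.array([0.1, 0.3, 0.9, 2.7, 8.1])
mort = np.array([0.05, 0.20, 0.50, 0.80, 0.95])

(lc50, slope), _ = curve_fit(log_logistic, conc, mort, p0=[1.0, 1.0])
# LC90 follows analytically: mortality = 0.9  =>  c = LC50 * 9**(1/slope)
lc90 = lc50 * 9.0 ** (1.0 / slope)
```

Probit regression is the other common choice for such bioassays; the log-logistic form is used here only because it keeps the LC90 expression closed-form.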
Procedia PDF Downloads 249
2300 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology. This system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases flight crew traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent in the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the “Threats database” and the “Conflict detector”. The first receives traffic data from the ADS-B device and stores each target's data history.
The conflict detector module estimates ownship and target trajectories in order to detect possible future loss of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This provides greater accuracy and avoids step-by-step propagation, which requires a larger computational load. Furthermore, the pre-check permits excluding targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load for the following modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the installation of the TSAA application also on devices running multiple applications and/or with low capacity in terms of available memory and computational capabilities.
Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
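The analytical geometric pre-check described above can be sketched as a closest-point-of-approach (CPA) screen: under a straight-line (constant-velocity) assumption, the minimum separation within the look-ahead horizon has a closed form, so targets that can never get close are excluded without step-by-step propagation. The thresholds, units, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cpa_prefilter(own_pos, own_vel, tgt_pos, tgt_vel,
                  sep_min=0.5, horizon=120.0):
    """Analytic closest-point-of-approach screen for straight-line trajectories.

    Returns False when the target can be excluded without propagation:
    its minimum separation from ownship within the look-ahead horizon
    exceeds sep_min (units are illustrative, e.g. NM and seconds).
    """
    dp = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)
    dv = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)
    dv2 = dv @ dv
    # Time of closest approach of relative motion, clamped to [0, horizon].
    t_cpa = 0.0 if dv2 == 0.0 else float(np.clip(-(dp @ dv) / dv2, 0.0, horizon))
    d_min = np.linalg.norm(dp + t_cpa * dv)
    return bool(d_min <= sep_min)   # True: keep for full conflict detection

# A head-on target is retained; a parallel offset target is screened out.
keep = cpa_prefilter([0, 0], [1, 0], [10, 0], [-1, 0])
drop = cpa_prefilter([0, 0], [1, 0], [10, 5], [1, 0])
```

Because the screen only ever discards targets whose analytic minimum separation is safely large, it cannot change which alerts the downstream conflict detector raises, matching the abstract's claim that the pre-check leaves the alert count unmodified.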
Procedia PDF Downloads 285
2299 Dual Electrochemical Immunosensor for IL-13Rα2 and E-Cadherin Determination in Cell, Serum and Tissues from Cancer Patients
Authors: Amira ben Hassine, A. Valverde, V. Serafín, C. Muñoz-San Martín, M. Garranzo-Asensio, M. Gamella, R. Barderas, M. Pedrero, N. Raouafi, S. Campuzano, P. Yáñez-Sedeño, J. M. Pingarrón
Abstract:
This work describes the development of a dual electrochemical immunosensing platform for the accurate determination of two target proteins, IL-13 Receptor α2 (IL-13Rα2) and E-cadherin (E-cad). The proposed methodology is based on sandwich immunosensing approaches (involving horseradish peroxidase-labeled detector antibodies) implemented onto magnetic microbeads (MBs) and amperometric transduction at screen-printed dual carbon electrodes (SPdCEs). The magnetic bioconjugates were captured onto SPdCEs, and amperometric transduction was performed using the H2O2/hydroquinone (HQ) system. Under optimal experimental conditions, the developed bioplatform demonstrates linear concentration ranges of 1.0–25 and 5.0–100 ng mL-1 and detection limits of 0.28 and 1.04 ng mL-1 for E-cad and IL-13Rα2, respectively, with excellent selectivity against other non-target proteins. The developed immunoplatform also offers good reproducibility among the amperometric responses provided by nine different sensors constructed in the same manner (relative standard deviation values of 3.1% for E-cad and 4.3% for IL-13Rα2). Moreover, the obtained results confirm the practical applicability of this bioplatform for the accurate determination of the endogenous levels of both extracellular receptors in colon cancer cells (both intact and lysed) with different metastatic potential, and in serum and tissues from patients diagnosed with colorectal cancer at different grades. Interesting features in terms of simplicity, speed, portability and the sample amount required to provide quantitative results make this immunoplatform more compatible than conventional methodologies with clinical diagnosis and prognosis at the point of care.
Keywords: electrochemistry, immunosensors, biosensors, E-cadherin, IL-13 receptor α2, colorectal cancer
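The figures of merit quoted above (linear range slope, detection limit, inter-sensor RSD) are conventionally derived from a linear calibration and blank replicates. A sketch of the standard computations, on entirely illustrative numbers (not the paper's data), assuming the common LOD = 3·sd(blank)/slope convention:

```python
import numpy as np

# Illustrative calibration in the linear range (ng/mL vs. current, arbitrary
# units); made-up numbers, not the paper's measurements.
conc = np.array([1.0, 5.0, 10.0, 15.0, 20.0, 25.0])
current = np.array([0.42, 1.95, 3.88, 5.79, 7.61, 9.55])

slope, intercept = np.polyfit(conc, current, 1)

# Detection limit from 3 x standard deviation of blank replicates over slope.
blank = np.array([0.030, 0.034, 0.028, 0.033, 0.031])
lod = 3.0 * blank.std(ddof=1) / slope

# Reproducibility: RSD of responses from nine identically prepared sensors.
nine_sensors = np.array([3.80, 3.95, 3.88, 3.70, 3.92, 3.85, 3.78, 3.99, 3.83])
rsd = 100.0 * nine_sensors.std(ddof=1) / nine_sensors.mean()
```

The 3·sd/slope convention is one of several in use (3.3·sd/slope and signal-to-noise criteria are also common); the abstract does not state which the authors applied.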
Procedia PDF Downloads 137
2298 The Re-Emergence of Russia Foreign Policy (Case Study: Middle East)
Authors: Maryam Azish
Abstract:
Russia, as an emerging global player in recent years, has secured a special place in the Middle East. Despite all the challenges it has faced over the years, it has always considered its presence in various fields, with a strategy that has defined its maneuvering power as a level of competition and even confrontation with the United States. Its current approach is therefore considered important, as it is an influential actor in the Middle East. After the collapse of the Soviet Union, when the Russians withdrew completely from the Middle East, the regional scene remained almost unrivaled for the Americans. With the start of the US-led wars in Iraq and Afghanistan and the subsequent developments that led to US military and political defeat, a new chapter in regional security was created, in which ISIL and Taliban terrorism went along with the Arab Spring to destabilize the Middle East. Because of this, the Americans took every opportunity to strengthen their military presence. Iraq, Syria and Afghanistan have been the three areas where terrorism took shape, and the countries of the region have each reacted to this evil phenomenon accordingly. The West dealt with this phenomenon on a case-by-case basis, in the general circumstances that created the fluid situation in the Arab countries and the region. Russian President Vladimir Putin accused the US of falling asleep in the face of ISIS and terrorism in Syria. In fact, this was an opportunity for the Russians to revive their presence in Syria. This article suggests that utilizing the politics of recognition along with constructivist theory will offer a better understanding of Russia's endeavors to endorse its international position. Accordingly, Russia's distinctiveness and its ambition for great-power status have played a vital role in shaping its national interests and, subsequently, its foreign policy, in the Putin era in particular.
The focal claim of the paper is that Russia's foreign policy cannot be adequately scrutinized with realist methods. Consequently, with the aim of filling the prevailing vacuum, this study exploits the politics of recognition in the context of constructivism to examine Russia's foreign policy in the Middle East. The results of this paper show that the key aim of Russian foreign policy discourse, accompanied by increasing power and wealth, is to gain recognition of, and reinstate, its position as a great power in the global system. The Syrian crisis has created an opportunity for Russia to consolidate its position in the developing global and regional order, resume an active and pervasive presence in the Middle East, and counter US unilateralism. In the meantime, the writer argues that the question of the West's recognition of Russia's position in the global system has played a foremost role in serving its national interests.
Keywords: constructivism, foreign policy, Middle East, Russia, regionalism
Procedia PDF Downloads 149
2297 Dosimetric Comparison of Conventional Optimization Methods with Inverse Planning Simulated Annealing Technique
Authors: Shraddha Srivastava, N. K. Painuly, S. P. Mishra, Navin Singh, Muhsin Punchankandy, Kirti Srivastava, M. L. B. Bhatt
Abstract:
Various optimization methods used in interstitial brachytherapy are based on the alteration of dwell positions and dwell weights to produce a dose distribution based on the implant geometry. Since these optimization schemes are not anatomy-based, they can lead to deviations from the desired plan. This study was therefore carried out to compare the anatomy-based Inverse Planning Simulated Annealing (IPSA) optimization technique with graphical and geometrical optimization methods in interstitial high-dose-rate brachytherapy planning of cervical carcinoma. Six patients with 12 CT data sets of MUPIT implants in HDR brachytherapy of cervical cancer were prospectively studied. HR-CTV and organs at risk (OARs) were contoured in the Oncentra treatment planning system (TPS) using the GYN GEC-ESTRO guidelines on cervical carcinoma. Three sets of plans were generated for each fraction using IPSA, graphical optimization (GrOPT) and geometrical optimization (GOPT) methods. All patients were treated to a dose of 20 Gy in 2 fractions. The main objective was to cover at least 95% of HR-CTV with 100% of the prescribed dose (V100 ≥ 95% of HR-CTV). IPSA, GrOPT, and GOPT based plans were compared in terms of target coverage, OAR doses, homogeneity index (HI) and conformity index (COIN) using dose-volume histograms (DVHs). Target volume coverage (mean V100) was found to be 93.98 ± 0.87%, 91.34 ± 1.02% and 85.05 ± 2.84% for IPSA, GrOPT and GOPT plans, respectively. Mean D90 (minimum dose received by 90% of the HR-CTV) values for IPSA, GrOPT and GOPT plans were 10.19 ± 1.07 Gy, 10.17 ± 0.12 Gy and 7.99 ± 1.0 Gy, respectively, while D100 (minimum dose received by 100% of the HR-CTV volume) for IPSA, GrOPT and GOPT plans was 6.55 ± 0.85 Gy, 6.55 ± 0.65 Gy and 4.73 ± 0.14 Gy, respectively. IPSA plans resulted in lower doses to the bladder (D₂
Keywords: cervical cancer, HDR brachytherapy, IPSA, MUPIT
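The DVH indices compared above (V100, D90, D100) are standard quantities computed from the dose distribution over the target volume. A minimal sketch of their definitions, on a synthetic per-voxel dose sample (illustrative numbers, not the study's plans):

```python
import numpy as np

def dvh_metrics(doses, prescription):
    """Standard DVH indices from a per-voxel dose sample of the target volume.

    V100: percentage of the volume receiving at least the prescription dose.
    Dx:   minimum dose received by the best-covered x% of the volume,
          i.e. the (100 - x)th percentile of the dose distribution.
    """
    doses = np.asarray(doses, float)
    v100 = 100.0 * np.mean(doses >= prescription)
    d90 = np.percentile(doses, 10.0)   # 90% of voxels receive at least this
    d100 = doses.min()                 # minimum dose anywhere in the target
    return v100, d90, d100

# Illustrative target-voxel doses (Gy) for an assumed 10 Gy/fraction prescription.
rng = np.random.default_rng(1)
doses = rng.normal(11.0, 1.0, size=10000)
v100, d90, d100 = dvh_metrics(doses, prescription=10.0)
```

Real TPS calculations weight voxels by volume and use the planned dose grid; this sketch only illustrates how the reported indices relate to the underlying dose distribution.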
Procedia PDF Downloads 187
2296 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria
Authors: Rinat Arbel-Goren, Joel Stavans
Abstract:
Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. As a model system, we study the iron homeostasis network in Escherichia coli, in which the RyhB small RNA regulates the expression of various targets. Using fluorescence reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise, through a feed-forward loop configuration, buffers the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions by the decoupling of target fluctuations in the same cell rather than by an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation, in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.
Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript
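The split between intrinsic and extrinsic noise that this abstract turns on is commonly quantified with the dual-reporter decomposition of Elowitz et al. (2002): the uncorrelated part of two co-regulated readouts is intrinsic, the correlated part extrinsic. A sketch on simulated per-cell counts (the shared lognormal factor and Poisson counts are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated per-cell counts of two reporters sharing an extrinsic factor
# (e.g., a common upstream regulator); purely illustrative parameters.
extrinsic = rng.lognormal(mean=0.0, sigma=0.3, size=n)
g1 = rng.poisson(50.0 * extrinsic)   # reporter 1, intrinsic Poisson noise
g2 = rng.poisson(50.0 * extrinsic)   # reporter 2, same extrinsic factor

m1, m2 = g1.mean(), g2.mean()
# Dual-reporter decomposition: uncorrelated vs. correlated variability.
eta_int2 = ((g1 - g2) ** 2).mean() / (2.0 * m1 * m2)   # intrinsic noise^2
eta_ext2 = (np.mean(g1 * g2) - m1 * m2) / (m1 * m2)    # extrinsic noise^2
```

In this construction the correlated (extrinsic) component dominates by design; the abstract's observation is the more surprising converse, that extrinsic noise can decouple two targets sharing an upstream regulator.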
Procedia PDF Downloads 163
2295 Development of an Electrochemical Aptasensor for the Detection of Human Osteopontin Protein
Authors: Sofia G. Meirinho, Luis G. Dias, António M. Peres, Lígia R. Rodrigues
Abstract:
The emerging development of electrochemical aptasensors has enabled the easy and fast detection of protein biomarkers in standard and real samples. Biomarkers are produced by body organs or tumours and provide a measure of antigens on cell surfaces. When detected in high amounts in blood, they can be suggestive of tumour activity. These biomarkers are more often used to evaluate treatment effects or to assess the potential for metastatic disease in patients with established disease. Osteopontin (OPN) is a protein found in all body fluids and constitutes a possible biomarker because its overexpression has been related to breast cancer evolution and metastasis. Currently, biomarkers are commonly used for the development of diagnostic methods, allowing the detection of the disease in its initial stages. A previously described RNA aptamer was used in the current work to develop a simple and sensitive electrochemical aptasensor with high affinity for human OPN. The RNA aptamer was biotinylated and immobilized on a gold electrode by avidin-biotin interaction. The electrochemical signal generated from the aptamer–target molecule interaction was monitored using cyclic voltammetry in the presence of [Fe(CN)6]3−/4− as a redox probe. The observed signal showed a current decrease due to the binding of OPN. The preliminary results showed that this aptasensor enables the detection of OPN in standard solutions, showing good selectivity towards the target in the presence of other interfering proteins such as bovine OPN and bovine serum albumin. The results gathered in the current work suggest that the proposed electrochemical aptasensor is a simple and sensitive detection tool for human OPN and so may have future applications in cancer disease monitoring.
Keywords: osteopontin, aptamer, aptasensor, screen-printed electrode, cyclic voltammetry
Procedia PDF Downloads 431
2294 The Ability of Consortium Wastewater Protozoan and Bacterial Species to Remove Chemical Oxygen Demand in the Presence of Nanomaterials under Varying pH Conditions
Authors: Anza-Vhudziki Mboyi, Ilunga Kamika, Maggy Momba
Abstract:
The aim of this study was to ascertain the survival limits and capability of commonly found wastewater protozoan (Aspidisca sp., Trachelophyllum sp., and Peranema sp.) and bacterial (Bacillus licheniformis, Brevibacillus laterosporus, and Pseudomonas putida) species to remove COD while exposed to commercial nanomaterials under varying pH conditions. The experimental study was carried out in modified mixed liquor media adjusted to various pH levels (pH 2, 7 and 10), and a comparative study was performed to determine the difference between the cytotoxic effects of commercial zinc oxide (nZnO) and silver (nAg) nanomaterials (NMs) on the target wastewater microbial communities using standard methods. The selected microbial communities were exposed to lethal concentrations ranging from 0.015 g/L to 40 g/L for nZnO and from 0.015 g/L to 2 g/L for nAg for a period of 5 days of incubation at 30°C (100 r/min). Relevant environmental concentrations, between 10 µg/L and 100 µg/L for both nZnO and nAg, caused no adverse effects compared with the absence of NMs in the wastewater mixed liquor, but the presence of 20 g of nZnO/L and 0.65 g of nAg/L significantly inhibited microbial growth. Statistical evidence showed that nAg was significantly more toxic than nZnO, but there was no significant difference in toxicity across microbial communities and pH variations. A significant decrease in the removal of COD by microbial populations was observed in the presence of NMs, with a moderate correlation of r = 0.3 to r = 0.7 at all pH levels. It was evident that there was a physical interaction between commercial NMs and the target wastewater microbial communities; although not quantitatively assessed, changes in cell morphology and cell death were observed.
Such phenomena suggest the high resilience of the microbial community, but it is the accumulation of NMs that will have adverse effects on performance in terms of COD removal.
Keywords: bacteria, biological treatment, chemical oxygen demand (COD), nanomaterials, consortium, pH, protozoan
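The two quantities this abstract reports, COD removal efficiency and its correlation with NM exposure, have simple standard definitions. A sketch on illustrative influent/effluent values (assumed numbers, not the study's measurements):

```python
import numpy as np

def cod_removal_pct(cod_in, cod_out):
    """Percentage COD removal: (influent - effluent) / influent * 100."""
    return 100.0 * (cod_in - cod_out) / cod_in

# Illustrative paired measurements at increasing NM dose (mg/L); assumed data.
nm_dose = np.array([0.0, 10.0, 100.0, 650.0, 2000.0])
cod_in  = np.array([500.0, 500.0, 500.0, 500.0, 500.0])
cod_out = np.array([60.0, 65.0, 70.0, 210.0, 380.0])

removal = cod_removal_pct(cod_in, cod_out)
# Pearson correlation between NM dose and removal efficiency.
r = np.corrcoef(nm_dose, removal)[0, 1]
```

With such a monotone dose-response pattern the correlation is strongly negative; the study's reported r = 0.3 to 0.7 magnitudes indicate a weaker, pH-dependent relationship.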
Procedia PDF Downloads 309
2293 A Systematic Categorization of Arguments against the Vision Zero Goal: A Literature Review
Authors: Henok Girma Abebe
Abstract:
Vision Zero is a long-term goal of preventing all road traffic fatalities and serious injuries, first adopted in Sweden in 1997. It is based on the assumption that death and serious injury in the road system are morally unacceptable. In order to approach this end, Vision Zero has put in place strategies that are radically different from traditional safety work. Vision Zero, for instance, promoted the adoption of the best available technology to promote safety, and placed the ultimate responsibility for traffic safety on system designers. Despite Vision Zero's moral appeal and its expansion to different safety areas and to other parts of the world, important philosophical concerns related to its adoption and implementation remain to be addressed. Moreover, the Vision Zero goal has been criticized on different grounds. The aim of this paper is to identify and systematically categorize the criticisms that have been put forward against Vision Zero. The findings of the paper are based solely on a critical analysis of secondary sources, and a snowball method is employed to identify the relevant philosophical and empirical literature. Two general categories of criticisms of the Vision Zero goal are identified. The first category consists of criticisms that target the setting of Vision Zero as a 'goal' and some of the basic assumptions upon which the goal is based. Among others, the goal of achieving zero fatalities and serious injuries, together with Vision Zero's lexicographic prioritization of safety, has been criticized as unrealistic. The second category consists of criticisms that target the strategies put in place to achieve the goal of zero fatalities and serious injuries. For instance, Vision Zero's ascription of responsibility for road safety and its rejection of cost-benefit analysis in the formulation and adoption of safety measures have both been criticized as counterproductive.
In this category also falls the criticism that Vision Zero safety measures tend to be too paternalistic. Significant improvements have been recorded in road safety work since the adoption of Vision Zero; however, for Vision Zero to succeed further, it is important that the issues and criticisms of a philosophical nature associated with it are identified and critically dealt with.
Keywords: criticisms, systems approach, traffic safety, vision zero
Procedia PDF Downloads 301
2292 Calculation of Secondary Neutron Dose Equivalent in Proton Therapy of Thyroid Gland Using FLUKA Code
Authors: M. R. Akbari, M. Sadeghi, R. Faghihi, M. A. Mosleh-Shirazi, A. R. Khorrami-Moghadam
Abstract:
Proton radiotherapy (PRT) is becoming an established treatment modality for cancer. Localized tumors, such as undifferentiated thyroid tumors, are insufficiently handled by conventional radiotherapy, while protons offer the prospect of increasing the tumor dose without exceeding the tolerance of the surrounding healthy tissues. Despite the considerable advantage of delivering a localized radiation dose to the tumor region, secondary neutron production in proton therapy can contribute significantly to the integral dose and lessen the advantages of this modality in contrast to conventional radiotherapy techniques. Furthermore, neutrons have a high quality factor; therefore, even a small physical dose can cause considerable biological effects. Measuring this neutron dose is a critical step in predicting secondary cancer incidence. FLUKA Monte Carlo simulations have previously been used to evaluate the dose due to secondaries in proton therapy. In this study, the simulated proton beam range in a water phantom was first validated against CSDA ranges from NIST for the studied proton energy range (34–54 MeV); proton therapy of thyroid gland cancer was then simulated using the FLUKA code. The secondary neutron dose equivalent of several organs and tissues beyond the target volume, caused by 34 and 54 MeV proton interactions, was calculated in order to evaluate secondary cancer incidence. A multilayer cylindrical neck phantom considering all the layers of neck tissues, with a proton beam impinging normally on the phantom, was also simulated. The trachea (together with the larynx) had the greatest dose equivalent (1.24×10⁻¹ and 1.45 pSv per primary proton at 34 and 54 MeV, respectively) among the simulated tissues beyond the target volume in the neck region.
Keywords: FLUKA code, neutron dose equivalent, proton therapy, thyroid gland
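The range-validation step mentioned above compares simulated ranges against NIST CSDA values. An order-of-magnitude cross-check is possible with the Bragg-Kleeman power-law approximation R ≈ αE^p; the α and p values below are commonly quoted textbook figures for protons in water (assumed here, and no substitute for the FLUKA/NIST comparison itself):

```python
# Bragg-Kleeman approximation for proton CSDA range in water.
# alpha and p are commonly quoted textbook values for this sanity check.
ALPHA = 0.0022   # cm / MeV**p
P = 1.77

def csda_range_cm(energy_mev):
    """Approximate CSDA range (cm) of a proton of the given energy in water."""
    return ALPHA * energy_mev ** P

# Ranges at the study's two beam energies.
ranges = {e: csda_range_cm(e) for e in (34, 54)}
```

For the 34-54 MeV window studied, this power law predicts ranges of roughly 1.1 to 2.6 cm, consistent with the shallow depths relevant to a neck target.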
Procedia PDF Downloads 425
2291 Students’ Speech Anxiety in Blended Learning
Authors: Mary Jane B. Suarez
Abstract:
Public speaking anxiety (PSA), also known as speech anxiety, is persistently common in traditional communication classes, especially for students who learn English as a second language. This speech anxiety intensified when communication skills assessments moved to an online or remote mode of learning due to the perils of the COVID-19 virus. Both teachers and students experienced vast ambiguity about how to find a still-effective way to teach and learn speaking skills amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting took on new forms through Google Meet, Zoom, and other online platforms. Though such technologies have paved the way for more creative means for students to acquire and develop communication skills, the effectiveness of these assessment tools stands in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class; to probe the assessment gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning; and to recommend ways to address the public speaking anxieties of students performing a speaking task online and to bridge the assessment gaps identified by the study, in order to achieve a smooth segue from online to on-ground instruction maneuvering towards a much better post-pandemic academic milieu. Using a convergent parallel design, both quantitative and qualitative data were reconciled by probing the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. There were four phases in applying the convergent parallel design.
The first phase was data collection, where both quantitative and qualitative data were gathered using document reviews and focus group discussions. The second phase was data analysis: quantitative data were treated statistically, particularly with frequency, percentage, and mean, using Microsoft Excel and IBM SPSS (Statistical Package for the Social Sciences) version 19, while qualitative data were examined using thematic analysis. The third phase merged the analysis results to draw comparisons between the desired learning competencies and the actual learning competencies of students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing evidence that students' public speaking anxiety was not properly identified and processed.
Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety
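The quantitative treatment named above (frequency, percentage, and mean) can be sketched in a few lines; the Likert-style anxiety ratings below are invented placeholder data, not the study's responses.

```python
# Minimal sketch of the descriptive statistics used in the second phase:
# frequency, percentage, and mean of (hypothetical) 1-5 anxiety ratings.
from collections import Counter

scores = [4, 5, 3, 4, 5, 2, 4, 5, 5, 3]  # placeholder survey responses

freq = Counter(scores)
n = len(scores)
percentages = {score: 100 * count / n for score, count in freq.items()}
mean_score = sum(scores) / n

print("frequency:", dict(sorted(freq.items())))
print("percentage:", {k: f"{v:.1f}%" for k, v in sorted(percentages.items())})
print(f"mean: {mean_score:.2f}")
```

The same three summaries are what Excel or SPSS would report for each questionnaire item.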
Procedia PDF Downloads 102
2290 A Corpus-Based Study on the Styles of Three Translators
Authors: Wang Yunhong
Abstract:
The present paper examines the different styles of three translators in translating the Chinese classical novel Shuihu Zhuan. Based on a parallel corpus, it adopts a target-oriented approach to investigate whether, and what, stylistic differences and shifts the three translations reveal. The findings show that the three translators demonstrate different styles in their word choices and sentence preferences, which implies that identifying recurrent textual patterns may be a basic step in investigating the style of a translator.
Keywords: corpus, lexical choices, sentence characteristics, style
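Two of the simplest corpus statistics behind "word choices and sentence preferences" are type-token ratio and mean sentence length; the sketch below computes both, with two invented mini-"translations" standing in for the parallel corpus.

```python
# Hedged sketch of basic translator-style statistics: type-token ratio
# (lexical variety) and mean sentence length (sentence preference).
# The two sample texts are invented placeholders, not the studied corpus.
import re

def style_profile(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "mean_sentence_length": len(tokens) / len(sentences),
    }

t1 = "The outlaws fled to the marsh. They fled quickly. The marsh hid them."
t2 = "Fleeing swiftly, the outlaws sought refuge within the concealing marsh."

print("translator A:", style_profile(t1))
print("translator B:", style_profile(t2))
```

Scaled up to a full parallel corpus, profiles like these are one way to make "recurrent textual patterns" measurable per translator.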
Procedia PDF Downloads 268
2289 Effect of Oxygen Ion Irradiation on the Structural, Spectral and Optical Properties of L-Arginine Acetate Single Crystals
Authors: N. Renuka, R. Ramesh Babu, N. Vijayan
Abstract:
Ion beams play a significant role in tuning the properties of materials. Based on their radiation behavior, engineering materials fall into two categories: the first comprises organic solids, which are sensitive to the energy deposited in their electronic system, and the second comprises metals, which are insensitive to it. However, exposure to swift heavy ions alters this general behavior. Depending on its mass, kinetic energy and nuclear charge, an ion can produce modifications within a thin surface layer or penetrate deeply to produce a long, narrow distorted region along its path. When a high-energy ion beam impinges on a material, the Coulombic interaction between the target atoms and the energetic ions causes two different types of change in the material: (i) inelastic collisions of the energetic ion with the atomic electrons of the material; and (ii) elastic scattering from the nuclei of the atoms of the material, which is chiefly responsible for displacing atoms from their lattice positions. After heavy-ion exposure, the material relaxes back toward equilibrium, during which it undergoes surface and bulk modifications that depend on the mass of the projectile ion, the physical properties of the target material, the ion energy, and the beam dimension. It is well established that electronic stopping power plays a major role in the defect creation mechanism, provided it exceeds a threshold that strongly depends on the nature of the target material. Reports are available on heavy ion irradiation, especially of crystalline materials, to tune their physical and chemical properties.
L-arginine acetate (LAA) is a potential semi-organic nonlinear optical crystal whose optical, mechanical and thermal properties have already been reported. The main objective of the present work is to enhance or tune the structural and optical properties of LAA single crystals by heavy ion irradiation. In the present study, the potential nonlinear optical single crystal LAA was grown by the slow evaporation solution growth technique. The grown LAA single crystal was irradiated with oxygen ions at doses of 600 krad and 1 Mrad in order to tune its structural and optical properties. The structural properties of pristine and oxygen-ion-irradiated LAA single crystals were studied using powder X-ray diffraction and Fourier transform infrared spectral studies, which reveal the structural changes generated by irradiation. The optical behavior of pristine and irradiated crystals was studied by UV-Vis-NIR and photoluminescence analyses. From this investigation we conclude that oxygen ion irradiation modifies the structural and optical properties of LAA single crystals.
Keywords: heavy ion irradiation, NLO single crystal, photoluminescence, X-ray diffractometer
Procedia PDF Downloads 254
2288 Practices of Waterwise Circular Economy in Water Protection: A Case Study on Pyhäjärvi, SW Finland
Authors: Jari Koskiaho, Teija Kirkkala, Jani Salminen, Sarianne Tikkanen, Sirkka Tattari
Abstract:
Here, phosphorus (P) loading to Lake Pyhäjärvi (SW Finland) was reviewed, load reduction targets were determined, and different waterwise circular-economy measures to reach the targets were evaluated. In addition to the P loading from the lake’s catchment, a significant amount of internal P loading occurs in the lake. There are no point-source emissions into the lake; thus, the most important source of external nutrient loading is agriculture. According to simulations made with the LLR model, the chemical state of the lake is at the border of the classes ‘Satisfactory’ and ‘Good’. The LLR simulations suggest that a reduction of some hundreds of kilograms in annual P loading would be needed to reach an unquestionably ‘Good’ state. Evaluation of the waterwise circular-economy measures suggested that they possess great potential for reaching the target P load reduction. If applied extensively and in a versatile, targeted manner in the catchment, their combined effect would achieve the target reduction. In terms of cost-effectiveness, the waterwise measures were ranked as follows: best: fishing; 2nd: recycling the vegetation of reed beds, wetlands and buffer zones; 3rd: recycling field drainage waters stored in wetlands and ponds for irrigation; 4th: controlled drainage and irrigation; and 5th: recycling the sediments of wetlands and ponds for soil enrichment. We also identified various waterwise nutrient recycling measures to decrease the P content of arable land; the cost-effectiveness of such measures may be very good. Solutions are needed for Finnish water protection in general, and particularly for regions like the Lake Pyhäjärvi catchment with intensive domestic animal production, where ‘P hotspots’ are a crucial issue.
Keywords: circular economy, lake protection, mitigation measures, phosphorus
Procedia PDF Downloads 106
2287 Understanding the Heart of the Matter: A Pedagogical Framework for Apprehending Successful Second Language Development
Authors: Cinthya Olivares Garita
Abstract:
Untangling language processing in second language development has been either a taken-for-granted, overlooked task for some English language teaching (ELT) instructors or a considerable feat for others. From the most traditional language instruction to the most communicative methodologies, how to assist L2 learners in processing language in the classroom has become a challenging matter in second language teaching. Amidst an ample array of methods, strategies, and techniques for teaching a target language, finding a suitable model to lead learners to process, interpret, and negotiate meaning in order to communicate in a second language has placed a great responsibility on language teachers; committed teachers are those who are aware of their role in equipping learners with the appropriate tools to communicate in the target language in a 21st-century society. Unfortunately, one might find some English language teachers convinced that their job is only to lecture students; others are advocates of textbook-based instruction that might hinder second language processing; and just a few might courageously struggle to facilitate second language learning effectively. Grounded in the most representative empirical studies on comprehensible input, processing instruction, and focus on form, this analysis aims to facilitate the understanding of how second language learners process and automatize input, and proposes a pedagogical framework for the successful development of a second language. In light of this, the paper is structured to tackle noticing and attention and structured input as the heart of processing instruction, comprehensible input as the missing link in second language learning, and form-meaning connections as opposed to traditional grammar approaches to language teaching.
The author finishes by suggesting a pedagogical framework involving noticing, attention, comprehensible input, and form (NACIF, from the acronym) to support ELT instructors, teachers, and scholars in the challenging task of facilitating the understanding of effective second language development.
Keywords: second language development, pedagogical framework, noticing, attention, comprehensible input, form
Procedia PDF Downloads 28
2286 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen
Abstract:
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; this task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative encoder-decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guide of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called logic supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
Keywords: natural language inference, explanation generation, variational auto-encoder, generative model
Procedia PDF Downloads 151
2285 Experimental and Theoretical Studies: Biochemical Properties of Honey on Type 2 Diabetes
Authors: Said Ghalem
Abstract:
Honey is primarily composed of the sugars glucose and fructose; depending on the honey, either fructose or glucose predominates. The higher the fructose concentration, the lower the glycemic index (GI). Accordingly, changes in the insulin response show a decrease in the amount of insulin secreted for honeys with increased fructose. Honey is also a compound that can reduce blood lipids. Several animal studies, which remain to be confirmed in humans, have shown that honey can have interesting effects when combined with other molecules: combined with metformin (a medicine taken by diabetics), it shows benefits against diabetes and preserves tissue; combined with ginger, it increases antioxidant activity and thus helps avoid neurologic and neuropathic complications. Molecular modeling techniques are widely used in chemistry, biology, and the pharmaceutical industry, and most currently existing drugs target enzymes. Inhibition of DPP-4 is an important approach in the treatment of type 2 diabetes. For the inhibition of DPP-4, we chose the following molecules, which are involved in the management of type 2 diabetes, and added them to honey: linagliptin (BI1356), sitagliptin (Januvia), vildagliptin, saxagliptin, alogliptin, and metformin (Glucophage). For this, we used the Molecular Operating Environment software. A Wistar rat study was initiated in our laboratory with a well-defined protocol, with sacrifice performed according to international standards and respect for the animal. This theoretical approach predicts the mode of interaction of a ligand with its target. Honey can have interesting effects when combined with other molecules: it preserves tissue, increases antioxidant activity, and thus helps avoid neurologic, neuropathic or macrovascular complications.
Examination of the organs of the Wistar rats, especially the kidneys, shows renal-function parameters that let us conclude that the damage caused by diabetes is only slightly perceptible compared with that observed without the addition of a honey with a high concentration of fructose.
Keywords: honey, molecular modeling, DPP4 enzyme, metformin
Procedia PDF Downloads 98
2284 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory
Authors: Liu Canqi, Zeng Junsheng
Abstract:
This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of the traffic field and traffic mass are presented in this work. When other vehicles are in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle collectively determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on it. At a physical level, this study examines the vehicle's car-following bottleneck, identifies the external factors that affect its driving, takes into account that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. From an economic standpoint, an energy-time conversion coefficient is created using the social average wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delay under the Wiedemann car-following model. Using the conversion model between traffic mass and deceleration delay, the deceleration delay measured in simulation is compared with the theoretical value calculated by the model. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical deceleration delay calculated by the model and the measured simulation value is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account when studying car-following behavior.
The socioeconomic relationship between a vehicle's deceleration delay and its traffic mass can be utilized to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.
Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay
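The energy-to-time conversion idea above can be sketched numerically. Everything here is an assumption for illustration: the vehicle mass, speeds, wage rate, fuel price, and the particular way the coefficient is built are placeholders, not the paper's calibrated model.

```python
# Illustrative sketch only (all parameter values and the coefficient
# construction are assumptions): kinetic energy lost in one deceleration
# event, priced via fuel cost and expressed on a wage-based time scale.
def kinetic_energy(mass_kg: float, v_mps: float) -> float:
    return 0.5 * mass_kg * v_mps ** 2

mass = 1500.0      # kg, assumed vehicle mass
v_initial = 16.7   # m/s (~60 km/h), assumed approach speed
v_final = 5.0      # m/s, assumed speed behind a slower leader

energy_lost = kinetic_energy(mass, v_initial) - kinetic_energy(mass, v_final)

# Assumed socioeconomic rates feeding the energy-time conversion coefficient.
wage_per_second = 25.0 / 3600   # currency units per second (average wage)
fuel_cost_per_joule = 2e-6      # currency units per joule of motor fuel

# Time-equivalent of the energy loss: energy cost divided by the value of time.
time_equivalent_s = energy_lost * fuel_cost_per_joule / wage_per_second

print(f"energy lost: {energy_lost:.0f} J")
print(f"time-equivalent cost of the deceleration: {time_equivalent_s:.1f} s")
```

The design point is only that both the delay itself and the wasted kinetic energy can be placed on one monetary (and hence time) scale, which is what makes a combined delay-plus-energy comparison possible.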
Procedia PDF Downloads 67
2283 Use of an Insecticidal-Iridovirus Kinase towards the Development of Aphid-Resistant Plants
Authors: Saranya Ganapathy, Megha N. Parajulee, Michael San Francisco, Hong Zhang
Abstract:
Insect pests are a serious threat to agricultural productivity. The use of chemical pesticides, the predominant control method thus far, has resulted in environmental damage, pest resurgence, and negative effects on non-target species. Genetically modified (GM) crops offer a promising alternative, and Bacillus thuringiensis endotoxin genes have played a major role in this respect. However, to overcome insect tolerance issues and to broaden the target range, it is critical to identify alternative insecticidal toxins working through novel mechanisms. Our research group has identified a kinase from Chilo iridescent virus (CIV; family Iridoviridae) that has insecticidal activity and designated it ISTK (Iridovirus Serine/Threonine Kinase). A 35 kDa truncated form of ISTK, designated iridoptin, was obtained during expression and purification of ISTK in the yeast system. This yeast-expressed CIV toxin induced 50% mortality in cotton aphids and 100% mortality in green peach aphids (GPA). Optimized viral genes (o-ISTK and o-IRI) were stably transformed into the model plant Arabidopsis. PCR analysis of genomic DNA confirmed the presence of the gene insert (oISTK/oIRI) in selected transgenic lines. Further screening identified the PCR-positive lines that expressed the respective toxins at the polypeptide level using Western blot analysis. The stable lines expressing either of these two toxins induced moderate to very high mortality in GPAs and significantly affected GPA development and fecundity. The aphicidal potential of these transgenic Arabidopsis lines will be presented.
Keywords: Chilo iridescent virus, insecticidal toxin, iridoviruses, plant-incorporated protectants, serine/threonine kinase
Procedia PDF Downloads 286
2282 Evaluation of Four Different DNA Targets in Polymerase Chain Reaction for Detection and Genotyping of Helicobacter pylori
Authors: Abu Salim Mustafa
Abstract:
Polymerase chain reaction (PCR) assays targeting genomic DNA segments have been established for the detection of Helicobacter pylori in clinical specimens. However, data from comparative evaluations of various targets for detecting H. pylori are limited. Furthermore, the frequencies of the vacA (s1 and s2) and cagA genotypes, which are suggested to be involved in the pathogenesis of H. pylori in other parts of the world, are not well studied in Kuwait. The aim of this study was to evaluate PCR assays for the detection and genotyping of H. pylori by targeting the amplification of DNA targets from four genomic segments. Genomic DNA was isolated from 72 clinical isolates of H. pylori and tested by PCR with four pairs of oligonucleotide primers, i.e. ECH-U/ECH-L, ET-5U/ET-5L, CagAF/CagAR and Vac1F/Vac1XR, which were expected to amplify targets of various sizes (471 bp, 230 bp, 183 bp and 176/203 bp, respectively) from the genomic DNA of H. pylori. The PCR-amplified DNA were analyzed by agarose gel electrophoresis. PCR products of the expected sizes were obtained with all primer pairs using genomic DNA isolated from H. pylori. DNA dilution experiments showed that the most sensitive PCR target was the 471 bp DNA amplified by the primers ECH-U/ECH-L, followed by the targets of Vac1F/Vac1XR (176/203 bp DNA), CagAF/CagAR (183 bp DNA) and ET-5U/ET-5L (230 bp DNA). However, when tested with undiluted genomic DNA isolated from single colonies of all isolates, the Vac1F/Vac1XR target provided the maximum positive results (71/72 (99%) positive), followed by ECH-U/ECH-L (69/72 (93%) positive), ET-5U/ET-5L (51/72 (71%) positive) and CagAF/CagAR (26/72 (46%) positive). The genotyping experiments showed that the vacA s1 (46% positive) and vacA s2 (54% positive) genotypes were almost equally associated with vacA+/cagA- isolates (P > 0.05), but among vacA+/cagA+ isolates, the s1 genotype (92% positive) was detected more frequently than the s2 genotype (8% positive) (P < 0.0001).
In conclusion, among the primer pairs tested, Vac1F/Vac1XR provided the best results for detection of H. pylori. The genotyping experiments showed that the vacA s1 and vacA s2 genotypes were almost equally associated with vacA+/cagA- isolates, but the vacA s1 genotype had a significantly stronger association with vacA+/cagA+ isolates.
Keywords: H. pylori, PCR, detection, genotyping
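A significance comparison of the kind reported above can be sketched with a Pearson chi-square test on a 2x2 table. The counts below are reconstructed approximately from the reported percentages (92%/8% of 26 cagA+ isolates; 46%/54% of the remaining vacA+ isolates) and are therefore assumptions, not the study's raw data.

```python
# Hedged sketch: Pearson chi-square (df=1) for a 2x2 genotype table using
# only the standard library. Counts are approximate reconstructions.
import math

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic and p-value for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival function, df=1
    return chi2, p

# Rows: cagA+ vs cagA-; columns: vacA s1 vs vacA s2 (approximate counts).
chi2, p = chi_square_2x2(24, 2, 21, 24)
print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
```

Even with these rough counts, the test reproduces the reported direction: the s1 genotype is strongly over-represented among cagA+ isolates.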
Procedia PDF Downloads 133
2281 Investigating the Flavin-Dependent Thymidylate Synthase (FDTS) Enzyme from Clostridioides Difficile (C. diff)
Authors: Sidra Shaw, Sarenna Shaw, Chae Joon Lee, Irimpan Mathews, Eric Koehn
Abstract:
One of the biggest public health concerns of our time is increasing antimicrobial resistance. As of 2019, the CDC had documented more than 2.8 million serious antibiotic-resistant infections in the United States. Currently, antibiotic-resistant infections are directly implicated in over 750,000 deaths per year globally. On our current trajectory, the British economist Jim O’Neill predicts that by 2050, an additional 10 million people (about half the population of New York) will die annually due to drug-resistant infections. As a result, new biochemical pathways must be targeted to generate next-generation antibiotic drugs that will be effective against drug-resistant bacteria. One enticing target is the biosynthesis of DNA within bacteria, as few drugs interrupt this essential life process. Thymidylate synthase enzymes are essential for life, as they catalyze the synthesis of a DNA building block, 2′-deoxythymidine-5′-monophosphate (dTMP). In humans, the thymidylate synthase enzyme (TSase) has been shown to be distinct from the flavin-dependent thymidylate synthase (FDTS) produced by many pathogenic bacteria. TSase and FDTS have distinct structures and mechanisms of catalysis, which should allow selective inhibition of FDTS over human TSase. Currently, C. diff is one of the most antibiotic-resistant bacteria, and no drugs that target thymine biosynthesis exist for C. diff. Here we present the initial biochemical characterization of FDTS from C. diff. Specifically, we examine the enzyme kinetics and binding features of this enzyme to determine the nature of its interaction with ligands/inhibitors and to understand the molecular mechanism of catalysis. This research will provide more insight into the targetability of the C. diff FDTS enzyme for novel antibiotic drugs.
Keywords: flavin-dependent thymidylate synthase, FDTS, clostridioides difficile, C. diff, antibiotic resistance, DNA synthesis, enzyme kinetics, binding features
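The "enzyme kinetics" named above is typically summarized by a Michaelis-Menten fit; the sketch below evaluates that model, with Vmax and Km as invented placeholders rather than measured parameters of the C. diff FDTS enzyme.

```python
# Illustrative Michaelis-Menten sketch of basic kinetic characterization.
# VMAX and KM are invented placeholders, not measured FDTS parameters.
def mm_rate(s: float, vmax: float, km: float) -> float:
    """Initial reaction velocity at substrate concentration s."""
    return vmax * s / (km + s)

VMAX = 1.0  # assumed, arbitrary velocity units
KM = 50.0   # assumed, micromolar

for s in [10, 50, 200, 1000]:
    print(f"[S] = {s:5} uM -> v = {mm_rate(s, VMAX, KM):.3f} (fraction of Vmax)")
```

Fitting measured (substrate, velocity) pairs to this curve yields the Km and Vmax that describe how tightly and how fast the enzyme turns over its substrate, which is the baseline against which inhibitor effects are judged.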
Procedia PDF Downloads 104
2280 Boundary Feedback Stabilization of an Overhead Crane Model
Authors: Abdelhadi Elharfi
Abstract:
A problem of boundary feedback (exponential) stabilization of an overhead crane model represented by a PDE is considered. For any $r>0$, exponential stability at the desired decay rate $r$ is achieved, in a semigroup setting, by a collocated-type stabilizer of a target system combined with a term involving the solution of an appropriate PDE.
Keywords: feedback stabilization, semigroup and generator, overhead crane system
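As a hedged illustration of the class of systems involved (the paper's exact crane model and feedback law may differ), a standard cable model with collocated boundary damping reads:

```latex
% Illustrative model only; the paper's exact system and feedback may differ.
\begin{aligned}
  &y_{tt}(x,t) - y_{xx}(x,t) = 0, && 0 < x < 1,\ t > 0, \\
  &y(0,t) = 0, && \text{(fixed end)} \\
  &y_x(1,t) = -k\, y_t(1,t), \quad k > 0. && \text{(collocated boundary feedback)}
\end{aligned}
```

With such a velocity feedback acting at the controlled end, the energy $E(t)=\tfrac12\int_0^1\!\big(y_t^2+y_x^2\big)\,dx$ decays exponentially, and tuning the gain $k$ plays the same role as prescribing the decay rate $r$ in the abstract.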
Procedia PDF Downloads 405
2279 Application of a Lighting Design Method Using Mean Room Surface Exitance
Authors: Antonello Durante, James Duff, Kevin Kelly
Abstract:
The visual needs of people in modern work-based buildings are changing. The self-illuminated screens of computers, televisions, tablets and smartphones have changed the relationship between people and the lit environment. In the past, lighting design practice was primarily based on providing uniform horizontal illuminance on the working plane, but this has failed to ensure good-quality lit environments. Today's lighting standards continue to be set using a 100-year-old approach that, at its core, considers task illuminance of the utmost importance, with the task typically located on a horizontal plane. An alternative method focused on appearance has been proposed, as opposed to the traditional performance-based approach. Mean Room Surface Exitance (MRSE) and Target-Ambient Illuminance Ratio (TAIR) are two new metrics proposed to assess illumination adequacy in interiors. The hypothesis is that these factors will be superior to the existing, horizontal-illuminance-led metrics. For the past six years, research within the Dublin Institute of Technology has examined this, with a view to determining the suitability of this approach for general lighting practice. Since the start of this research, a number of key findings have been produced that center on how occupants react to various levels of MRSE. This paper provides a broad update on how this research has progressed. More specifically, this paper will: i) demonstrate how MRSE can be measured using HDR imaging technology; ii) illustrate how MRSE can be calculated using scripting and an open-source lighting computation engine; iii) describe experimental results that demonstrate how occupants have reacted to various levels of MRSE within experimental office environments.
Keywords: illumination hierarchy (IH), mean room surface exitance (MRSE), perceived adequacy of illumination (PAI), target-ambient illumination ratio (TAIR)
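The scripted MRSE calculation mentioned in point ii) can be sketched roughly as below, following the commonly cited formulation (first reflected flux divided by the room's total absorbing area). The room geometry, direct illuminances and reflectances are invented placeholders, and the exact formulation used in the paper may differ.

```python
# Hedged MRSE sketch: first reflected flux over total absorbing area.
# All surface data below are invented placeholders for a small test room.
surfaces = [
    # (area m^2, direct illuminance lx, reflectance)
    (20.0, 150.0, 0.7),   # ceiling
    (20.0, 300.0, 0.3),   # floor
    (48.0, 100.0, 0.5),   # walls
]

total_area = sum(a for a, _, _ in surfaces)
first_reflected_flux = sum(a * e * r for a, e, r in surfaces)   # lumens
absorbing_area = sum(a * (1.0 - r) for a, _, r in surfaces)     # m^2

mrse = first_reflected_flux / absorbing_area
print(f"MRSE = {mrse:.1f} lm/m^2")
```

In a full implementation the per-surface direct illuminances would come from the lighting computation engine (or from calibrated HDR imagery) rather than being typed in by hand.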
Procedia PDF Downloads 187
2278 Mental Health Surveys on Community and Organizational Levels: Challenges, Issues, Conclusions and Possibilities
Authors: László L. Lippai
Abstract:
In addition to bearing great significance for the individual, mental health can also be regarded as an organizational, community and societal resource. Within the Szeged Health Promotion Research Group, we conducted mental health surveys on two levels: the inhabitants of a medium-sized Hungarian town and the students of a Hungarian university with a relatively large headcount were asked to participate in surveys whose goals were to define local government priorities and organization-level health promotion programmes, respectively. To facilitate professional decision-making, we defined three pragmatically relevant groups within the target population: the mentally healthy, the vulnerable and the endangered. To determine which group a person actually belongs to, we designed a simple and quick measurement tool, which could even be utilised as a screening method: the Mental State Questionnaire. The validity of the above three categories was verified by analysis of variance against psychological quality-of-life variables. We demonstrate the pragmatic significance of our method via analyses of the scores of our two mental health surveys. On the town level, during our representative survey in Hódmezővásárhely (N=1839), we found that 38.7% of the participants were mentally healthy, 35.3% were vulnerable, and 16.3% were considered endangered. We were able to identify groups in a dramatic state in terms of mental health. For example, one such group consisted of men aged 45 to 64 with only primary education, in which the ratios of the mentally healthy, vulnerable and endangered were 4.5%, 45.5% and 50%, respectively. It was also astonishing to see how little qualification prevailed as a protective factor in the case of women.
Based on our data, the female group aged 18 to 44 with primary education (of whom 20.3% were mentally healthy, 42.4% vulnerable and 37.3% endangered), as well as the female group aged 45 to 64 with a university or college degree (of whom 25% were mentally healthy, 51.3% vulnerable and 23.8% endangered), are in a similarly difficult position and are to be handled as priority intervention target groups. On the organizational level, our survey involving the students of the University of Szeged (N=1565) provided data for preparing a mental health promotion strategy for a university with a headcount exceeding 20,000. When developing an organizational strategy, it was important to gather information to estimate the proportions of the target groups to which mental health promotion methods (for example, life management skills development, detection, psychological consultancy, psychotherapy) would be applied. Our scores show that 46.8% of the student participants were mentally healthy, 42.1% were vulnerable and 11.1% were endangered. These data convey relevant information for the allocation of organizational resources within a university with a considerable headcount. In conclusion, the Mental State Questionnaire is a valid screening method, adequate for describing a community in a plain and informative way in terms of mental health. Its application can promote the preparation, design and implementation of mental health promotion interventions.
Keywords: health promotion, mental health promotion, mental state questionnaire, psychological well-being
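The three-way screening described above reduces, computationally, to bucketing questionnaire scores and reporting group proportions; the cut-off scores below are invented placeholders, not the validated thresholds of the Mental State Questionnaire.

```python
# Minimal sketch of three-way mental-health screening on survey scores.
# The cut-offs (70, 40) and the scores themselves are placeholders.
def categorize(score: int) -> str:
    if score >= 70:
        return "mentally healthy"
    if score >= 40:
        return "vulnerable"
    return "endangered"

survey_scores = [82, 55, 31, 71, 44, 28, 90, 39]  # hypothetical respondents
groups = {"mentally healthy": 0, "vulnerable": 0, "endangered": 0}
for s in survey_scores:
    groups[categorize(s)] += 1

n = len(survey_scores)
for group, count in groups.items():
    print(f"{group}: {100 * count / n:.1f}%")
```

On real data, the resulting proportions per demographic subgroup are exactly the figures used above to prioritize intervention target groups.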
Procedia PDF Downloads 295
2277 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning
Authors: Jean Berger, Mohamed Barkaoui
Abstract:
Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute efficient path plans in near real time are mainly limited to providing few-move solutions. A new information-theoretic, open-loop decision model that explicitly incorporates false-alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists in minimizing expected entropy, considering anticipated possible observation outcomes over a given time horizon. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target's location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target-occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false-positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment that progressively integrates real visit outcomes on a rolling time horizon can easily be envisioned. Computational results show the value of the approach in comparison to alternate heuristics.
Keywords: search path planning, false alarm, search-and-delivery, entropy, genetic algorithm
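The two ingredients named above, entropy over target-occupancy beliefs and a Bayesian update that accounts for false positives, can be sketched as follows. The sensor rates and the four-cell grid are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch: Shannon entropy over candidate target cells, and a Bayesian
# occupancy update with explicit detection and false-alarm probabilities.
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete occupancy belief."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def update(belief, observed_cell, detection, p_d=0.8, p_fa=0.1):
    """Posterior belief after one (possibly false-positive) observation.

    p_d: probability of detection given the target is present (assumed)
    p_fa: false-alarm probability given the target is absent (assumed)
    """
    new = []
    for i, p in enumerate(belief):
        if i == observed_cell:
            like = p_d if detection else (1 - p_d)
        else:  # target elsewhere, so a detection here would be a false alarm
            like = p_fa if detection else (1 - p_fa)
        new.append(like * p)
    z = sum(new)
    return [p / z for p in new]

belief = [0.25, 0.25, 0.25, 0.25]  # uniform prior over 4 cells
print(f"prior entropy: {entropy(belief):.3f} bits")
belief = update(belief, observed_cell=0, detection=False)
print("after a no-detection at cell 0:", [round(p, 3) for p in belief])
print(f"posterior entropy: {entropy(belief):.3f} bits")
```

A planner in this spirit would score each candidate path by the expected entropy of the belief after the anticipated observations along it, and the genetic algorithm would search over paths to minimize that score.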
2276 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), this paper synthetically constructs a set of non-class samples that lie outside the boundary of the positive (target) class formed with the limited training data. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines, referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a family of machine learning methods that seeks an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with such kernels leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. Inspired by ensemble learning (EL), MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed to weight and then select the sub-classifiers using discriminability and diversity measures based on eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions in the FKT subspaces.
Comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially under small sample size (SSS) conditions.
Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification
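The linear FKT underlying OC-FKT can be sketched compactly: whitening the summed class covariances makes the two transformed covariances share eigenvectors, with eigenvalues that sum to one, so directions dominant for one class are automatically weak for the other. The sketch below is a minimal NumPy illustration under the assumption that the summed covariance is full rank; the paper's synthetic non-class sampling, kernelization (OC-KFKT), and MKL selection are omitted, and the data shapes are made up for the example.

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Linear FKT of two classes (simplified sketch).
    X1, X2: (n_samples, n_features) arrays, one per class.
    Returns projection directions W and class-1 eigenvalues e1 in [0, 1];
    the same directions diagonalize class 2 with eigenvalues 1 - e1."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    # whiten the summed covariance: P.T @ (S1 + S2) @ P = I
    vals, vecs = np.linalg.eigh(S1 + S2)
    P = vecs @ np.diag(vals ** -0.5)
    # in the whitened space, K1 + K2 = I, so K1 and K2 share eigenvectors
    K1 = P.T @ S1 @ P
    e1, U = np.linalg.eigh(K1)
    return P @ U, e1

# illustrative data: class 2 has inflated variance along two axes
rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 4))
X2 = rng.normal(size=(200, 4)) * np.array([1.0, 2.0, 0.5, 1.0])
W, e1 = fukunaga_koontz(X1, X2)
# directions with e1 near 1 carry mostly class-1 energy; near 0, class-2
```

The eigenvalue ratios used by the selective MKL/OC-FKT frameworks to score sub-kernels are derived from these complementary spectra, since each eigenvalue pair (e1, 1 - e1) measures how strongly a direction favors one class over the other.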