Search results for: functional safety
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5925

1035 Risk Assessment of Natural Gas Pipelines in Coal Mined Gobs Based on Bow-Tie Model and Cloud Inference

Authors: Xiaobin Liang, Wei Liang, Laibin Zhang, Xiaoyan Guo

Abstract:

Pipelines in mining areas inevitably pass through coal mined gobs, whose stability has a great influence on pipeline safety. Extensive literature study and field research showed that few risk assessment methods exist for coal mined gob pipelines and that data on gob sites are scarce. The fuzzy comprehensive evaluation method based on expert opinions is therefore widely used. However, the subjective opinions or limited experience of individual experts may lead to inaccurate evaluation results, so the accuracy of the results needs to be further improved. This paper presents a comprehensive approach to achieve this purpose by combining the bow-tie model and cloud inference. The evaluation proceeds as follows: First, a bow-tie model composed of a fault tree and an event tree is established to graphically illustrate the probability and consequence indicators of pipeline failure. Second, experts score the indicators in the form of intervals using the interval estimation method, which improves the accuracy of the results, and the censored mean algorithm removes the maximum and minimum scores to improve their stability. The golden section method is used to determine the indicator weights and reduce the subjectivity of index weighting. Third, the failure probability and failure consequence scores of the pipeline are converted into three numerical features using cloud inference, which better describes the ambiguity and volatility of the risk level. Finally, cloud drop graphs of failure probability and failure consequences are produced, which intuitively and accurately illustrate the ambiguity and randomness of the results. A case study of a coal mine gob pipeline carrying natural gas was investigated to validate the utility of the proposed method. The evaluation results of this case show that the probability of failure of the pipeline is very low while the consequences of failure are serious, which is consistent with reality.

Keywords: bow-tie model, natural gas pipeline, coal mine gob, cloud inference

Procedia PDF Downloads 223
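
To make the cloud-inference step concrete, the sketch below shows the standard backward and forward normal cloud generators, which condense a set of expert scores into the three numerical features (Ex, En, He) and then produce cloud drops with membership degrees. The scores and parameters here are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def backward_cloud(scores):
    """Backward cloud generator: estimate the three numerical features
    (Ex, En, He) of a normal cloud model from a set of expert scores."""
    x = np.asarray(scores, dtype=float)
    ex = x.mean()                                     # expectation
    en = np.sqrt(np.pi / 2) * np.abs(x - ex).mean()   # entropy
    he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))   # hyper-entropy
    return ex, en, he

def cloud_drops(ex, en, he, n=2000, rng=None):
    """Forward normal cloud generator: produce n (drop, membership) pairs."""
    rng = rng or np.random.default_rng(0)
    en_prime = rng.normal(en, he, n)                  # perturbed entropy per drop
    x = rng.normal(ex, np.abs(en_prime))              # cloud drop positions
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2)) # membership degrees
    return x, mu

# Hypothetical expert interval-midpoint scores for a failure-probability indicator
ex, en, he = backward_cloud([2.1, 2.4, 1.9, 2.2, 2.0, 2.3])
drops, memberships = cloud_drops(ex, en, he)
```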
1034 The Removal of Common Used Pesticides from Wastewater Using Golden Activated Charcoal

Authors: Saad Mohamed Elsaid Onaizah

Abstract:

One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests and agricultural worms. Pesticides are estimated to persist in the soil for about 2 to 12 weeks, and perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into aquifers. This research aims to find the best way to use activated charcoal treated with gold nitrate solution to remove deadly pesticides from aqueous solution by adsorption. The pesticides most used in Egypt were selected: Malathion, Methomyl, Abamectin, and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto the activated carbon/Au was mainly chemical adsorption, forming complexes with the gold metal immobilized on the activated carbon surfaces; the gold atom also acts as a catalyst for cracking the pesticide molecule. Gold activated charcoal is a low-cost material because very low concentrations of gold nitrate solution are used. The great ability of the activated charcoal to remove the selected pesticides is attributed to the positive charge of the gold ion, in addition to other active groups such as oxygen-containing functional groups and lignocellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force behind the good adsorption efficiency for removal of the pesticides under study. The surface area of the prepared char as well as its active groups were determined using infrared spectroscopy and scanning electron microscopy. Factors affecting the capacity of the activated charcoal were varied in order to reach the highest adsorption capacity, namely the weight of the charcoal, the concentration of the pesticide solution, the contact time, and the pH. Batch adsorption experiments showed that maximum adsorption of the selected insecticides occurred at a contact time of 80 minutes and pH 7.70. These promising results were confirmed by equilibrium, kinetic, and thermodynamic studies of the various operating factors; applying the Langmuir model demonstrated the effectiveness of the adsorbent, with adsorption capacities higher than most other adsorbents.

Keywords: wastewater, pesticide pollution, adsorption, activated carbon

Procedia PDF Downloads 43
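
Since the abstract reports Langmuir-type adsorption behavior, the following sketch shows how the Langmuir isotherm q_e = q_max·K_L·C_e/(1 + K_L·C_e) can be fitted to equilibrium data to extract the maximum adsorption capacity. The concentration and uptake values below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: adsorbed amount as a function of equilibrium concentration."""
    return q_max * k_l * c_eq / (1 + k_l * c_eq)

# Hypothetical equilibrium data: residual pesticide concentration (mg/L)
# and adsorbed amount per gram of gold-doped charcoal (mg/g)
c_eq = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
q_e = np.array([8.1, 16.5, 26.0, 35.2, 41.8, 45.3])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_e, p0=(50.0, 0.05))
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```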
1033 Representations of Race and Social Movement Strategies in the US

Authors: Lee Artz

Abstract:

Based on content analyses of major US media, immediately following the George Floyd killing in May 2020, some mayors and local, state, and national officials offered favorable representations of protests against police violence. As the protest movement grew to historic proportions, with 26 million joining actions in large cities and small towns, dominant representations of racism by elected officials and leading media shifted, replacing both the voices and demands of protestors with representations by elected officials. Major media quoted Black mayors and Congressional representatives who emphasized concerns about looting and the disruption of public safety. Media coverage privileged elected officials who criticized movement demands for defunding police and deplored isolated instances of property damaged by protestors. Subsequently, public opinion polls saw an increase in concern for law-and-order tropes and a decrease in support for protests against police violence. Black Lives Matter and local organizations had no coordinated response and no effective means of communication to counter dominant representations voiced by politicians and globally disseminated by major media. Politician- and media-instigated public opinion shifts indicate that social movements need their own means of communication and collective decision-making, both of which were largely missing from Black Lives Matter leaders, leading to disaffection and a political split by more than 20 local affiliates. By itself, social media use by myriad individuals and groups had limited purchase as a means of social movement communication and organization. Lacking a collaborative, coordinated strategy, organization, and independent media, the loose network of Black Lives Matter groups was unable to offer more accurate, democratic, and favorable representations of the protests and their demands for justice and equality. The fight for equality was diverted by the fight for representation.

Keywords: black lives matter, public opinion, racism, representations, social movements

Procedia PDF Downloads 161
1032 Photoelectrical Stimulation for Cancer Therapy

Authors: Mohammad M. Aria, Fatma Öz, Yashar Esmaeilian, Marco Carofiglio, Valentina Cauda, Özlem Yalçın

Abstract:

Photoelectrical stimulation of cells with organic semiconductor polymers has shown promising applications in neuroprosthetics, such as retinal prostheses. Photoelectrical stimulation of cell membranes can be induced through a photoelectric charge separation mechanism in the semiconductor materials, and it can alter the intracellular calcium level both through stimulation of voltage-gated ion channels and through an increase in the intracellular reactive oxygen species (ROS) level. On the other hand, targeting voltage-gated ion channels in cancer cells to induce apoptosis through alteration of calcium signaling is an effective mechanism that has been described before. In this regard, remote control of voltage-gated ion channels to alter intracellular calcium using photoactive organic polymers could be a novel technology for cancer therapy. In this study, we used P (ITO (indium tin oxide)/P3HT (poly(3-hexylthiophene-2,5-diyl))) and PN (ITO/ZnO/P3HT) photovoltaic junctions to stimulate MDA-MB-231 breast cancer cells. Calcium imaging showed that photostimulation of the breast cancer cells through the photocapacitive current generated by the photovoltaic junctions (at 8 mW/cm² green light intensity and 10-50 ms light durations, conditions already reported to safely stimulate neurons) excites the cells and alters intracellular calcium. The control group did not undergo light treatment and was cultured in T-75 flasks. We detected 20-30% cell death for ITO/P3HT and 51-60% cell death for ITO/ZnO/P3HT samples in the light-treated MDA-MB-231 cell group. Western blot analysis demonstrated poly(ADP-ribose) polymerase (PARP)-activated cell death in the light-treated group. Furthermore, Annexin V and PI fluorescent staining indicated both apoptosis and necrosis in treated cells. In conclusion, our findings revealed that photoelectrical stimulation of cells (through prolonged overstimulation) can induce cell death in cancer cells.

Keywords: Ca²⁺ signaling, cancer therapy, electrically excitable cells, photoelectrical stimulation, voltage-gated ion channels

Procedia PDF Downloads 152
1031 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records. Usually, however, the proportion of complete records is rather small, which leads to most of the information being neglected; moreover, the complete records may themselves be a distorted subsample, and the very reason that data is missing might carry information, which that approach ignores. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete records (the baseline), and how different algorithms affect the result. The imputation of the missing data is done using unsupervised learning. Among the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data, and thereby including the information of all records in the model, the distortion of the first training set (the complete records) vanishes. In a next step, the performance of the algorithms is measured by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After finding the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 261
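
The benchmarking step described above (randomly creating missing values, imputing them, and comparing the estimates to the actual data) can be sketched as follows. The subscription fields and the two scikit-learn imputers are illustrative stand-ins, since the paper does not name its exact implementations.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer

rng = np.random.default_rng(42)

# Hypothetical search-subscription matrix: rows = users,
# columns = (max price, desired rooms, desired area); NaN = field left blank
X_true = np.column_stack([
    rng.normal(2500, 600, 500),   # price willing to pay (CHF)
    rng.integers(1, 6, 500),      # desired number of rooms
    rng.normal(90, 25, 500),      # desired area (m^2)
])

# Randomly hide 20% of the entries to benchmark the imputers
mask = rng.random(X_true.shape) < 0.2
X_obs = X_true.copy()
X_obs[mask] = np.nan

for name, imp in [("kNN", KNNImputer(n_neighbors=5)),
                  ("iterative", IterativeImputer(random_state=0))]:
    X_hat = imp.fit_transform(X_obs)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"{name}: RMSE on held-out entries = {rmse:.2f}")
```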
1030 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges in uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two methodologies: sampling-based uncertainty propagation with first order error analysis, and Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that is aleatory, but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty, each being an unknown element of a known interval; this uncertainty is reducible. The study shows that, due to practical limitations and computational expense, sampling in the sampling-based methodology is not exhaustive, so it has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary; this is achieved in this study by using PBO.

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 214
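
A distributional p-box of type (iii) can be propagated with a double-loop sampling scheme: an outer loop samples the epistemic interval parameters and an inner loop propagates the aleatory variable. The minimal sketch below, with a hypothetical response function and invented interval bounds, also illustrates the underestimation problem the abstract mentions: with too few outer-loop samples, the reported percentile bounds shrink.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Hypothetical system response; a stand-in for the MUQC black box."""
    return x ** 2 + 0.5 * x

# Distributional p-box: x ~ Normal(mu, sigma), with mu and sigma only
# known to lie in epistemic intervals (invented bounds for illustration)
MU_BOUNDS, SIGMA_BOUNDS = (0.0, 0.5), (0.8, 1.2)

def output_percentile(mu, sigma, q=95, n=10_000):
    """Inner loop: propagate the aleatory uncertainty for fixed (mu, sigma)."""
    return np.percentile(model(rng.normal(mu, sigma, n)), q)

# Outer loop: sample the epistemic box and track the extreme percentiles;
# a non-exhaustive outer loop tends to underestimate the true bounds
samples = [output_percentile(rng.uniform(*MU_BOUNDS), rng.uniform(*SIGMA_BOUNDS))
           for _ in range(200)]
print(f"95th-percentile response bounds: [{min(samples):.2f}, {max(samples):.2f}]")
```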
1029 Ophthalmic Self-Medication Practices and Associated Factors among Adult Ophthalmic Patients

Authors: Sarah Saad Alamer, Shujon Mohammed Alazzam, Amjad Khater Alanazi, Mohamed Ahmed Sankari, Jana Sameer Sendy, Saleh Al-Khaldi, Khaled Allam, Amani Badawi

Abstract:

Background: Self-medication is defined as the selection of medicines by individuals to treat self-diagnosed conditions. There are many concerns about the safety of long-term use of nonprescription ophthalmic drugs, which may lead to a variety of serious ocular complications. Topical steroids can produce severe eye-threatening complications, including elevation of intraocular pressure (IOP) with possible development of glaucoma and, infrequently, optic nerve damage. In recent times, many over-the-counter (OTC) ophthalmic preparations have become available without a prescription. Objective: We aimed to determine the prevalence of self-medication with ocular topical steroids and its associated factors among adult ophthalmic patients attending King Saud Medical City. Methods: This was a cross-sectional study targeting participants aged 18 years or above who had used topical steroid eye drops and were attending the ophthalmology clinic in King Saud Medical City (KSMC) in the central region. Results: Of a total of 308 responses, 92 (29.8%) were using ocular topical steroids: 58 (18.8%) with a prescription, 5 (1.6%) without a prescription, and 29 (9.4%) both with and without a prescription, while 216 (70.1%) did not use them. Among participants using ocular topical steroids without a prescription, 11 (12%) had done so once and 33 (35%) many times. Complications were reported by 26 (28.3%), most commonly eye infection (11, 12.4%), glaucoma (8, 9%), and cataracts (6, 6.7%). Reasons for self-medication with ocular topical steroids were repeated symptoms (14, 15.2%), advice from a friend (11, 15.2%), and the belief that they had enough knowledge (11, 15.2%). Conclusion: Our study reveals that, even though a high level of knowledge and acceptable practices and attitudes was detected among participants, self-medication with steroid eye drops was observed. This practice is mainly due to participants having repeated symptoms and thinking they have enough knowledge. Increasing patients' education on self-medication with steroid eye drops and its associated complications would help reduce its incidence.

Keywords: self-medication, ophthalmic medicine, steroid eye drop, over the counter

Procedia PDF Downloads 56
1028 Fluoride Immobilization in Plaster Board Waste: A Safety Measure to Prevent Soil and Water Pollution

Authors: Venkataraman Sivasankar, Kiyoshi Omine, Hideaki Sano

Abstract:

The leaching of fluoride from Plaster Board Waste (PBW) is quite feasible in soil and water environments. The Ministry of the Environment, Japan, recommends a standard limit of 0.8 mgL⁻¹ or less for fluoride. Although the utilization of PBW as a substitute for cement is meritorious, its fluoride leaching behavior deteriorates the quality of soil and water and is therefore regarded as a demerit. In view of this fluoride leaching problem, the present research focuses on immobilizing fluoride in PBW. Immobilization experiments were conducted with four chemical systems based on DAHP (diammonium hydrogen phosphate) and on phosphoric acid carbonization of bamboo mass coupled with inorganic reactions using reagents such as calcium hydroxide, sodium hydroxide, and aqueous ammonia. Fluoride immobilization was determined after shaking the reactor contents, including the plaster board waste, for 24 h at 25˚C. In the DAHP system, immobilization of fluoride was evident from leachate fluoride concentrations in the ranges 0.071-0.12 mgL⁻¹, 0.026-0.14 mgL⁻¹, and 0.068-0.12 mgL⁻¹ for reaction temperatures of 30˚C, 50˚C, and 90˚C, respectively, with a final pH of 6.8. The other chemical systems, designated PACCa, PACAm, and PACNa, could also immobilize fluoride in PBW, and the resulting solutions contained fluoride below the Japanese environmental standard of 0.8 mgL⁻¹. In the PACAm and PACCa systems, the calcium concentration was undetectable, witnessing the formation of phosphate compounds. Fluoride immobilization was found to be inversely proportional to the volume of leaching solvent and the dose of PBW. Characterization of PBW and of the solid after fluoride immobilization was done using FTIR (Fourier transform infrared spectroscopy), Raman spectroscopy, FE-SEM (field emission scanning electron microscopy) with EDAX (energy dispersive spectroscopy), XRD (X-ray diffraction), and XPS (X-ray photoelectron spectroscopy). The results revealed the formation of new calcium phosphate compounds such as apatite, monetite, and hydroxylapatite. The participation of these new compounds in fluoride immobilization seems indispensable, through an exchange mechanism between hydroxyl and fluoride groups. Acknowledgment: The first author thanks the Japan Society for the Promotion of Science (JSPS) for the award of a fellowship (ID No. 16544).

Keywords: characterization, fluoride, immobilization, plaster board waste

Procedia PDF Downloads 136
1027 Effect of Lowering the Proportion of Chlorella vulgaris in Fish Feed on Tilapia's Immune System

Authors: Hamza A. Pantami, Khozizah Shaari, Intan S. Ismail, Chong C. Min

Abstract:

Introduction: Tilapia is the second most harvested freshwater fish species in Malaysia, available in almost all fish farms and markets. Unfortunately, tilapia culture in Malaysia is highly affected by Aeromonas hydrophila and Streptococcus agalactiae, which reduce the production rate and consequently have a direct negative economic impact. Reliance on drugs to control or reduce bacterial infections has led to contamination of water bodies and the development of drug resistance, and has given rise to toxicity issues in downstream fish products. Resorting to vaccines has helped curb the problem to a certain extent, but a more effective solution is still required. Using microalgae-based feed to enhance fish immunity against bacterial infection offers a promising alternative. Objectives: This study aims to evaluate the efficacy of Chlorella vulgaris at a lower percentage of incorporation in feed for boosting the immunity of tilapia in a shorter time. Methods: The study was conducted in two phases: a safety study at a concentration of 500 mg kg⁻¹, and the administration of cultured C. vulgaris biomass incorporated into fish feed to five groups over three weeks. Group 1 was the control (0% incorporation), whereas groups 2, 3, 4, and 5 received 0.625%, 1.25%, 2.5%, and 5% incorporation, respectively. The parameters evaluated were the blood profile, serum lysozyme activity (SLA), serum bactericidal activity (SBA), phagocytosis activity (PA), respiratory burst activity (RBA), and lymphoproliferation activity (LPA). The data were analyzed via ANOVA using SPSS (version 16), with further testing by Tukey's test. All tests were performed at the 95% confidence level (p < 0.05). Results: There were no toxic signs in tilapia at 500 mg kg⁻¹. Treated groups showed significantly better immune parameters compared to the control group (p < 0.05). Conclusions: Crude C. vulgaris biomass in fish meal at an incorporation level as low as 5% can increase specific and non-specific immunity in tilapia in a shorter time.

Keywords: Chlorella vulgaris, hematology profile, immune boost, lymphoproliferation

Procedia PDF Downloads 81
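
The statistical analysis described (one-way ANOVA followed by Tukey's post-hoc test at p < 0.05) was performed in SPSS; an equivalent open-source sketch, with hypothetical immune-parameter readings for the five feed groups, might look like this:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)

# Hypothetical phagocytosis-activity readings for the five feed groups
groups = {pct: rng.normal(loc, 2.0, 10)
          for pct, loc in [("0%", 20), ("0.625%", 23), ("1.25%", 25),
                           ("2.5%", 27), ("5%", 30)]}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Post-hoc pairwise comparisons, analogous to Tukey's test in the study
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```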
1026 Sensitivity to Misusing Verb Inflections in Both Finite and Non-Finite Clauses in Native and Non-Native Russian: A Self-Paced Reading Investigation

Authors: Yang Cao

Abstract:

Analyzing the oral production of Chinese-speaking learners of English as a second language (L2), we find highly variable use of verb inflections. Why does it seem so hard for these learners to use correct past morphology consistently in obligatory past contexts? The Failed Functional Features Hypothesis (FFFH) attributes this non-target-like performance to the absence of the [±past] feature in their L1 Chinese, arguing that for post-puberty learners, new features in the L2 are no longer accessible. By contrast, the Missing Surface Inflection Hypothesis (MSIH) holds that all features are acquirable for late L2 learners, but that mapping difficulties from features to forms make it hard for them to realize consistent past morphology on the surface. However, most studies are limited to verb morphology in finite clauses, and few have attempted to assess these learners' performance in non-finite clauses. Additionally, it has been suggested that Chinese learners may be able to tell the finite/non-finite distinction (i.e., the [±finite] feature might be selected in Chinese, even though the existence of [±past] is denied). Therefore, adopting a self-paced reading (SPR) task, the current study analyzes the processing patterns of Chinese-speaking learners of L2 Russian, in order to find out whether they are sensitive to the misuse of tense morphology in both finite and non-finite clauses and whether they are sensitive to the finite/non-finite distinction present in Russian. The study targets L2 Russian because of its systematic morphology in both present and past tenses. A native Russian group, as well as a group of English-speaking learners of Russian, whose L1 has definitely selected both the [±finite] and [±past] features, will also be involved. By comparing and contrasting the performance of the three language groups, the study further examines and discusses the two theories, FFFH and MSIH. The preliminary hypotheses are: (a) Russian native speakers are expected to spend longer reading verb forms that violate the grammar; (b) Chinese participants are expected to be sensitive at least to the misuse of inflected verbs in non-finite clauses, even if no sensitivity to the misuse of infinitives in finite clauses is found; an interaction of finiteness and grammaticality is therefore expected, which would indicate that these learners can tell the finite/non-finite distinction; and (c) having selected [±finite] and [±past], English-speaking learners of Russian are expected to behave in a target-like manner, supporting L1 transfer.

Keywords: features, finite clauses, morphosyntax, non-finite clauses, past morphologies, present morphologies, Second Language Acquisition, self-paced reading task, verb inflections

Procedia PDF Downloads 81
1025 Evaluation of the Trauma System in a District Hospital Setting in Ireland

Authors: Ahmeda Ali, Mary Codd, Susan Brundage

Abstract:

Importance: This research focuses on informing and improving Health Service Executive (HSE) policy and legislation and thereby improving patient trauma care and outcomes in Ireland. Objectives: The study measures components of the trauma system in the district hospital setting of the Cavan/Monaghan Hospital Group (CMHG), HSE, Ireland, and uses the collected data to identify the strengths and weaknesses of the CMHG trauma system organisation, including governance, injury data, prevention and quality improvement, scene care and facility-based care, and rehabilitation. The information will be made available to local policy makers to provide an objective situational analysis to assist future trauma service planning and provision. Design, setting and participants: From April 28 to May 28, 2016, a cross-sectional survey using the World Health Organisation (WHO) Trauma System Assessment Tool (TSAT) was conducted among healthcare professionals directly involved in the level III trauma system of CMHG. Main outcomes: Identification of the strengths and weaknesses of the trauma system of CMHG. Results: A high proportion of participants reported inadequate funding for pre-hospital (62.3%) and facility-based trauma care at CMHG (52.5%). Thirty-four respondents (55.7%) reported that a national trauma registry (TARN) exists, but electronic health records are still not used in trauma care. Twenty-one respondents (34.4%) reported that there are system-wide protocols for determining patient destination and that adequate, comprehensive legislation governing the use of ambulances is enforced; however, there is a lack of a reliable advisory service. Over 40% of the respondents were uncertain of the injury prevention programmes available in Ireland, as well as of the government funding allocated for injury and violence prevention. Conclusions: The results of this study contributed to a comprehensive assessment of the trauma system organisation. The major findings identified three fundamental areas: inadequate funding at CMHG, the QI techniques and corrective strategies used, and unfamiliarity with existing prevention strategies. The findings indicate the need for further research to guide future development of the trauma system at CMHG (and in Ireland as a whole) in order to maximise best practice and improve functional and life outcomes.

Keywords: trauma, education, management, system

Procedia PDF Downloads 227
1024 Statistical Characteristics of Code Formula for Design of Concrete Structures

Authors: Inyeol Paik, Ah-Ryang Kim

Abstract:

In this research, a statistical analysis is carried out to examine the statistical properties of the formulas given in design codes for concrete structures. The design formulas of the Korea Highway Bridge Design Code - Limit State Design method (KHBDC), the current national bridge design code, and of the design code for concrete structures by the Korea Concrete Institute (KCI) are analyzed. The safety levels provided by the strength formulas of the design codes are defined based on probabilistic and statistical theory. KHBDC is a reliability-based design code whose load and resistance factors were calibrated to attain a target reliability index, and defining the statistical properties of the design formulas is essential in this calibration process. In general, the statistical characteristics of a member strength stem from three factors: first, the difference between the material strength of the actual construction and that used in the design calculation; second, the difference between the actual dimensions of the constructed sections and those used in the design calculation; and third, the difference between the strength of the actual member and the formula simplified for the design calculation. This paper focuses on the third difference. The formulas for calculating the shear strength of concrete members are presented differently in KHBDC and KCI. In this study, the statistical properties of the design formulas were obtained through comparison with a database comprising experimental results from the reference publications. The test specimens were either reinforced with shear stirrups or not. For the applied database, the bias factor was about 1.12 and the coefficient of variation about 0.18. Applying these statistical properties of the design formula in a reliability analysis shows that the resistance factors of the current design codes satisfy the target reliability indexes of both codes. Also, the minimum resistance factors of KHBDC, which is written in the material resistance factor format, and of KCI, which is in the member resistance factor format, are obtained, and the results are presented. Further research is underway to calibrate the resistance factors of the high-strength and high-performance concrete design guide.

Keywords: concrete design code, reliability analysis, resistance factor, shear strength, statistical property

Procedia PDF Downloads 290
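
To illustrate how the reported bias factor (about 1.12) and coefficient of variation (about 0.18) feed into a reliability analysis, here is a minimal Monte Carlo sketch. The lognormal resistance model and the load statistics are assumptions for illustration only, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
N = 1_000_000

# Resistance: nominal value scaled by the reported bias factor
# (lambda = 1.12, V = 0.18), modeled here as lognormal
R_nom = 1.0
lam, cov_r = 1.12, 0.18
sigma_ln = np.sqrt(np.log(1 + cov_r ** 2))
R = rng.lognormal(np.log(lam * R_nom) - 0.5 * sigma_ln ** 2, sigma_ln, N)

# Load effect: hypothetical normal model (mean and CoV are assumptions)
Q = rng.normal(0.55 * R_nom, 0.55 * R_nom * 0.15, N)

pf = np.mean(R - Q < 0)   # probability of failure
beta = -norm.ppf(pf)      # corresponding reliability index
print(f"P_f = {pf:.2e}, beta = {beta:.2f}")
```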
1023 Omalizumab Therapy Experience for Asthma, at Zayed Military Hospital (ZMH) in United Arab Emirates

Authors: Shanza Akram, Samir Salah, Imran Saleem, Ashraf Alzaabi, Jassim Abdou

Abstract:

Introduction: 300 million people worldwide are affected by asthma. In the UAE, the prevalence is around 10% (900,000 people). Patients with persistent symptoms despite using high-dose ICS plus a second controller, with or without OCS, are considered to have severe asthma. Omalizumab (Xolair), an anti-IgE monoclonal antibody, is approved as add-on therapy for severe allergic asthma. Objective: To determine the efficacy of omalizumab based on clinical outcomes in our cohort of patients pre and post 52 weeks of treatment, and to assess the safety and tolerability of treatment. Methods: Medical records of patients receiving omalizumab therapy for asthma at ZMH, Abu Dhabi, were retrospectively analyzed. Patients fulfilling the criteria for severe allergic asthma as per GINA guidelines were included. Asthma control over the 12 months prior to and 12 months after commencement of omalizumab therapy was analysed, taking into account the number of exacerbations and hospitalizations in addition to maintenance medication dosages, need for rescue reliever therapy, and pulmonary function testing. Results: A cohort of 21 patients (5 female), with an average age of 41 years and an average length of therapy of 22 months, was included. Eleven patients (52%) reduced their steroid requirement on treatment: seven managed to stop steroids while four were able to decrease the dosage. The mean exacerbation rate decreased from five per year pre-treatment to 1.36 on treatment. The number of hospitalizations decreased from a mean of two per year to 0.9 per year. Rescue reliever inhaler usage decreased from a mean of 40 puffs to 15 puffs per week. Two patients discontinued therapy: one due to lack of benefit (after 2 doses) and the second due to severe persistent side effects, including local irritation and severe limb and joint pains, after 6 months. Conclusion: Treatment with omalizumab was effective in terms of reduced numbers of exacerbations, maintenance therapy, and reliever medications. However, no improvement was seen in PFTs. There is room for improved documentation of symptoms and rescue medication use, as well as for better patient education and counselling to improve compliance.

Keywords: asthma, omalizumab, severe allergic asthma, UAE

Procedia PDF Downloads 274
1022 Resilience and Urban Transformation: A Review of Recent Interventions in Europe and Turkey

Authors: Bilge Ozel

Abstract:

Cities are highly complex living organisms, subject to continuous transformations produced by the stress that derives from changing conditions. Today metropolises are seen as the "development engines" of their countries, and accordingly they become centres of better living conditions, encouraging the demographic growth that constitutes the main driver of change. Indeed, the potential for economic advancement of cities directly represents the economic status of their countries. The term "resilience", which sees change as a natural process and denotes the flexibility and adaptability of systems in the face of changing conditions, thus becomes a key concept for the development of urban transformation policies. The term derives from the Latin word 'resilire', meaning 'to bounce back', and refers to the ability of a system to withstand shocks and still maintain its basic characteristics. A resilient system not only survives potential risks and threats but also takes advantage of the positive outcomes of perturbations and adapts to new external conditions. Taken into the urban context, "urban resilience" delineates the capacity of cities to anticipate upcoming shocks and changes without undergoing major alterations in their functional, physical, and socio-economic systems. Undoubtedly, coordinating urban systems in a "resilient" form is a multidisciplinary and complex process, as cities are multi-layered and dynamic structures. The concept of "urban transformation" was first introduced in Europe just after World War II. It has been applied through different methods, such as renovation, revitalization, improvement, and gentrification, which have continuously evolved, acquiring new meanings over the years. With the effects of neoliberal policies in the 1980s, urban transformation became associated with economic objectives; subsequently this understanding improved over time and took on new orientations, such as providing more social justice and environmental sustainability. The aim of this research is to identify the most applied urban transformation methods in Turkey and the main reasons they are selected, and to investigate the lacking and limiting points of urban transformation policies in the context of "urban resilience", in comparison with European interventions. Emblematic examples, which mark the turning points of the recent evolution of urban transformation concepts in Europe and Turkey, are chosen and reviewed critically.

Keywords: resilience, urban dynamics, urban resilience, urban transformation

Procedia PDF Downloads 244
1021 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds, produced almost exclusively by bacteria from diverse genera including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. In spite of this ecological importance, however, degradation, as part of the fate of phenazines, has received only limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in a minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full-length 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant yield of cells, from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. LC-MS/MS analysis determined decarboxylation from the ring structure to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after two days of incubation and was then transformed to aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.

Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 198
1020 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. The approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which each sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 99
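
Steps (i) and (ii) can be illustrated with a small sketch: reads are tokenized into overlapping k-mers, k-mer vectors are learned with a skip-gram word2vec model, and a read embedding is formed by pooling its k-mer vectors. The reads, the choice of k, and the gensim-based implementation below are illustrative assumptions; the paper's metagenome2vec pipeline operates on full fastq data.

```python
import numpy as np
from gensim.models import Word2Vec

def kmer_tokens(read, k=6):
    """Split a DNA read into overlapping k-mers (the 'words' of the sequence)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Hypothetical reads; the real input would be millions of reads from fastq files
reads = [
    "ATGCGTACGTTAGCCGATAGCTAGGCTA",
    "GGCTAGCTAACGTACGCATCGGCTATCG",
    "TTAGCCGATAGCTAGGCTAATGCGTACG",
]
corpus = [kmer_tokens(r) for r in reads]

# Step (i): learn k-mer embeddings with a skip-gram word2vec model
model = Word2Vec(corpus, vector_size=64, window=5, min_count=1, sg=1)

# Step (ii): a simple read embedding is the mean of its k-mer vectors
read_vec = np.mean([model.wv[kmer] for kmer in corpus[0]], axis=0)
print(read_vec.shape)  # (64,)
```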
1019 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension

Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan

Abstract:

Hypertension (HD) is the most common cardiovascular pathology causing disability and mortality in the working population. In hypertension, death most often results from heart failure (HF), which is based on myocardial remodeling. Recently, endothelial dysfunction (EDF), a disturbance of the functional state of the vascular endothelium, has been assigned a significant role in the structural changes of the myocardium and the occurrence of heart failure in patients with hypertension. It is now established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies: they mediate inflammation, modify proteins, and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. The blood of 30 donors and 33 patients with hypertension was used in the experimental work. For the quantitative determination of OMP products, a method based on the reaction of oxidized amino acid residues of proteins with 2,4-dinitrophenylhydrazine (DNPH), forming 2,4-dinitrophenylhydrazones, was used, and the amount of these products was determined spectrophotometrically. The optical density of the carbonyl-derived dinitrophenylhydrazones was recorded at different wavelengths: 356 nm for neutral aliphatic ketone dinitrophenylhydrazones (KDNPH); 370 nm for neutral aliphatic aldehyde dinitrophenylhydrazones (ADNPH); 430 nm for basic aliphatic KDNPH; and 530 nm for basic aliphatic ADNPH. Nitric oxide was determined photometrically using the Griess reagent, with absorbance measured on a Thermo Scientific Evolution 201 SF at a wavelength of 546 nm. The results showed that patients with arterial hypertension exhibit an increased level of nitric oxide in blood serum, as well as a tendency toward increased intensity of oxidative modification of proteins at wavelengths of 270 nm and 363 nm, indicating a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increase in the intensity of oxidative modification of blood plasma proteins in the studied patients reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a disturbance of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.

Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress

Procedia PDF Downloads 67
1018 Disabilities in Railways: Proposed Changes to the Design of Railway Compartments for the Inclusion of Differently Abled Persons

Authors: Bathmajaa Muralisankar

Abstract:

As much as railway station infrastructure designs and ticket-booking norms have been changed to facilitate use by differently abled persons, the railway train compartments themselves have not been made user-friendly for them. Owing to safety concerns, dependency on others for travel, and fear of isolation, differently abled people prefer not to travel by train. Rather than a dedicated compartment open only to the differently abled, including them with others in the normal compartment (with the modifications proposed here) will make them feel secure and make for an enhanced travel experience. This approach also represents the most practical way to include a particular category of people in mainstream society. Lowering the height of the compartment doors and providing a wider entrance with a ramp will allow easy entry for those using wheelchairs. Likewise, removing the first two alternate rows and the first two side seats will not only widen the passage and increase seating space but also improve the wheelchair turning radius, helping wheelchair users travel without depending on others. Seating arrangements can be made to accommodate family members near differently abled passengers instead of isolating the latter in a separate compartment. According to the present ticket-booking regulations of Indian Railways, three to four disabled persons may travel without their family, or one to two along with their family, and these numbers may be increased or reduced. To help visually challenged and hearing-impaired persons, in addition to the provision of special instruments, railings, and textured footpaths and flooring, the seat numbers above the seats may be set in metal or plastic as outward projections so the visually impaired can touch and feel the numbers. Braille boards may be included at the entrance to the compartment, along with seat numbers in the aforementioned projected form. These seat numbers may be designed as buttons which, when pressed, announce the seat number in the applicable local language as well as English. Emergency buttons, rather than emergency chains, within easy reach of disabled passengers will also help them.

Keywords: dependency, differently abled, inclusion, mainstream society

Procedia PDF Downloads 235
1017 Effect of Three Desensitizers on Dentinal Tubule Occlusion and Bond Strength of Dentin Adhesives

Authors: Zou Xuan, Liu Hongchen

Abstract:

The ideal dentin desensitizing agent should not only offer good biological safety, a simple clinical operation mode, and a superior treatment effect, but should also have a durable effect that resists changes in oral environmental temperature and oral mechanical abrasion, so as to achieve persistent desensitization. Also, when using a desensitizing agent to prevent post-operative hypersensitivity, we should not only prevent it from affecting crown retention but must also understand its effects on the bond strength of dentin adhesives. Various desensitizers and dentin adhesives with different chemical and physical properties are used in clinical treatment, and whether the use of a desensitizing agent affects the bond strength of dentin adhesives still needs further research. In this in vitro study, we built a hypersensitive dentin model and a post-operative dentin model to evaluate the sealing effects and durability of three different dentin desensitizers on exposed tubules, and to evaluate their sealing effects on post-operative dentin and their influence on the bond strength of dentin adhesives. The results of this study could provide important references for the clinical use of dentin desensitizing agents. 1. For the three desensitizers, the hypersensitive dentin model was used to evaluate their sealing effects on exposed tubules by SEM observation and dentin permeability analysis; all of them significantly reduced dentin permeability. 2. Test specimens of the three desensitizer groups were subjected to aging treatment with 5000 thermal cycles and toothbrush abrasion, after which dentin permeability was measured to evaluate the sealing durability on exposed tubules; the sealing durability differed among the three groups. 3. The post-operative dentin model was used to evaluate the sealing effects of the three desensitizers on post-operative dentin by SEM and methylene blue staining; all three desensitizers reduced dentin permeability significantly. 4. The influence of the three desensitizers on the bonding efficiency of total-etch and self-etch adhesives was evaluated with a micro-tensile bond strength study and observation of the bond interface morphology; the dentin bond strength for the Green Or group was significantly lower than for the other two groups (P<0.05).

Keywords: dentin, desensitizer, dentin permeability, thermal cycling, micro-tensile bond strength

Procedia PDF Downloads 371
1016 Internet Health: A Cross-Sectional Survey Exploring Identified Risks and Online Safety Measures in Parent and Children with Neurodevelopmental Disorders

Authors: Abdirahim Mohamed, Sarita Rana Chhetri, Michael Sleath, Nadia Saleem

Abstract:

Rationale: Internet usage is thoroughly integrated into our daily lives, and internet usage within the neurodevelopmental disorder population is on the increase. Nevertheless, there is very little empirical research on how this population protects itself online, or on how their parents can keep them safe. This topic was an ever-growing concern to the parents within our services and in many cases added to parents' stress and affected their mental health. This prompted our team to conduct research exploring the perceived online risks within this population and how they keep themselves safe. In conjunction, we also explored how parents and caregivers monitor and safeguard their young people against potential threats online. Our hypothesis was that the perceived risks would heavily outnumber the safeguarding measures implemented by this population. Method: Within the Coventry and Warwickshire NHS Partnership Trust Child and Adolescent Mental Health Service (CAMHS), we distributed qualitative questionnaires to all the clinical bases (N=80). Questions explored topics such as daily internet usage, safeguarding measures, and perceived threats. The researchers requested all CAMHS clinicians to identify participants. Participants in this study were accessing CAMHS for neurodevelopmental-specific interventions. Results: The data were analysed using both Excel and SPSS. A MANOVA conducted in SPSS found a significant difference between safeguarding measures and perceived online risks within responses (p ≤ 0.05). This supports our hypothesis that participants in this population are well versed in the safeguarding issues of the internet but struggle to implement appropriate preventative measures. Screening the data in Excel showed that all parents and carers stated they 'monitored their child's internet use'. Conclusion: The data suggest that parents/carers may require more specific intervention to equip them with preventative measures, given the clear discrepancy between perceived risks and safeguarding measures. More research may also be needed in this area to determine an appropriate methodology to explore the topic further.

Keywords: Internet, health, how safe are we, internet health check

Procedia PDF Downloads 234
1015 Effects of Soaking of Maize on the Viscosity of Masa and Tortilla Physical Properties at Different Nixtamalization Times

Authors: Jorge Martínez-Rodríguez, Esther Pérez-Carrillo, Diana Laura Anchondo Álvarez, Julia Lucía Leal Villarreal, Mariana Juárez Dominguez, Luisa Fernanda Torres Hernández, Daniela Salinas Morales, Erick Heredia-Olea

Abstract:

Maize tortillas are a staple food in Mexico, mostly made by nixtamalization, which includes the cooking and steeping of maize kernels under alkaline conditions. The cooking step in nixtamalization demands a lot of energy and also generates nejayote, a water pollutant, at the end of the process. The aim of this study was to reduce the cooking time by adding a maize soaking step before nixtamalization while maintaining the quality properties of masa and tortillas. Maize kernels were soaked for 36 h to increase moisture up to 36%. Then, the effect of different cooking times (0, 5, 10, 15, 20, 25, 30, 35, 45 (control), and 50 minutes) was evaluated on the viscosity profile (RVA) of masa in order to select the treatments with a profile similar to the control. All treatments were steeped overnight and milled under the same conditions. The treatments selected were the 20- and 25-min cooking times, which had values for pasting temperature (79.23 °C and 80.23 °C), maximum viscosity (105.88 cP and 96.25 cP), and final viscosity (188.5 cP and 174 cP) similar to those of the 45-min control (77.65 °C, 110.08 cP, and 186.70 cP, respectively). Afterward, tortillas were produced from the chosen treatments (20 and 25 min) and the control, and analyzed for texture, damaged starch, colorimetry, thickness, and average diameter. Colorimetric analysis of the tortillas showed significant differences only for the yellow/blue coordinate (b* parameter) at 20 min (0.885), unlike the 25-minute treatment (1.122). Luminosity (L*) and the red/green coordinate (a*) showed no significant differences from the control (69.912 and 1.072, respectively); however, the 25-minute treatment was closer to the control in both parameters (73.390 and 1.122) than the 20-minute treatment (74.08 and 0.884). For the color difference (ΔE), the 25-min value (3.84) was the most similar to the control. For tortilla thickness and diameter, the 20-minute treatment, with 1.57 mm and 13.12 cm respectively, was closer to the control (1.69 mm and 13.86 cm), although smaller. The 25-min treatment tortilla was smaller than both, with 1.51 mm thickness and 13.590 cm diameter. According to the texture analyses, there was no difference in stretchability (8.803-10.308 gf) or distance to break (95.70-126.46 mm) among the treatments. However, for the breaking point, both treatments (317.1 gf and 276.5 gf for the 25- and 20-min treatments, respectively) were significantly different from the control tortilla (392.2 gf). The results suggest that adding a soaking step and reducing the cooking time by 25 minutes yields masa and tortillas with functional and textural properties similar to those of the traditional nixtamalization process.

Keywords: tortilla, nixtamalization, corn, lime cooking, RVA, colorimetry, texture, masa rheology

Procedia PDF Downloads 137
1014 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites

Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze

Abstract:

Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite products in the same nanocrystalline state is still a problem, because the compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, a process which promotes the formation of parts in an ordinary crystalline state instead of the desirable nanocrystalline one. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. The advantage of the SPS method in comparison with other methods is mainly the low temperature and short duration of the sintering procedure, which finally gives an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly in Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to effectively and fully densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying the sintering temperature, holding time and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites; depending on the size of the molds, it is possible to obtain different amounts of nanopowder. The structure, physical-chemical, mechanical and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.
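The systematic variation of temperature, holding time and pressure described above amounts to a full-factorial experimental design. As a rough sketch of how such a run matrix can be enumerated (the parameter levels below are hypothetical, not the study's actual settings):

```python
from itertools import product

# Hypothetical SPS parameter levels -- the paper's actual levels are not listed.
temperatures_C = [1300, 1400, 1500]
hold_times_min = [5, 10, 15]
pressures_MPa = [30, 50, 70]

# Full-factorial run matrix: every combination of the three factors.
run_matrix = list(product(temperatures_C, hold_times_min, pressures_MPa))
for run_id, (T, t, p) in enumerate(run_matrix, start=1):
    print(f"Run {run_id:2d}: {T} degC, {t} min hold, {p} MPa")
```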

Keywords: alumina, ceramic matrix composites, graphene nanoplatelets, spark plasma sintering

Procedia PDF Downloads 345
1013 Evolution of the Rock-Cut Caves at Dhamnar, MP

Authors: Abhishek Ranka

Abstract:

Rock-cut architecture is a manifestation of human endurance in constructing magnificent structures by sculpting and cutting entire hills. Cave architecture forms an important part of rock-cut development in India and is among the most prolific examples of rock-cut architecture in the world. There are more than 1500 rock-cut caves in various regions of India. Most of them are located in western India, particularly in the state of Maharashtra. Some rock-cut caves are located in the central region of India, presently known as Malwa (Madhya Pradesh). The region is dominated by the Vindhyachal hill ranges toward the west, dotted with coarse laterite rock. The Dhamnar caves were excavated in the central region of Mandsaur district and embody a combination of shared sacred faiths. The earliest rock-cut activity began in the north, in Bihar, where caves were excavated in the Barabar and Nagarjuni hills during the Mauryan period (3rd century BCE). Rock-cut activity then shifted to the central part of India in Madhya Pradesh, where the caves at Dhamnar, Bagh, Udayagiri, Poldungar, etc. were excavated between the 3rd and 9th centuries CE. Rock-cut excavation continued to flourish in Madhya Pradesh until the 10th century CE, simultaneously with monolithic Hindu temples. The Dhamnar caves fall into four architectural typologies: lena caves, chaitya caves, viharas, and lena-chaityagriha caves. The Buddhist rock-cutting activity in central India is divisible into two phases. In the first phase (2nd century BCE-3rd century CE), the Buddha image is conspicuously absent. After a lapse of about three centuries, activity began again, and this time Buddha images were carved. The former group belongs to the Hinayana (Lesser Vehicle) phase and the latter to the Mahayana (Greater Vehicle). The Dhamnar caves have elaborate facades, pillar capitals, and many creative sculptures in various postures. These caves were excavated against a background of invigorating trade activities and varied socio-religious and socio-cultural contexts. They also highlight the wealthy and varied patronage provided by the dynasties of the past. This paper appraises the rock-cutting mechanisms, design strategies, and approaches, while promoting a scope for further research in conservation practices. Rock-cut sites, whose physical setting and varied functional spaces served as a sustainable habitat for centuries, carry a heritage footprint with a substantial research quotient.

Keywords: rock-cut architecture, Buddhism, Hinduism, Jainism, iconography, architectural typologies

Procedia PDF Downloads 116
1012 Assessment of Attractancy of Bactrocera zonata and Bactrocera dorsalis (Diptera: Tephritidae) to Different Biolure Phagostimulant Mixtures

Authors: Muhammad Dildar Gogi, Muhammad Jalal Arif, Muhammad Junaid Nisar, Mubashir Iqbal, Waleed Afzal Naveed, Muhammad Ahsan Khan, Ahmad Nawaz, Muhammad Sufian, Muhammad Arshad, Amna Jalal

Abstract:

Fruit flies of the genus Bactrocera cause heavy losses in fruits and vegetables globally, and insecticide application for their control creates issues of ecological backlash, environmental pollution, and food safety. There is a need to explore alternatives, and food-bait application is considered safe for the environment and effective for fruit fly management. The present experiment was carried out to assess the attractancy of five phagostimulant mixtures (PHS-Mix) prepared by mixing banana squash, mulberry, protein hydrolysate and molasses with phagostimulant-lure sources including beef extract, fish extract, yeast, starch, rose oil, casein and cedar oil in five different ratios, i.e., PHS-Mix-1 (1 part of all ingredients), PHS-Mix-2 (1 part of banana with 0.75 parts of all other ingredients), PHS-Mix-3 (1 part of banana with 0.5 parts of all other ingredients), PHS-Mix-4 (1 part of banana with 0.25 parts of all other ingredients) and PHS-Mix-5 (1 part of banana with 0.125 parts of all other ingredients). These were evaluated in comparison with a standard (GF-120). PHS-Mix-4 demonstrated an attractive index (AI) of 40.5±1.3-46.2±1.6% for satiated flies (class-II, i.e., moderately attractive) and 59.5±2.0-68.6±3.0% for starved flies (class-III, i.e., highly attractive) of both B. dorsalis and B. zonata in the olfactometric study, while the same mixture exhibited 51.2±0.53% AI (class-III, highly attractive) for B. zonata and 45.4±0.89% AI (class-II, moderately attractive) for B. dorsalis in the field study. PHS-Mix-1 proved a non-attractive (class-I) and a moderately attractive (class-II) phagostimulant in the olfactometer and field studies, respectively. PHS-Mix-2 exhibited moderate attractiveness for the starved lots in the olfactometer study and for the field lot in the field study. PHS-Mix-5 proved non-attractive to starved and satiated lots of B. zonata and B. dorsalis females in both the olfactometer and field studies. Overall, PHS-Mix-4 proved the better phagostimulant mixture, followed by PHS-Mix-3, which was categorized as a class-II (moderately attractive) phagostimulant for starved and satiated lots of female flies of both species in the olfactometer and field studies; hence these can be exploited for fruit fly management.
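The attractive index (AI) percentages above can be computed from paired treatment/control counts. This minimal sketch assumes the commonly used definition AI = (T − C) / (T + C) × 100, where T and C are the numbers of flies responding to the treatment and the control; the abstract does not spell out its exact formula or class cut-offs, so both are assumptions here:

```python
def attractive_index(treated: int, control: int) -> float:
    """AI = (T - C) / (T + C) * 100, a common olfactometer attraction measure."""
    return (treated - control) / (treated + control) * 100

# Illustrative counts only, not data from the study.
ai = attractive_index(42, 12)
print(round(ai, 1))  # 55.6 -> would fall in the 'highly attractive' range
```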

Keywords: attractive index, field conditions, olfactometer, Tephritid flies

Procedia PDF Downloads 224
1011 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emerging medical imaging technique which employs near-infrared (NIR) light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches, which exploit the existence of well-assessed physical models and build neural networks upon them to integrate the available data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution.

2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form

q* = argmin_q D(y, ỹ), (1)

where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network or, in a classic way, via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type

q̂* = argmin_q D(y, ỹ) + R(q),

where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1).

3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
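To make the variational formulation concrete, the sketch below solves a toy instance of q̂* = argmin_q D(y, ỹ) + R(q) with a quadratic data-fidelity term and a Tikhonov regularizer, by plain gradient descent. It assumes a hypothetical linear forward model in place of the PDE solve, and a fixed regularization weight rather than one learned by a network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model A: coefficients q -> boundary data y.
# In real DOT this map comes from solving the diffusion PDE.
n_meas, n_coef = 40, 100
A = rng.normal(size=(n_meas, n_coef)) / np.sqrt(n_coef)

q_true = np.zeros(n_coef)
q_true[45:55] = 1.0                              # a single inclusion
y = A @ q_true + 0.01 * rng.normal(size=n_meas)  # noisy observations

lam = 0.05                            # reg. weight: fixed a priori here, learnable in general
L = np.linalg.norm(A, 2) ** 2 + lam   # Lipschitz constant of the gradient
q = np.zeros(n_coef)
for _ in range(3000):
    grad = A.T @ (A @ q - y) + lam * q   # grad of 0.5||Aq-y||^2 + 0.5*lam*||q||^2
    q -= grad / L                        # guaranteed-stable step size

print(float(np.linalg.norm(q - q_true) / np.linalg.norm(q_true)))  # relative error
```

In the hybrid setting described above, the explicit regularizer R(q) and the low-dimensional solution subspace would instead be represented by trained networks; the fixed Tikhonov term here only stands in for that learned prior.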

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 46
1010 Community Observatory for Territorial Information Control and Management

Authors: A. Olivi, P. Reyes Cabrera

Abstract:

Ageing and urbanization are two of the main trends characterizing the twenty-first century, and both are especially accelerated in the emerging countries of Asia and Latin America. Chile is one of the countries in the Latin American region where the demographic transition to ageing is becoming increasingly visible. The challenges that the new demographic scenario poses to urban administrators call for innovative solutions to maximize the functional and psycho-social benefits derived from the relationship between older people and the environment in which they live. Although mobility is central to people's everyday practices and social relationships, it is not distributed equitably; on the contrary, it can be considered another factor of inequality in our cities, and older people are a group particularly sensitive and vulnerable to mobility barriers. In this context, based on the ageing-in-place strategy and following a social innovation approach within a spatial context, the "Community Observatory of Territorial Information Control and Management" project aims at the collective search for and validation of solutions to the specific mobility and accessibility needs of older urban people. Specifically, the Observatory intends to: i) promote the direct participation of the aged population in order to generate relevant information on the territorial situation and the satisfaction of the mobility needs of this group; ii) co-create dynamic and efficient mechanisms for reporting and updating territorial information; iii) increase the capacity of the local administration to plan and manage solutions to environmental problems at the neighborhood scale. Based on a participatory mapping methodology and on the application of digital technology, the Observatory designed and developed, together with aged people, a crowdsourcing platform for smartphones, called DIMEapp, for reporting environmental problems affecting mobility and accessibility. DIMEapp has been tested at a prototype level in two neighborhoods of the city of Valparaiso. The results achieved in the testing phase have shown high potential to i) contribute to establishing coordination mechanisms between the local government and the local community; and ii) improve a local governance system that guides and regulates the allocation of goods and services intended to solve those problems.
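As an illustration of the kind of record such a crowdsourcing platform must capture, here is a hypothetical sketch of a georeferenced report structure. DIMEapp's actual data model is not described in the abstract, so every field name below is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MobilityReport:
    """Hypothetical georeferenced citizen report (not DIMEapp's real schema)."""
    latitude: float
    longitude: float
    category: str                  # e.g. "broken pavement", "missing ramp"
    description: str
    photo_url: Optional[str] = None
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Illustrative report near Valparaiso, Chile.
report = MobilityReport(-33.046, -71.620, "broken pavement",
                        "Uneven surface blocking wheelchair access")
```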

Keywords: accessibility, ageing, city, digital technology, local governance

Procedia PDF Downloads 109
1009 Interventions for Children with Autism Using Interactive Technologies

Authors: Maria Hopkins, Sarah Koch, Fred Biasini

Abstract:

Autism is a lifelong disorder that affects one out of every 110 Americans. The deficits that accompany autism spectrum disorders (ASD), such as abnormal behaviors and social incompetence, often make it extremely difficult for these individuals to gain functional independence from caregivers. These long-term implications necessitate an immediate effort to improve social skills among children with an ASD. Any technology that could teach individuals with ASD the necessary social skills would not only be invaluable for the individuals affected but could also yield massive savings to society in treatment programs. The overall purpose of the first study was to develop, implement, and evaluate an avatar tutor for social skills training in children with ASD. FaceSay was developed as a colorful computer program that contains several different activities designed to teach children specific social skills, such as eye gaze, joint attention, and facial recognition. The children with ASD were asked to attend to FaceSay or a control painting computer game for six weeks. Children with ASD who received the training showed an increase in emotion recognition, F(1, 48) = 23.04, p < 0.001 (adjusted Ms: 8.70 and 6.79, respectively), compared to the control group. In addition, children who received the FaceSay training had higher post-test scores in facial recognition, F(1, 48) = 5.09, p < 0.05 (adjusted Ms: 38.11 and 33.37, respectively), compared to controls. The findings provide information about the benefits of computer-based training for children with ASD. Recent research also suggests the value of using socially assistive robots with children who have an ASD. Researchers investigating robots as tools for therapy in ASD have reported increased engagement, increased levels of attention, and novel social behaviors when robots are part of the social interaction. The overall goal of the second study was to develop a social robot designed to teach children specific social skills such as emotion recognition. The robot is approachable, with both an animal-like appearance and features of a human face (i.e., eyes, eyebrows, mouth). The feasibility of the robot is being investigated in children ages 7-12 to explore whether it is capable of forming different facial expressions that accurately display emotions similar to those observed in the human face. The findings of this study will be used to create a potentially effective and cost-efficient therapy for improving the cognitive-emotional skills of children with autism. Implications and study findings using the robot as an intervention tool will be discussed.
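The adjusted means reported above are consistent with an analysis of covariance (post-test scores compared across groups while controlling for pre-test scores). A minimal sketch using statsmodels follows; the file and column names are hypothetical, and the study's exact model is not reported in the abstract:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file/columns -- the study's actual dataset is not published here.
df = pd.read_csv("facesay_scores.csv")

# ANCOVA: post-test emotion recognition by group, adjusting for pre-test score.
model = smf.ols("post_emotion ~ C(group) + pre_emotion", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the group effect
```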

Keywords: autism, intervention, technology, emotions

Procedia PDF Downloads 355
1008 Ergonomic Adaptations for Visually Impaired Workers - A Literature Review

Authors: Kamila Troper, Pedro Mestre, Maria Lurdes Menano, Joana Mendonça, Maria João Costa, Sandra Demel

Abstract:

Introduction: Visual impairment is a problem that affects hundreds of millions of people all over the world. Although it is possible for a visually impaired person to do most jobs, the right training, technological assistance, and emotional support are essential. Ergonomics can solve many of these problems through relatively simple adjustments to workplace positioning, lighting, and design; a little forethought can make a tremendous difference to the ease with which a person with an impairment functions. Objectives: To review the main ergonomic adaptation measures reported in the literature in order to promote better working conditions and safety measures for the visually impaired. Methodology: This was an exploratory-descriptive, qualitative systematic literature review. The main databases used were PubMed, BIREME and LILACS, with articles and studies published between 2000 and 2021. Results: Based on the theoretical principles of ergonomic analysis of work, the main restructurings of the physical space of workstations were: accessibility facilities and assistive technologies; screen readers that capture information from a computer and send it in real time to a speech synthesizer or Braille terminal; installation of voice-recognition software; monitors with enlarged screens; magnification software; adequate lighting and magnifying lenses; as well as recommendations regarding signage and keeping clear the routes used by the visually impaired. Conclusions: Employability rates for people with visual impairments (both those who are blind and those who have low vision) are low and continue to be a worldwide concern and a topic of international interest for researchers. Although numerous authors have identified barriers to employment and proposed strategies to remediate or circumvent those barriers, people with visual impairments continue to experience high rates of unemployment.

Keywords: ergonomic adaptations, visual impairments, ergonomic analysis of work, systematic review

Procedia PDF Downloads 157
1007 Fire Resilient Cities: The Impact of Fire Regulations, Technological and Community Resilience

Authors: Fanny Guay

Abstract:

Building resilience, sustainable buildings, urbanization, climate change, and resilient cities are just a few examples of where research has focused in the last few years. It is obvious that there is a need to rethink how we build our cities and how we renovate our existing buildings. However, the question remains: how can we ensure that we are building sustainable yet resilient cities? There are many aspects one can touch upon when discussing resilience in cities, but after the Grenfell Tower fire in June 2017, it has become clear that fire resilience must be a priority. We define resilience as a holistic approach encompassing communities, society and systems, focusing not only on resisting the effects of a disaster but also on how a system copes with and recovers from it. Cities are an example of such a system, in which components such as buildings have an important role to play. A building on fire will have an impact on the community, the economy, the environment, and thus on the entire system. Therefore, we believe that fire and resilience go hand in hand when we discuss building resilient cities. This article discusses the current state of the concept of fire resilience and suggests actions to support the construction of more fire-resilient buildings. Using the case of Grenfell and the fire safety regulations in the UK, we briefly compare the fire regulations of other European countries, more precisely France, Germany and Denmark, to underline the differences and make suggestions for increasing fire resilience via regulation. For this research, we also include other types of resilience: technological resilience, concerning the structure of the buildings themselves, and community resilience, concerning the role of communities in building resilience. Our findings demonstrate that increasing fire resilience may require amending existing regulations, for example, in how reaction-to-fire tests are performed and how building products are classified. However, as we are looking at national regulations, we are only able to make general suggestions for improvement. Another finding of this research is that the capacity of a community to recover and adapt after a fire is also an essential factor. Fundamentally, fire resilience, technological resilience and community resilience are closely connected. Building resilient cities is not only about sustainable buildings or energy efficiency; it is about ensuring that all aspects of resilience are included when building or renovating. We must ask ourselves questions such as: Who are the users of this building? Where is the building located? What are the components of the building, how was it designed, and which construction products have been used? If we want resilient cities, we must answer these basic questions and ensure that basic factors such as fire resilience are included in our assessments.

Keywords: buildings, cities, fire, resilience

Procedia PDF Downloads 140
1006 Safeguarding the Cloud: The Crucial Role of Technical Project Managers in Security Management for Cloud Environments

Authors: Samuel Owoade, Zainab Idowu, Idris Ajibade, Abel Uzoka

Abstract:

Cloud computing adoption continues to soar, with 83% of enterprise workloads estimated to be in the cloud by 2022. However, this rapid migration raises security concerns, demanding strong security management solutions to safeguard sensitive data and essential applications. This paper investigates the critical role of technical project managers in orchestrating security management initiatives for cloud environments, evaluating their responsibilities, challenges, and best practices for ensuring the resilience and integrity of cloud infrastructures. Drawing from a comprehensive review of industry reports and interviews with cloud security experts, this research highlights the multifaceted landscape of security management in cloud environments. Despite the rapid adoption of cloud services, only 25% of organizations have matured their cloud security practices, indicating a pressing need for effective management strategies. This paper proposes a strategy framework tailored to the demands of technical project managers, outlining the important components of effective cloud security management. Notably, 76% of firms identify misconfiguration as a major source of cloud security incidents, underlining the significance of proactive risk assessment and continuous monitoring. Furthermore, the study emphasizes the importance of technical project managers in facilitating cross-functional collaboration, bridging the gap between cybersecurity professionals, cloud architects, compliance officers, and IT operations teams. With 68% of firms reporting difficulties integrating security policies into their cloud systems, effective communication and collaboration are critical to success. Case studies from industry leaders illustrate the practical application of security management projects in cloud settings. These examples demonstrate the importance of technical project managers in using their expertise to address obstacles and generate meaningful outcomes, with 92% of firms reporting improved security practices after implementing proactive security management tactics. In conclusion, this research underscores the critical role of technical project managers in safeguarding cloud environments against evolving threats. By embracing their role as guardians of the cloud realm, project managers can mitigate risks, optimize resource utilization, and uphold the trust and integrity of cloud infrastructures in an era of digital transformation.
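As a concrete illustration of the proactive monitoring the paper calls for, the sketch below scans S3 buckets for ACLs granting public access, one of the most common cloud misconfigurations. It is a minimal example using boto3, not a tool from the study:

```python
import boto3

# Canned group URIs whose presence in an ACL indicates public
# (or all-authenticated-users) access.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    grants = [g for g in acl["Grants"] if g["Grantee"].get("URI") in PUBLIC_GROUPS]
    if grants:
        print(f"{name}: ACL grants public access ({len(grants)} grant(s))")
```

A check like this would typically run on a schedule and feed alerts into the cross-functional workflow the paper describes, rather than being run by hand.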

Keywords: cloud security, security management, technical project management, cybersecurity, cloud infrastructure, risk management, compliance

Procedia PDF Downloads 21