Search results for: animal identification system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20347

9967 A Prototype of an Information and Communication Technology Based Intervention Tool for Children with Dyslexia

Authors: Rajlakshmi Guha, Sajjad Ansari, Shazia Nasreen, Hirak Banerjee, Jiaul Paik

Abstract:

Dyslexia is a neurocognitive disorder affecting around fifteen percent of the Indian population. The symptoms include difficulty in reading letters, words, and sentences. These difficulties can arise at the phonemic or recognition level and may further affect lexical structures. Therapeutic intervention for dyslexic children after assessment is generally done by special educators and psychologists through one-on-one interaction. Considering the large number of children affected and the scarcity of experts, access to care is limited in India. Moreover, the unavailability of resources and of timely communication with caregivers adds to the problem of proper intervention. With the development of educational technology and its use in India, access to information and care has improved in such a large and diverse country. In this context, this paper proposes an ICT-enabled, home-based intervention program for dyslexic children that would support the child and provide an interactive interface between experts, parents, and students. The paper discusses the details of the database design and system layout of the program. In addition, it highlights the development of the different technical aids required to build personalized Android applications for the Indian dyslexic population. These technical aids include speech database creation for children, an automatic speech recognition system, serious game development, and color-coded fonts. The paper also emphasizes the games developed to give the dyslexic child cognitive training, primarily for attention, working memory, and spatial reasoning. Finally, it discusses the specific elements of the interactive intervention tool that make it effective for home-based intervention of dyslexia.

Keywords: Android applications, cognitive training, dyslexia, intervention

Procedia PDF Downloads 281
9966 Contribution of PALB2 and BLM Mutations to Familial Breast Cancer Risk in BRCA1/2 Negative South African Breast Cancer Patients Detected Using High-Resolution Melting Analysis

Authors: N. C. van der Merwe, J. Oosthuizen, M. F. Makhetha, J. Adams, B. K. Dajee, S-R. Schneider

Abstract:

Women from high-risk breast cancer families who test negative for pathogenic mutations in BRCA1 and BRCA2 are four times more likely to develop breast cancer than women in the general population. Sequencing of genes involved in genomic stability and DNA repair has led to the identification of novel contributors to familial breast cancer risk, including BLM and PALB2. Bloom's syndrome is a rare autosomal recessive chromosomal instability disorder with a high incidence of various types of neoplasia, and heterozygous BLM mutations are associated with breast cancer. PALB2, on the other hand, binds to BRCA2, and together they participate actively in DNA damage repair. Archived DNA samples of 66 BRCA1/2-negative high-risk breast cancer patients were retrospectively selected based on the presence of an extensive family history of the disease (>3 affected members per family). All coding regions and splice-site boundaries of both genes were screened using High-Resolution Melting Analysis. Samples exhibiting variation were subjected to bidirectional automated Sanger sequencing. The clinical significance of each variant was assessed using various in silico and splice-site prediction algorithms. Comprehensive screening identified a total of 11 BLM and 26 PALB2 variants. The variants detected ranged from globally common to rare and included three novel mutations. Three BLM and two PALB2 likely pathogenic mutations were identified that could account for the disease in these extensive breast cancer families in the absence of BRCA mutations (BLM c.11T > A, p.V4D; BLM c.2603C > T, p.P868L; BLM c.3961G > A, p.V1321I; PALB2 c.421C > T, p.Gln141Ter; PALB2 c.508A > T, p.Arg170Ter). Conclusion: The study confirmed the contribution of pathogenic mutations in BLM and PALB2 to the familial breast cancer burden in South Africa, explaining the presence of the disease in 7.5% of the BRCA1/2-negative families with an extensive family history of breast cancer. Segregation analysis will be performed to confirm the clinical impact of these mutations for each of these families. These results justify the inclusion of both genes in a comprehensive breast and ovarian next-generation sequencing cancer panel; they should be screened simultaneously with BRCA1 and BRCA2, as this might explain a significant percentage of familial breast and ovarian cancer in South Africa.

Keywords: Bloom Syndrome, familial breast cancer, PALB2, South Africa

Procedia PDF Downloads 217
9965 Vertebrate Model to Examine the Biological Effectiveness of Different Radiation Qualities

Authors: Rita Emília Szabó, Róbert Polanek, Tünde Tőkés, Zoltán Szabó, Szabolcs Czifrus, Katalin Hideghéty

Abstract:

Purpose: Several features of zebrafish make them amenable to investigations of therapeutic approaches such as ionizing radiation. The establishment of a zebrafish model for comprehensive radiobiological research is the focus of our investigation, in which the radiation effect curves of neutron and photon irradiation are compared. Our final aim is to develop an appropriate vertebrate model for investigating the relative biological effectiveness of laser-driven ionizing radiation. Methods and Materials: After careful dosimetry, series of viable zebrafish embryos were exposed at 24 hours post-fertilization (hpf) to single-fraction whole-body neutron irradiation (1.25, 1.875, 2, and 2.5 Gy) at the research reactor of the Technical University of Budapest and to a conventional 6 MeV photon beam. The survival and morphologic abnormalities (pericardial edema, spine curvature) of each embryo were assessed for each experiment at 24-hour intervals from the point of fertilization up to 168 hpf, defining the dose lethal to 50% of embryos (LD50). Results: In the zebrafish embryo model, the photon LD50 was defined at the 20 Gy dose level, and the same lethality was found at a 2 Gy dose from the reactor neutron beam, resulting in an RBE of 10. Dose-dependent organ perturbations were detected at the macroscopic level (shortening of the body length, spine curvature, microcephaly, micro-ophthalmia, micrognathia, pericardial edema, and inhibition of yolk sac resorption) and the microscopic level (marked cellular changes in the skin, heart, and gastrointestinal system), with the same magnitude of dose difference. Conclusion: We found that the zebrafish embryo model can be used for investigating the effects of different types of ionizing radiation, and this system proved to be a highly efficient vertebrate model for preclinical examinations.
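
The RBE value quoted above follows directly from the ratio of the two doses producing the same effect; a worked form of this standard definition is sketched below for reference (the notation is generic and not taken from the paper).

```latex
% RBE defined as the reference (photon) dose divided by the test (neutron) dose
% that produces the same biological endpoint -- here, 50% embryo lethality.
\mathrm{RBE} = \frac{D_{\mathrm{photon}}(\mathrm{LD_{50}})}{D_{\mathrm{neutron}}(\mathrm{LD_{50}})}
             = \frac{20\ \mathrm{Gy}}{2\ \mathrm{Gy}} = 10
```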

Keywords: ionizing radiation, LD50, relative biological effectiveness, zebrafish embryo

Procedia PDF Downloads 295
9964 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing can be minimized. Parameter design, which seeks to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns. In the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy. Even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the effectiveness of the proposed approach. Finally, the use of traditional experimental design versus the proposed approach for process optimization is discussed.
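
As an illustration of the parameter-design idea described above, the sketch below computes Taguchi-style signal-to-noise (S/N) ratios for a smaller-the-better response and picks the factor levels with the highest S/N. The factor names, levels, and response values are invented for illustration and are not taken from the companies studied.

```python
import numpy as np

# Hypothetical replicated quality measurements (smaller-the-better response)
# for a two-level, three-factor screening experiment; values are illustrative only.
runs = {
    # (temperature, pressure, speed): replicate responses
    ("low",  "low",  "low"):  [3.1, 3.4, 3.3],
    ("low",  "high", "high"): [2.4, 2.6, 2.5],
    ("high", "low",  "high"): [2.9, 3.0, 2.8],
    ("high", "high", "low"):  [2.1, 2.2, 2.3],
}

def sn_smaller_the_better(y):
    """Taguchi S/N ratio: -10*log10(mean of squared responses)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

factors = ["temperature", "pressure", "speed"]
levels = {f: {} for f in factors}

# Average the S/N ratio over every run in which a factor is held at a given level.
for setting, y in runs.items():
    sn = sn_smaller_the_better(y)
    for factor, level in zip(factors, setting):
        levels[factor].setdefault(level, []).append(sn)

for factor in factors:
    means = {lvl: np.mean(v) for lvl, v in levels[factor].items()}
    best = max(means, key=means.get)  # higher S/N = more robust setting
    print(f"{factor}: best level = {best}, mean S/N by level = {means}")
```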

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 131
9963 Exploring Nature and Pattern of Mentoring Practices: A Study on Mentees' Perspectives

Authors: Nahid Parween Anwar, Sadia Muzaffar Bhutta, Takbir Ali

Abstract:

Mentoring is a structured activity designed to facilitate engagement between mentor and mentee in order to enhance the mentee's professional capability as an effective teacher. Both mentor and mentee are important elements of the 'mentoring equation' and play important roles in nourishing this dynamic, collaborative and reciprocal relationship. The Cluster-Based Mentoring Programme (CBMP) provides an indigenous example of a project that focused on the development of primary school teachers in selected clusters, with a particular focus on their classroom practice. A study was designed to examine the efficacy of CBMP as part of the Strengthening Teacher Education in Pakistan (STEP) project. This paper presents results from one component of this study. As part of the larger study, a cross-sectional survey was employed to explore the nature and patterns of the mentoring process from mentees' perspectives in selected districts of Sindh and Balochistan. This paper focuses on the results related to the question: What are mentees' perceptions of their mentors' support for enhancing their classroom practice during the mentoring process? Data were collected from mentees (n=1148) using a 5-point scale, 'Mentoring for Effective Primary Teaching' (MEPT). MEPT focuses on seven factors of mentoring: personal attributes, pedagogical knowledge, modelling, feedback, system requirements, development and use of material, and gender equality. Data were analysed using SPSS 20. Mentees' perceptions of their mentors' mentoring practice were summarized using means and standard deviations. Results showed that mean scale scores on mentees' perceptions of their mentors' practices fell between 3.58 (system requirements) and 4.55 (personal attributes). Mentees perceived the personal attributes of the mentor as the most significant factor (M=4.55) in streamlining the mentoring process by building a good relationship between mentor and mentees. Furthermore, mentees shared positive views about their mentors' efforts towards promoting gender impartiality (M=4.54) during workshops and follow-up visits. In contrast, mentees felt that more could have been done by their mentors in sharing knowledge about system requirements (e.g. school policies, the national curriculum). Furthermore, some aspects of the high-scoring factors were highlighted by the mentees as areas for further improvement (e.g. assistance in timetabling, written feedback, encouragement to develop learning corners). Mentees' perceptions of their mentors' practices may assist in determining mentoring needs. The results may prove useful for professional development programmes for mentors and mentees in specific mentoring programmes aimed at enhancing practices in primary classrooms in Pakistan. The results also contribute to the body of much-needed knowledge from a developing context.

Keywords: cluster-based mentoring programme, mentoring for effective primary teaching (MEPT), professional development, survey

Procedia PDF Downloads 221
9962 Designing Energy Efficient Buildings for Seasonal Climates Using Machine Learning Techniques

Authors: Kishor T. Zingre, Seshadhri Srinivasan

Abstract:

Energy consumption by the building sector is increasing at an alarming rate throughout the world, leading to more building-related CO₂ emissions into the environment. In buildings, the main contributors to energy consumption are heating, ventilation, and air-conditioning (HVAC) systems, lighting, and electrical appliances. It is hypothesised that energy efficiency in buildings can be achieved by implementing sustainable technologies such as i) enhancing the thermal resistance of fabric materials to reduce heat gain (in hotter climates) and heat loss (in colder climates), ii) enhancing daylighting and lighting systems, iii) improving the HVAC system, and iv) occupant localization. The energy performance of these sustainable technologies is highly dependent on climatic conditions. This paper investigates the use of machine learning techniques for accurate prediction of air-conditioning energy in seasonal climates. The data required to train the machine learning techniques were obtained from computational simulations performed on a three-story commercial building using the EnergyPlus program coupled with OpenStudio and Google SketchUp. The EnergyPlus model was calibrated against experimental measurements of surface temperatures and heat flux prior to being employed for the simulations. The simulations show that the performance of sustainable fabric materials (for walls, roof, and windows), such as phase change materials, insulation, and cool roofs, varies with the climate conditions. Various renewable technologies were also applied to the building's flat roof in different climates to investigate the potential for electricity generation. It was observed that the proposed technique overcomes the shortcomings of existing approaches, such as local linearization or over-simplifying assumptions. In addition, the proposed method can be used for real-time estimation of building air-conditioning energy.
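
A minimal sketch of the prediction step described above is given below: a regression model is trained on simulation-derived features to predict air-conditioning energy. The feature names, the synthetic data, and the use of scikit-learn's random forest are assumptions for illustration; the paper does not specify the model or inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for EnergyPlus simulation output: climate/fabric features and the
# resulting air-conditioning energy (kWh). Real rows would come from the
# calibrated EnergyPlus/OpenStudio model described in the abstract.
n = 2000
X = np.column_stack([
    rng.uniform(22, 38, n),    # outdoor dry-bulb temperature (C)
    rng.uniform(40, 90, n),    # relative humidity (%)
    rng.uniform(0, 900, n),    # solar irradiance (W/m2)
    rng.uniform(0.3, 3.0, n),  # wall U-value (W/m2K), varies with fabric choice
])
y = 0.8 * X[:, 0] + 0.02 * X[:, 2] + 5.0 * X[:, 3] + rng.normal(0, 1, n)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE [kWh]:", mean_absolute_error(y_test, model.predict(X_test)))
```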

Keywords: building energy efficiency, energyplus, machine learning techniques, seasonal climates

Procedia PDF Downloads 105
9961 Optimal Design of Wind Turbine Blades Equipped with Flaps

Authors: I. Kade Wiratama

Abstract:

As a result of the significant growth in wind turbine size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Amongst them, trailing edge flaps have proven to be effective devices for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing energy capture capability rather than alleviating blade loads. A software tool was specially developed for the aerodynamic simulation of wind turbines with blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem that gives the best value of the controlling parameter at each wind turbine run condition. By developing a genetic algorithm optimisation tool specially designed for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps was constructed. The design optimisation tool was employed to carry out design case studies. The results of design case studies on the AWT-27 wind turbine reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in power extraction. The best location for placing a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the amount of enhancement in average power; this effect, however, diminishes rapidly as the size increases. For constant-speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by about 5%. However, by also re-designing the blade pre-twist, the overall improvement can reach as high as 12%.
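
A toy version of the genetic-algorithm loop described above is sketched below: candidate (flap location, flap size) pairs are evolved against a stand-in power-gain function. The objective function is purely illustrative; in the actual tool it would be the aerodynamic performance evaluator coupled with the inner controller optimisation.

```python
import numpy as np

rng = np.random.default_rng(1)

def power_gain(location, size):
    """Stand-in for the aerodynamic evaluator: peaks near 70% span and
    saturates with flap size, loosely mimicking the reported trends."""
    return np.exp(-((location - 0.7) / 0.15) ** 2) * (1.0 - np.exp(-6.0 * size))

def evolve(pop_size=40, generations=60):
    # Genome: [spanwise location in 0..1, flap size as fraction of span 0..0.2]
    pop = np.column_stack([rng.uniform(0, 1, pop_size), rng.uniform(0, 0.2, pop_size)])
    for _ in range(generations):
        fitness = power_gain(pop[:, 0], pop[:, 1])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]        # keep the best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        children += rng.normal(0, [0.03, 0.01], children.shape)    # Gaussian mutation
        children = np.clip(children, [0, 0], [1, 0.2])
        pop = np.vstack([parents, children])
    return pop[np.argmax(power_gain(pop[:, 0], pop[:, 1]))]

loc, size = evolve()
print(f"best flap location ~{loc:.2f} span, size ~{size:.3f} span")
```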

Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero

Procedia PDF Downloads 326
9960 An Ergonomic Evaluation of Three Load Carriage Systems for Reducing Muscle Activity of Trunk and Lower Extremities during Giant Puppet Performing Tasks

Authors: Cathy SW. Chow, Kristina Shin, Faming Wang, B. C. L. So

Abstract:

During some dynamic giant puppet performances, an ergonomically designed load carrier system is necessary for the puppeteers to carry the heavy load of a giant puppet body with minimum muscle stress. A load carrier (i.e. the prototype) was designed with two small wheels on the feet and a hybrid spring device on the knees to assist the sliding and knee-bending movements, respectively. The purpose of this study was therefore to evaluate the effect of three load carriers: two commercially available load-mounting systems, Tepex and SuitX, and the prototype. Ten male participants were recruited for the experiment. Surface electromyography (sEMG) was used to record the participants' muscle activities during forward-moving and bouncing tasks, with and without a load of 11.1 kg positioned 60 cm above the shoulder. Five bilateral muscles, the lumbar erector spinae (LES), rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and gastrocnemius (GM), were selected for data collection. During the forward-moving task, the sEMG data showed the smallest muscle activities with the Tepex harness, which was consistently the lowest; relative to it, the prototype and SuitX were higher by 68.99% and 64.99% on the left LES, 26.57% and 82.45% on the right LES, 87.71% and 47.61% on the left RF, 143.57% and 24.28% on the right RF, 80.21% and 22.23% on the left BF, 96.02% and 21.83% on the right BF, 6.32% and 4.47% on the right TA, and 5.89% and 12.35% on the left GM, respectively. These results reflect that mobility was highly restricted by the tested exoskeleton devices. On the other hand, the sEMG data from the bouncing task showed the smallest muscle activities with the prototype, which was consistently the lowest; relative to it, the Tepex harness and SuitX were higher by 6.65% and 104.93% on the left LES, 23.56% and 92.19% on the right LES, 33.21% and 93.26% on the left BF, 24.70% and 81.16% on the right BF, 46.51% and 191.02% on the left TA, 12.75% and 125.76% on the right TA, 31.54% and 68.36% on the left GM, and 95.95% and 96.43% on the right GM, respectively.

Keywords: exoskeleton, giant puppet performers, load carriage system, surface electromyography

Procedia PDF Downloads 93
9959 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires, and even explosions when the release meets ignition sources in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment occurs. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research, accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive and hence generally not a viable alternative from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, which is then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While supervised machine learning models require large training datasets for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
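
To make the simulation-to-classifier workflow concrete, the sketch below trains a supervised classifier on pressure/flow/temperature (PVT) features of the kind a CFD or transient-flow model could generate. The synthetic feature construction and the gradient-boosting classifier are our assumptions for illustration, not the detection system evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Synthetic stand-in for CFD-generated snapshots at two sensor stations:
# columns = [upstream P, downstream P, flow in, flow out, temperature].
n = 3000
normal = np.column_stack([
    rng.normal(60, 1.0, n), rng.normal(55, 1.0, n),
    rng.normal(300, 5, n), rng.normal(300, 5, n), rng.normal(40, 2, n),
])
leak = normal.copy()
leak[:, 1] -= rng.uniform(1, 4, n)    # extra downstream pressure drop
leak[:, 3] -= rng.uniform(5, 20, n)   # mass imbalance between inlet and outlet

X = np.vstack([normal, leak])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = loss of containment

clf = GradientBoostingClassifier(random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```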

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 172
9958 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have become a major security threat; hence, it is important to impede such intrusions. Impeding intrusions relies entirely on their detection, which is the primary concern of any security tool such as an Intrusion Detection System (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques are limited by their use of the raw data set for classification; the classifier may be confused by redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns (LBP) can be applied to transform raw features into a principal feature space and to select features based on their sensitivity, with eigenvalues used to determine the sensitivity. To refine the selected features further, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification, the Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used due to their proven classification ability. The Knowledge Discovery and Data Mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize detection rates.
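
A compact sketch of the feature-reduction and classification chain described above (PCA projection into a principal feature space followed by an SVM) is given below. It uses synthetic placeholder data rather than the KDD'99 records, and the PSO/greedy feature-selection stage is omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Placeholder for KDD'99-style connection records: 41 raw features, binary label
# (0 = normal traffic, 1 = attack). Real experiments would load the KDD'99 set.
X = rng.normal(size=(4000, 41))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 4000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),          # scale raw features before projection
    PCA(n_components=10),      # keep the most informative principal components
    SVC(kernel="rbf", C=1.0),  # classify in the reduced feature space
)
pipeline.fit(X_train, y_train)
print("detection accuracy:", pipeline.score(X_test, y_test))
```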

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)

Procedia PDF Downloads 351
9957 Exploring Methods for Urbanization of 'Village in City' in China: A Case Study of Hangzhou

Authors: Yue Wang, Fan Chen

Abstract:

After the economic reform in 1978, urbanization in China grew rapidly, pushing cities to expand at an unprecedented speed. Surrounding villages were annexed unprepared, resulting in a new type of community called the 'village in city.' Two things happened here. First, having lost their land, the locals gave up farming and turned to secondary and tertiary industries. Second, attracted by the high incomes in cities and the low rents in these communities, large numbers of migrants moved in. Such areas are important to a rapidly growing city because they provide a transitional zone, but owing to their passivity and low level of development, 'villages in city' have caused many problems for the city. Densities of population and construction are both high, while facilities are severely inadequate. Unplanned and illegal structures are built, which creates a complex mixed-function area and a poor residential environment. In addition, the locals have a strong consciousness of their property rights over the land, which holds back the transformation and development of the community. Although land capitalization can bring significant benefits, it is inappropriate to give large financial compensation to the locals, and considering the large population of city migrants, it is important to explore the relationship among the 'village in city,' city immigrants, and the city itself. Taking Hangzhou as an example, this paper analyzes the development process, spatial distribution of functions, industrial structure, and current traffic system of the 'village in city.' Based on this research, the paper puts forward a general planning method involving the following measures: adding city functions, building civil facilities, re-planning the spatial distribution of functions, changing the composition of local industry, and planning a new traffic system. Under this plan, the 'village in city' can finally be absorbed into the city and make its own contribution to urbanization.

Keywords: China, city immigrant, urbanization, village in city

Procedia PDF Downloads 206
9956 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods

Authors: Shima Nabinejad, Holger Schüttrumpf

Abstract:

Farmers living in flood-prone areas such as coasts are exposed to storm surges intensified by climate change. Crop cultivation is the most important economic activity of farmers, and in times of flooding, agricultural land is subject to inundation. Additionally, overflowing saline water causes more severe damage than riverine flooding. Agricultural crops are more vulnerable to salinity than other land uses; the economic damage may continue for a number of years after flooding and affect farmers' decision-making for the following year. Therefore, it is essential to assess to what extent the agricultural areas are flooded and how large the associated flood damage to each individual farmer is. To address these questions, we integrated farmers' decision-making at farm scale with flood risk management. The integrated model includes identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas, and analysis of the economic damages experienced by each farmer. The present study has two aims: firstly, it investigates the flooded cropland and potential crop damage for the whole area; secondly, it compares them among farmers' fields for three flood scenarios, which differ in the breach locations of the flood protection structure. To achieve this, the spatial distribution of the farmers' fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, which is located in the German Wadden Sea National Park and surrounded by the North Sea. Due to the high salt content of North Sea water, crops cultivated in the agricultural areas of Pellworm Island are 100% destroyed by storm surges, which was taken into account in developing the depth-damage curve for the analysis of consequences. As a result, inundated cropland and economic damage to crops were estimated for the whole island and further compared for six selected farmers under three flood scenarios. The results demonstrate the significance and flexibility of the proposed model for flood risk assessment of flood-prone areas by integrating flood risk management and decision-making.
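
The per-farmer aggregation described above can be made concrete with a short sketch: given the inundation state of each field under a scenario and the crop value of that field, damage is summed per farmer, using the abstract's assumption that inundated crops on Pellworm are a total (100%) loss. The field IDs, areas, and values below are invented for illustration.

```python
# Hypothetical field inventory: field -> (farmer, area in ha, crop value in EUR/ha).
fields = {
    "F01": ("farmer_A", 12.0, 1800.0),
    "F02": ("farmer_A",  8.5, 2100.0),
    "F03": ("farmer_B", 20.0, 1500.0),
    "F04": ("farmer_C",  6.0, 2500.0),
}

# Inundated fields per breach scenario, e.g. taken from the hydraulic model output.
scenarios = {
    "breach_north": {"F01", "F02"},
    "breach_south": {"F03"},
    "breach_west":  {"F02", "F03", "F04"},
}

def damage_per_farmer(inundated):
    """Sum crop losses per farmer, assuming 100% loss of inundated saline-flooded crops."""
    losses = {}
    for field_id, (farmer, area, value_per_ha) in fields.items():
        if field_id in inundated:
            losses[farmer] = losses.get(farmer, 0.0) + area * value_per_ha
    return losses

for name, inundated in scenarios.items():
    print(name, damage_per_farmer(inundated))
```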

Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges

Procedia PDF Downloads 245
9955 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients

Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff

Abstract:

Purpose: Pathological dysregulation of neurovascular coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI has the ability to localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficiency, and assessing the long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual processing, language processing, and executive functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated as percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, measured an average of 7.6 months post-treatment, shows that the SIS improvement is maintained and even increases, with an average of 90.6 percent improvement from the original scan. Conclusions: fNCI provides a reliable measurement of NVC, allowing for identification of concussion pathology. Additionally, fNCI-derived SIS scores direct tailored therapy to restore NVC, subsequently resolving chronic PCS resulting from mTBI.
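
The percent-change statistic reported above can be expressed simply; the sketch below assumes a higher SIS reflects greater neural dysregulation, so improvement is the relative reduction from the pre-treatment to the post-treatment score. Both that assumption and the example values are ours, for illustration only.

```python
def percent_improvement(sis_pre, sis_post):
    """Relative reduction in Severity Index Score (assuming higher SIS = worse dysregulation)."""
    return 100.0 * (sis_pre - sis_post) / sis_pre

# Illustrative pre/post SIS pairs for a handful of hypothetical patients.
patients = [(1.8, 0.4), (2.2, 0.5), (1.5, 0.9), (2.0, 0.2)]
gains = [percent_improvement(pre, post) for pre, post in patients]

mean_gain = sum(gains) / len(gains)
share_over_60 = sum(g >= 60 for g in gains) / len(gains)
print(f"mean improvement: {mean_gain:.1f}%, share with >=60% improvement: {share_over_60:.0%}")
```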

Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)

Procedia PDF Downloads 330
9954 Hypoglycemic and Hypolipidemic Effects of Aqueous Flower Extract from Nyctanthes arbor-tristis L.

Authors: Brahmanage S. Rangika, Dinithi C. Peiris

Abstract:

A boiled aqueous flower extract (AFE) of Nyctanthes arbor-tristis L. (Family: Oleaceae) is used in the traditional Sri Lankan medicinal system to treat diabetes. However, this use has not been scientifically validated, and the mechanisms by which the flowers act against diabetes have not been investigated. The present study was carried out to examine the hypoglycemic potential and toxicity of the aqueous flower extract of N. arbor-tristis. AFE was prepared, and mice were treated orally either with 250, 500, or 750 mg/kg of AFE or with distilled water (control). Fasting and random blood glucose levels were determined. In addition, the toxicity of AFE was determined using chronic oral administration. In normoglycemic mice, the mid dose (500 mg/kg) of AFE significantly (p < 0.01) reduced fasting blood glucose levels by 49% at 4 h post treatment. Further, 500 mg/kg of AFE significantly (p < 0.01) lowered the random blood glucose level of non-fasted normoglycemic mice. AFE significantly lowered total cholesterol and triglyceride levels while increasing HDL levels in the serum. Further, AFE significantly inhibited glucose absorption from the lumen of the intestine and increased glucose uptake by the diaphragm. Alpha-amylase inhibitory activity was also evident. However, AFE did not induce any overt signs of toxicity or hepatotoxicity. There were no adverse effects on food and water intake or on the body weight of mice during the experimental period. It can be concluded that the AFE of N. arbor-tristis possesses safe oral antidiabetic potential mediated via multiple mechanisms. The results of the present study provide scientific support for the claims made about the use of N. arbor-tristis in the treatment of diabetes mellitus in the traditional Sri Lankan medicinal system. Further, the flowers can also be used as a remedy to improve the blood lipid profile.

Keywords: aqueous extract, hypoglycemic hypolipidemic, Nyctanthes arbor-tristis flowers, hepatotoxicity

Procedia PDF Downloads 357
9953 Removal of Cr (VI) from Water through Adsorption Process Using GO/PVA as Nanosorbent

Authors: Syed Hadi Hasan, Devendra Kumar Singh, Viyaj Kumar

Abstract:

Cr (VI) is a known toxic heavy metal and has been considered a priority pollutant in water. The effluents of various industries, including electroplating, anodizing baths, leather tanning, steel production, and chromium-based catalyst manufacturing, are the major sources of Cr (VI) contamination in the aquatic environment. Cr (VI) shows high mobility in the environment and can easily penetrate the cell membranes of living tissues to exert noxious effects. Cr (VI) contamination in drinking water causes various hazardous effects on human health, such as cancer, skin and stomach irritation or ulceration, dermatitis, and damage to the liver, kidneys, circulation, and nerve tissue. Herein, an attempt has been made to develop an efficient adsorbent for the removal of Cr (VI) from water. For this purpose, a nanosorbent composed of polyvinyl alcohol-functionalized graphene oxide (GO/PVA) was prepared. The obtained GO/PVA was characterized through FTIR, XRD, SEM, and Raman spectroscopy. The as-prepared GO/PVA nanosorbent was utilized for the removal of Cr (VI) in batch-mode experiments. The process variables, such as contact time, initial Cr (VI) concentration, pH, and temperature, were optimized. A maximum Cr (VI) removal of 99.8% was achieved at an initial Cr (VI) concentration of 60 mg/L, pH 2, and a temperature of 35 °C, and equilibrium was reached within 50 min. The two widely used isotherm models, Langmuir and Freundlich, were analyzed using the linear correlation coefficient (R²), and it was found that the Langmuir model gives the best fit, with a high R² value, for the data of the present adsorption system, indicating monolayer adsorption of Cr (VI) on the GO/PVA. Kinetic studies were also conducted using pseudo-first-order and pseudo-second-order models, and it was observed that the chemisorptive pseudo-second-order model described the kinetics of the adsorption system better, with a high correlation coefficient. Thermodynamic studies were also conducted, and the results showed that the adsorption was spontaneous and endothermic in nature.
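
For reference, the standard forms of the models named above are reproduced below (these are the textbook equations, not expressions taken from the paper): the Langmuir and Freundlich isotherms and the pseudo-second-order kinetic model, where q_e is the equilibrium adsorption capacity, C_e the equilibrium Cr (VI) concentration, and q_t the capacity at time t.

```latex
% Langmuir isotherm (monolayer adsorption) and its linearised form:
q_e = \frac{q_{max} K_L C_e}{1 + K_L C_e}, \qquad
\frac{C_e}{q_e} = \frac{1}{q_{max} K_L} + \frac{C_e}{q_{max}}

% Freundlich isotherm (heterogeneous surface):
q_e = K_F \, C_e^{1/n}

% Pseudo-second-order kinetics (chemisorption-controlled):
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
```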

Keywords: adsorption, GO/PVA, isotherm, kinetics, nanosorbent, thermodynamics

Procedia PDF Downloads 381
9952 An Investigation Enhancing E-Voting Application Performance

Authors: Aditya Verma

Abstract:

E-voting using blockchain provides a distributed system in which data is present on every node in the network and is reliable and secure due to the blockchain's immutability. This work compares various blockchain consensus algorithms previously used for e-voting applications on the basis of performance and node scalability, selects the optimal one, and improves on a previous implementation by proposing solutions to the loopholes of that consensus algorithm in the chosen application, e-voting.

Keywords: blockchain, parallel BFT, consensus algorithms, performance

Procedia PDF Downloads 155
9951 An Investigation of Wind Loading Effects on the Design of Elevated Steel Tanks with Lattice Tower Supporting Structures

Authors: J. van Vuuren, D. J. van Vuuren, R. Muigai

Abstract:

In recent times, South Africa has experienced extensive droughts that have created the need for reliable small water reservoirs. These reservoirs have comparatively quick fabrication and installation times compared to market alternatives. An elevated water tank has inherent potential energy, so no additional water pumps are required to sustain water pressure at the outlet point, thus ensuring that a water source is available even without electricity. The initial construction formwork and the complex geometric shape of concrete towers that require casting can be time-consuming, rendering steel towers preferable. Reinforced concrete foundations, cast in advance, are required to be of sufficient strength. Thereafter, the prefabricated steel supporting structure and tank, which consist of steel panels, can be assembled and erected on site within a couple of days. Due to the time effectiveness of this system, it has become a popular solution to aid drought-stricken areas. These sites are normally in rural areas, schools, or farmland. As these tanks can contain up to 2000 kL (approximately 19.62 MN) of water, supported by lattice steel structures ranging between 5 m and 30 m in height, failure of one of the supporting members will result in system failure. Thus, there is a need to gain a comprehensive understanding of the operating conditions arising from wind loading on both the tank and the supporting structure. The aim of the research is to investigate the relationship between the theoretical wind loading on a lattice steel tower in combination with an elevated sectional steel tank and the current wind loading codes applicable to South Africa. The research compares the respective design parameters (both theoretical and code-based wind loading), whereby FEA analyses are conducted on the various design solutions. The currently available wind loading codes are not sufficient to design the slender cantilevered latticed steel towers that support elevated water storage tanks. Numerous factors in the design codes are not comprehensively considered when designing the system, as these codes depend on various assumptions. Factors that require investigation in this study are: the wind loading angle to the face of the structure that results in the maximum load; the internal structural effects in models with different bracing patterns; the influence of the tank's aspect ratio on loading; and the influence of the tank's clearance height on the structural members. Wind loads, as the variable that results in the highest failure rate of cantilevered lattice steel tower structures, require greater understanding. This study aims to contribute towards the design process of elevated steel tanks with lattice tower supporting structures.

Keywords: aspect ratio, bracing patterns, clearance height, elevated steel tanks, lattice steel tower, wind loads

Procedia PDF Downloads 137
9950 Life Cycle Assessment of Mass Timber Structure, Construction Process as System Boundary

Authors: Mahboobeh Hemmati, Tahar Messadi, Hongmei Gu

Abstract:

Today, life cycle assessment (LCA) is a leading method for mitigating the environmental impacts of the building sector. In this paper, LCA is used to quantify the greenhouse gas (GHG) emissions during the construction phase of the largest mass timber residential structure in the United States, Adohi Hall. This building is a 200,000 square foot, 708-bed complex located on the campus of the University of Arkansas. The energy used for building operation is the dominant source of emissions in the building industry. Recently, however, efforts have succeeded in increasing the efficiency of building operation in terms of emissions. As a result, attention has now shifted to embodied carbon, which has become more prominent in the building life cycle. Unfortunately, most studies have focused on the manufacturing stage, and only a few have addressed the construction process to date. In particular, little data is available on the environmental impacts associated with the construction of mass timber. This study therefore presents an assessment of the environmental impact of the construction process based on the real, newly built mass timber building mentioned above. The system boundary of this study covers modules A4 and A5 of the building LCA standard EN 15978. Module A4 includes material and equipment transportation; module A5 covers the construction and installation process. The research proceeds in two stages: first, quantifying the materials and equipment deployed in the building, and second, determining the embodied carbon associated with running the equipment that transports construction materials to, and installs them on, the site where the building is erected. The Global Warming Potential (GWP) of the building is the primary metric considered in this research. The outcomes of this study bring to the fore a better understanding of emission hotspots during the construction process. Moreover, a comparative analysis of the mass timber construction process with that of a theoretically similar steel building will enable an effective assessment of the environmental efficiency of mass timber.
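
A stripped-down sketch of the module A4/A5 accounting described above is shown below: transport emissions are taken as mass x distance x an emission factor, and installation emissions as equipment runtime x fuel burn x a fuel factor. All quantities and emission factors here are placeholders, not values from the Adohi Hall inventory.

```python
# Module A4: transport of materials to site (mass in t, distance in km).
# Module A5: on-site construction and installation (equipment runtime in h).
# Emission factors are illustrative placeholders, not project data.
TRUCK_FACTOR_KG_CO2E_PER_T_KM = 0.10
DIESEL_FACTOR_KG_CO2E_PER_L = 2.68

materials = [
    {"name": "CLT panels",   "mass_t": 1200.0, "distance_km": 900.0},
    {"name": "glulam beams", "mass_t": 400.0,  "distance_km": 900.0},
]
equipment = [
    {"name": "mobile crane", "hours": 600.0, "fuel_l_per_h": 25.0},
    {"name": "telehandler",  "hours": 900.0, "fuel_l_per_h": 10.0},
]

a4_gwp = sum(m["mass_t"] * m["distance_km"] * TRUCK_FACTOR_KG_CO2E_PER_T_KM
             for m in materials)
a5_gwp = sum(e["hours"] * e["fuel_l_per_h"] * DIESEL_FACTOR_KG_CO2E_PER_L
             for e in equipment)

print(f"A4 transport GWP:    {a4_gwp / 1000:.1f} t CO2e")
print(f"A5 installation GWP: {a5_gwp / 1000:.1f} t CO2e")
```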

Keywords: construction process, GWP, LCA, mass timber

Procedia PDF Downloads 153
9949 E-Learning Platform for School Kids

Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.

Abstract:

E-learning is a crucial component of intelligent education, and even in the midst of a pandemic, it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students; here, we decided to create an e-learning framework for children. We have identified a few issues that teachers face with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping; the teacher is unable to keep track of each and every student. A key challenge in e-learning is online exams, because students can cheat easily during them, so exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is to present an e-learning platform for math education and to include games for kids as an alternative teaching method for math students. The games will be accessible via a web browser, with imagery drawn in a cartoonish style, helping students learn math through play. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions. As a result, the checker has a difficult time evaluating theory answers. The current system requires more manpower and takes a long time to evaluate responses, and it is also possible for two identical responses to be marked differently and receive two different grades. Therefore, this application employs machine learning techniques to provide an automatic evaluation of subjective responses based on keywords provided to the computer and matched against the student's input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing, and natural language processing technologies to develop these research components.
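
One minimal way to realise the keyword-based evaluation of subjective answers mentioned above is sketched below: the marker supplies weighted keywords, and a student answer is scored by how many of them it contains. This is an illustrative baseline only, not the deep-learning/NLP pipeline the project ultimately uses, and the marking scheme shown is hypothetical.

```python
import re

def score_answer(answer, keywords):
    """Award the weight of each expected keyword found in the student's answer.

    `keywords` maps an expected term to its mark weight; matching is
    case-insensitive and whole-word, a deliberately simple baseline.
    """
    text = answer.lower()
    earned = 0.0
    for term, weight in keywords.items():
        if re.search(r"\b" + re.escape(term.lower()) + r"\b", text):
            earned += weight
    return earned, sum(keywords.values())

# Hypothetical marking scheme for "Explain why the sum of two even numbers is even."
keywords = {"even": 1.0, "2k": 1.0, "divisible": 1.0, "factor": 1.0}
answer = "An even number is divisible by 2, so 2a + 2b = 2(a + b) is also even."

earned, total = score_answer(answer, keywords)
print(f"score: {earned}/{total}")
```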

Keywords: math, education games, e-learning platform, artificial intelligence

Procedia PDF Downloads 139
9948 A Quantitative Study on the “Unbalanced Phenomenon” of Mixed-Use Development in the Central Area of Nanjing Inner City Based on the Meta-Dimensional Model

Authors: Yang Chen, Lili Fu

Abstract:

Promoting urban regeneration in existing areas has been elevated to a national strategy in China. In this context, because of the multidimensional sustainability effects achieved through the intensive use of land, mixed-use development has become an important objective of high-quality urban regeneration in the inner city. However, over the long period since China's reform and opening up, the 'unbalanced phenomenon' of mixed-use development in China's inner cities has been very serious. On the one hand, excessive focus on certain individual spaces has raised the level of mixed-use development in some areas substantially ahead of others, resulting in a growing gap between different parts of the inner city; on the other hand, excessive focus on a single dimension of the spatial organization of mixed-use development, such as the enhancement of functional mix or spatial capacity, has led to lagging or neglected construction of other dimensions, such as pedestrian permeability, green environmental quality, and social inclusion. This phenomenon is particularly evident in the central area of the inner city and clearly runs counter to the need for sustainable development in China's new era. Therefore, a rational qualitative and quantitative analysis of the 'unbalanced phenomenon' will help to identify the problem and provide a basis for formulating relevant optimization plans in the future. This paper builds a dynamic evaluation method for mixed-use development based on a meta-dimensional model and then uses spatial evolution analysis and spatial consistency analysis with ArcGIS software to reveal the 'unbalanced phenomenon' over the past 40 years in the central area of Nanjing, a typical Chinese city facing regeneration. The study finds that, compared to the increase in functional mix and capacity, the dimensions of residential space mix, public service facility mix, pedestrian permeability, and greenness in Nanjing's central area showed varying degrees of lagging improvement, and that the unbalanced development problems differ across parts of the city center, so governance and planning for future mixed-use development need to address these problems fully. The research methodology of this paper provides a tool for comprehensively and dynamically identifying changes in the level of mixed-use development, and the results deepen knowledge of the evolution of mixed-use development patterns in China's inner cities and provide a reference for future regeneration practices.

Keywords: mixed-use development, unbalanced phenomenon, the meta-dimensional model, over the past 40 years of Nanjing, China

Procedia PDF Downloads 87
9947 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. Considering the world economy as a whole, SMEs represent 95% of all businesses, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth in sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data on governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared with the goal of optimizing prediction performance. The results are compared to those from previous studies in order to assess the contribution of Web-derived growth indicators to increasing the predictive power of the model.
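
The classifier comparison described above can be outlined in a few lines; the sketch below cross-validates an SVM, a random forest, and a small neural network on a placeholder feature matrix standing in for the web-mined and insurer-provided SME features, which are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(11)

# Placeholder data: rows = SMEs, columns = web-mined + firmographic features,
# label = growth within the observation window (1) or not (0).
X = rng.normal(size=(1500, 25))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1500) > 0).astype(int)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "Neural Network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    ),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```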

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 248
9946 Technological and Economic Investigation of Concentrated Photovoltaic and Thermal Systems: A Case Study of Iran

Authors: Moloud Torkandam

Abstract:

Cities must be designed and built in a way that minimizes their need for fossil fuels. This principle was undoubtedly respected in previous eras, as reflected in their modes of construction; perhaps it is only because of the great diversity of materials and new technologies in the contemporary era that it has been forgotten in buildings. The question of optimizing energy consumption in buildings has attracted a great deal of attention in many countries, and in this way, some countries have been able to cut energy consumption by up to 30 percent. Energy consumption in our country is remarkably higher than global standards, and the most important reason is the undesirable state of buildings in terms of energy performance. In addition to protecting natural and fuel resources for future generations, reducing the use of fossil energy may also bring about desirable outcomes such as a decrease in greenhouse gases (whose emissions cause global warming, the melting of polar ice, sea level rise, and climatic changes of planet Earth), a decrease in the destructive effects of contamination in residential complexes and especially urban environments, and progress towards national self-sufficiency, the country's independence, and the preservation of national capital. This research recognizes that, in this day and age, living sustainably is a prerequisite for ensuring a bright future and a high quality of life. In achieving this standard of living, we will maintain the functions and ability of our environment to serve and sustain our livelihoods. Electricity is now an integral part of modern life, a basic necessity. In the provision of electricity, we are committed to respecting the environment by reducing the use of fossil fuels through proven technologies that use local renewable and natural resources as their energy source. As far as this research is concerned, it is necessary to work on different types of energy production, such as solar energy and concentrated photovoltaic and thermal (CPVT) systems.

Keywords: energy, photovoltaic, thermal system, solar energy, CPVT

Procedia PDF Downloads 68
9945 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels larger than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and arranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers according to the diameter-defined Strahler system are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop was predicted, the flow rate was corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
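
For reference, the standard form of Murray's law applied at each bifurcation is given below, together with a Poiseuille-type pressure drop, which is one common way to relate flow rate and pressure in a vessel segment; the notation is generic and the exact flow relation used in the paper may differ.

```latex
% Murray's law at a bifurcation: the cube of the parent radius equals the sum
% of the cubes of the daughter radii.
r_{0}^{3} = r_{1}^{3} + r_{2}^{3}

% Poiseuille pressure drop along a segment of length L and radius r carrying
% steady flow Q of a fluid with viscosity \mu:
\Delta P = \frac{8 \mu L Q}{\pi r^{4}}
```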

Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system

Procedia PDF Downloads 257
9944 Spatial Variability of Renieramycin-M Production in the Philippine Blue Sponge, Xestospongia Sp.

Authors: Geminne Manzano, Porfirio Aliño, Clairecynth Yu, Lilibeth Salvador-Reyes, Viviene Santiago

Abstract:

Many marine benthic organisms produce secondary metabolites that serve ecological roles in response to different biological and environmental factors. The secondary metabolites found in organisms such as algae, sponges, tunicates, and worms exhibit variation at different scales. Understanding this chemical variation can be essential in deriving the evolutionary and ecological functions of the secondary metabolites that may explain their patterns. Ecological surveys were performed at two collection sites representing two Philippine marine biogeographic regions, Oriental Mindoro on the West Philippine Sea (WPS) and Zamboanga del Sur on the Celebes Sea (CS), where a total of 39 Xestospongia sp. sponges were collected using SCUBA. The sponge samples were transported to the laboratory for taxonomic identification and chemical analysis. Biological and environmental factors were investigated to determine their relation to the sponges' abundance and distribution patterns and to the spatial variability of their secondary metabolite production. Extracts were subjected to thin-layer chromatography (TLC) and anti-proliferative assays to confirm the presence of renieramycin-M and to test its cytotoxicity. The blue sponges were found to be more abundant in the WPS than in the CS. Both the benthic and fish communities at the Oriental Mindoro (WPS) and Zamboanga del Sur (CS) sites are characterized by high species diversity and abundance and a very high biomass category. Environmental factors such as depth and monsoonal exposure were also compared, showing that wave exposure and depth are associated with the abundance and distribution of the sponges. Comparison of the TLC profiles of sponge extracts from the WPS and the CS showed differences in renieramycin-M presence, and differences in the presence of other functional groups were also observed between the two sites. In terms of bioactivity, the sponge extracts from the two regions exhibited different responses, which also varied depending on the cell lines tested. Exploring the influence of ecological parameters on chemical variation can provide deeper chemical-ecological insights and inform the potentially varied applications of these metabolites at different scales. The results of this study provide further impetus for pursuing studies into the patterns and processes of the chemical diversity of the Philippine blue sponge, Xestospongia sp., and its chemical ecological significance in the Coral Triangle.

Keywords: chemical ecology, porifera, renieramycin-m, spatial variability, Xestospongia sp.

Procedia PDF Downloads 200
9943 In Vitro Antimycoplasmal Activity of Peganum harmala on Mycoplasma hominis Tunisian Strains

Authors: Nadine Khadraoui, Rym Essid, Olfa Tabbene, Imen Chniba, Safa Boujemaa, Selim Jallouli, Nadia Fares, Behija Mlik, Boutheina Ben Abdelmoumen Mardassi

Abstract:

Background and aim: Mycoplasma hominis is an opportunistic pathogen that can cause various gynecological infections, such as cervicitis and infertility, and, less frequently, extra-genital infections. Previous studies on the antimicrobial susceptibility of Mycoplasma hominis Tunisian strains have highlighted significant resistance, even multi-resistance, to the antibiotics most used in the therapy of the resulting infections. To address this concern, the present study turned to phytotherapy as an alternative. Peganum harmala seed extract was tested as an antibacterial agent against multidrug-resistant M. hominis clinical strains. Material and Methods: Peganum harmala plants were collected from Ain Sebaa, Tabarka, in the northwest region of Tunisia in April 2018, air-dried, ground, and extracted with different solvents. The crude methanolic extract was further partitioned with n-hexane, DCM, EtOAc, and n-BuOH. Antibacterial activity was evaluated against M. hominis ATCC 23114 and 20 M. hominis clinical strains. The antimycoplasmal activity was tested by the microdilution method, and MIC values were determined. Phytochemical analysis and hemolytic activity on human erythrocytes were also performed. The active fraction was then subjected to purification, and the chemical identification of the active compound was investigated. Results: Among the tested fractions, the n-BuOH extract was the most active, as it exhibited an inhibitory effect against M. hominis ATCC 23114 and 80% of the tested clinical strains, with MIC values between 125 and 1000 µg/ml. The phytochemical analysis of the n-BuOH extract revealed its abundance in polyphenols, flavonoids, and condensed tannins, with levels of 257.37 mg AGE/g, 172.27 mg EC/g, and 58.27 mg EC/g, respectively. In addition, the P. harmala n-BuOH extract exhibited potent bactericidal activity against all M. hominis isolates, with MBC values ranging between 125 and 4000 µg/ml. Further, the active fraction exhibited a weak cytotoxic effect at active concentrations when tested on human erythrocytes. The active compound was identified by gas chromatography-mass spectrometry as the indole alkaloid harmaline. Conclusion: In summary, Peganum harmala extract demonstrated interesting antimycoplasmal activity against M. hominis Tunisian strains. Therefore, it could be considered a potential candidate for the treatment of such infections. However, further studies are necessary to evaluate its mechanism of action in mycoplasmas.

Keywords: mycoplasma hominis, peganum harmala, antibiotic resistance, phytotherapy, phytochemical analysis

Procedia PDF Downloads 100
9942 Teratogenic Effect of Bisphenol A in Development of Balb/C Mouse

Authors: Nazihe Sedighi, Mohsen Nokhbatolphoghaei

Abstract:

Bisphenol A (BPA) is a monomer used in the manufacture of polycarbonate plastics. Due to properties such as transparency and heat and impact resistance, it is widely used in medicine, sports equipment, electronic components, and food containers. It is also used in the production of resins applied for lining cans. BPA is released from resins and polycarbonate when they are heated or when the containers are used repeatedly, and from these sources BPA can enter the body. There are several reports indicating the presence of BPA in the placenta, amniotic fluid, and the embryo itself. While researchers have investigated the teratogenic effect of BPA on embryos, very limited work has been done on the effects of BPA applied from the early stages of development. In this study, the teratogenic effect of BPA was investigated from the earliest preimplantation stage (day zero) through day 15.5 of development of Balb/C mouse embryos. After pregnancy was confirmed by observation of a vaginal plug, pregnant mice were divided into five groups. The three experimental groups received 500, 750, or 1000 mg/kg/d of Bisphenol A orally, dosed according to body weight. The sham group was treated with sesame oil, which was used as the vehicle, and the control group remained untreated. On day 18.5 of gestation, embryos were removed from the uterus. Half of the embryos, selected at random, were fixed in Bouin's solution for tissue analysis. The other half were prepared for skeletal staining using Alizarin Red and Alcian Blue dyes. The results showed that the embryonic weight and the crown-rump length decreased significantly (P < 0.05) in all experimental groups compared to the control and sham groups. Skeletal abnormalities such as delayed ossification of the skull and limbs, as well as deviation of the backbone, were also observed. This research suggests that pregnant mothers need to be aware of the possible teratogenic effects of BPA at any stage of pregnancy, especially from early to mid stages, and may need to avoid using polycarbonate plastic products as containers for food or drink.

Keywords: bisphenol A, development, polycarbonate plastic, skeletal system, teratogenicity

Procedia PDF Downloads 281
9941 Oil Logistics for Refining to Northern Europe

Authors: Vladimir Klepikov

Abstract:

To develop programs to supply crude oil to North European refineries, it is necessary to take into account the refineries' location, crude refining capacity, and the transport infrastructure capacity. Among the countries of the region, we include those having a marine boundary along the North Sea or the Baltic Sea (from France in the west to Finland in the east). The paper covers the geographic allocation of the refineries and contains an evaluation of the refineries' capacities for the region under review. The sustainable operation of refineries in the region is determined by the capacity of the transportation system to supply crude oil to them. An assessment of the capacity for crude oil transportation to the refineries is conducted. The research covers the period 2005-2015 and uses the quantitative analysis method. The countries are classified by the refineries' aggregate capacities and the crude oil output on their territory. The crude oil output capacities in the region in the period under review are determined. The capacities of the region's transportation system to supply crude oil produced in the region to the refineries are revealed. The analysis suggested that imported raw materials are the main source of oil for the refineries in the region. The main sources of crude oil supplies to North European refineries are reviewed. The change in the refineries' capacities in the group of countries and in each particular country, as well as the utilization of the refineries' capacities in the region in the period under review, was studied. The data suggest that the bulk of crude oil is supplied by marine and pipeline transport. The paper contains an assessment of the share of pipeline transport in the overall crude oil cargo flow. The refineries' production rates for the groups of countries under review and for each particular country were also studied. Our study revealed a trend towards an increase in crude oil refining at the refineries of the region and a reduction in crude oil output. If this trend persists in the near future, the cargo flow of imported crude oil and the utilization of the North European logistics infrastructure may increase. According to the study, the existing transport infrastructure in the region is able to handle the increasing flow of imported crude oil.

Keywords: European region, infrastructure, oil terminal capacity, pipeline capacity, tanker draft

Procedia PDF Downloads 158
9940 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in lowering the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is modeled as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, denoting the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y through subtraction of their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
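
The corrector step described above can be illustrated with a short sketch. The Python code below is a simplified, hypothetical rendering under stated assumptions: it fits a single separating hyperplane between whitened representations of correct and incorrect predictions, rather than the pairwise-cluster construction used by the authors, and the function names, the Kaiser-style eigenvalue cut, and the thresholding rule are illustrative.

import numpy as np

def fit_corrector(correct, errors):
    """Fit a one-hyperplane error corrector on internal representations.
    correct : (n_m, d) array for correct predictions (the set M)
    errors  : (n_y, d) array for incorrect predictions (the set Y)
    """
    data = np.vstack([correct, errors])
    mean = data.mean(axis=0)                       # centering vector
    cov = np.cov(data - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > eigval.mean()                  # Kaiser-style rule: keep dominant components
    W = eigvec[:, keep] / np.sqrt(eigval[keep])    # whitening projection matrix
    zc = (correct - mean) @ W
    ze = (errors - mean) @ W
    w = ze.mean(axis=0) - zc.mean(axis=0)          # normal pointing towards the error class
    w /= np.linalg.norm(w)
    b = 0.5 * ((zc @ w).mean() + (ze @ w).mean())  # threshold midway between projected class means
    return {"mean": mean, "W": W, "w": w, "b": b}

def flag_error(x, c):
    """Return True if a representation x lands on the error side of the hyperplane."""
    z = (np.asarray(x) - c["mean"]) @ c["W"]
    return float(z @ c["w"]) > c["b"]

In use, any prediction whose internal representation is flagged would be reported as an error and routed for correction, giving the self-correcting behaviour described in the abstract.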

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 87
9939 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy

Authors: Ozgu Hafizoglu

Abstract:

An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems through physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper presents the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system so that the reasoning methodology can adapt to changing conditions in the new era, especially post-pandemic. In this paper, we reveal how to draw an analogy to scientific research to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, just as it occurs within and between brain regions. The problem-solving process is divided into distinct phases: stimulus, encoding, mapping, inference, and response. Based on brain research so far, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on better visualizing the brain's mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to the matching conditions of analogical reasoning: relational information, encoding, mapping, inference, and response processes, and verification of perceptual responses in four-term analogical reasoning. Finally, we relate all of this terminology to mental leaps, mental maps, mental hops, and mental loops to make the mental model of CMA clear.

Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops

Procedia PDF Downloads 155
9938 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry

Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak

Abstract:

Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One such source could be by-products from the meat industry, which contain a substantial quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is a leading material applied in bone surgery and in stomatology. It is biocompatible, bioactive, and osteoinductive. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, which was formed as waste in the deproteinization of bones, a process in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by a two-stage calcination process in an electrically heated chamber kiln under an air atmosphere. In the first stage, the material was calcined at 600°C for 3 hours. In the second stage, the homogenized material was calcined at three different temperatures (750°C, 850°C, and 950°C), with the material held at the maximum temperature for 3.0 hours. Bone sludge: Bone sludge was formed as waste in the deproteinization of bones, in which a protein hydrolysate was the main product. Pork bones from meat cutting were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at 130-135°C under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the protein-fat hydrolysate solution passed into a decanter, where the bone sludge was separated from it. Results of the study: The phase composition was analyzed by the X-ray diffraction (XRD) method. Hydroxyapatite was the only crystalline phase observed in all the calcination products. The XRD investigation showed that the degree of crystallization of hydroxyapatite increased with calcination temperature. Conclusion: The research showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The calcination of bone waste at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcination products and that its degree of crystallization increased with calcination temperature. The calcium and phosphorus contents increased distinctly with calcination temperature, whereas the content of acid-soluble phosphorus decreased. This may be connected with the higher degree of crystallization and more stable structure of the material obtained at higher temperatures. Acknowledgements: The authors would like to thank The National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.

Keywords: bone by-products, bone sludge, calcination, hydroxyapatite

Procedia PDF Downloads 273