Search results for: trauma network
458 Searching SNPs Variants in Myod-1 and Myod-2 Genes Linked to Body Weight in Gilthead Seabream, Sparus aurata L.
Authors: G. Blanco-Lizana, C. García-Fernández, J. A. Sánchez
Abstract:
Growth is a productive trait regulated by a large and complex gene network whose members have very different effect sizes. Some of them (candidate genes) have a stronger effect and are excellent targets in which to search for polymorphisms correlated with differences in growth rate. This study focused on the identification of single nucleotide polymorphisms (SNPs) in the MyoD-1 and MyoD-2 genes, members of the family of myogenic regulatory factors (MRFs) with a key role in the differentiation and development of muscular tissue, and on their evaluation as potential markers in genetic selection programs for growth in gilthead sea bream (Sparus aurata). By sequencing 1,968 bp of the MyoD-1 gene [AF478568.1] and 1,963 bp of the MyoD-2 gene [AF478569.1] in 30 sea bream (classified as unrelated by microsatellite markers), three SNPs were identified in each gene (SaMyoD-1_D2100A (D indicates a deletion), SaMyoD-1_A2143G and SaMyoD-1_A2404G; SaMyoD-2_A785C, SaMyoD-2_C1982T and SaMyoD-2_A2031T). The relationships between SNPs and body weight were evaluated by SNP genotyping of 53 breeders from two broodstocks (A: 18♀-9♂; B: 16♀-10♂) and 389 offspring divided into two groups (slow- and fast-growth) with significant differences in growth at 18 months of development (A18Slow: N=107, A18Fast: N=103, B18Slow: N=92 and B18Fast: N=87) (Borrell et al., 2011). Haplotypes and diplotypes were reconstructed from genotype data with Phase 2.1 software. Differences among the means of the different diplotypes were tested by one-way ANOVA followed by a post-hoc Tukey test. Association analysis indicated that single SNPs did not show a significant effect on body weight. However, when the analysis was carried out on haplotype data, the DGG haplotype of the MyoD-1 gene and the CCA haplotype of the MyoD-2 gene were associated with lower body weight. This haplotype combination always showed the lowest mean body weight (P<0.05) in three (A18Slow, A18Fast & B18Slow) of the four groups tested. Individuals with the DGG haplotype of the MyoD-1 gene showed 25.5%, and those with the CCA haplotype of the MyoD-2 gene 14-18%, lower mean body weight. Although further studies are needed to validate the role of these SNPs as markers for body weight, the polymorphism-trait associations established in this work create promising expectations for the use of these variants as genetic tools in future gilthead sea bream breeding programs.
Keywords: growth, MyoD-1 and MyoD-2 genes, selective breeding, SNP-haplotype
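The diplotype-trait association step described above (one-way ANOVA followed by a post-hoc Tukey test on body weight grouped by reconstructed diplotype) can be sketched as follows; the input file and column names are hypothetical placeholders, not data or code from the study.

```python
# Sketch only: one-way ANOVA across diplotype classes, then Tukey's HSD.
# File and column names are hypothetical.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("offspring_diplotypes.csv")  # columns: myod1_diplotype, body_weight

# One-way ANOVA: does mean body weight differ among diplotype classes?
groups = [g["body_weight"].values for _, g in df.groupby("myod1_diplotype")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD identifies which diplotype pairs differ (alpha = 0.05)
tukey = pairwise_tukeyhsd(df["body_weight"], df["myod1_diplotype"], alpha=0.05)
print(tukey.summary())
```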
Procedia PDF Downloads 332
457 Application of the Building Information Modeling Planning Approach to the Factory Planning
Authors: Peggy Näser
Abstract:
Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which is dependent on the preceding phase and makes use of particular methods and tools, and extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling has not yet been established in factory planning but has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the equipment of the factory (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working, in which the information and data relevant to the building lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of a building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the Digital Factory, are based on the use of a comprehensive data model. It is therefore necessary to examine how the approach of Building Information Modeling can be extended in the context of factory planning in such a way that the equipment planning, as well as the building planning, can be integrated into a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, in which the quality criteria are defined and against which the planning is evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach is developed, in particular for the integrated planning of equipment and buildings and for continuous digital planning. For this purpose, the individual factory planning phases are detailed in the sense of the integration of the BIM approach. A comprehensive software concept for the supporting tools is outlined. In addition, the prerequisites required for this integrated planning are presented. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning is improved, data quality is improved and expensive errors are avoided during implementation.
Keywords: building information modeling, digital factory, digital planning, factory planning
Procedia PDF Downloads 269
456 A Double-Blind, Randomized, Controlled Trial on N-Acetylcysteine for the Prevention of Acute Kidney Injury in Patients Undergoing Allogeneic Hematopoietic Stem Cell Transplantation
Authors: Sara Ataei, Molouk Hadjibabaie, Amirhossein Moslehi, Maryam Taghizadeh-Ghehi, Asieh Ashouri, Elham Amini, Kheirollah Gholami, Alireza Hayatshahi, Mohammad Vaezi, Ardeshir Ghavamzadeh
Abstract:
Acute kidney injury (AKI) is one of the complications of hematopoietic stem cell transplantation and is associated with increased mortality. N-acetylcysteine (NAC) is a thiol compound with antioxidant and vasodilatory properties that has been investigated for the prevention of AKI in several clinical settings. In the present study, we evaluated the effects of intravenous NAC on the prevention of AKI in allogeneic hematopoietic stem cell transplantation patients. A double-blind randomized placebo-controlled trial was conducted, and 80 patients were recruited to receive 100 mg/kg/day NAC or placebo as intermittent intravenous infusion from day -6 to day +15. AKI was determined on the basis of the Risk-Injury-Failure-Loss-Endstage renal disease and AKI Network criteria as the primary outcome. We assessed urine neutrophil gelatinase-associated lipocalin (uNGAL) on days -6, -3, +3, +9, and +15 as the secondary outcome. Moreover, transplant-related outcomes and NAC adverse reactions were evaluated during the study period. Statistical analysis was performed using appropriate parametric and non-parametric methods including Kaplan–Meier for AKI and generalized estimating equation for uNGAL. At the end of the trial, data from 72 patients were analyzed (NAC: 33 patients and placebo: 39 patients). Participants of each group were not different considering baseline characteristics. AKI was observed in 18% of NAC recipients and 15% of placebo group patients, and the occurrence pattern was not significantly different (p = 0.73). Moreover, no significant difference was observed between groups for uNGAL measures (p = 0.10). Transplant-related outcomes were similar for both groups, and all patients had successful engraftment. Three patients did not tolerate NAC because of abdominal pain, shortness of breath and rash with pruritus and were dropped from the intervention group before transplantation. However, the frequency of adverse reactions was not significantly different between groups. In conclusion, our findings could not show any clinical benefits from high-dose NAC particularly for AKI prevention in allogeneic hematopoietic stem cell transplantation patients.
Keywords: acute kidney injury, N-acetylcysteine, hematopoietic stem cell transplantation, urine neutrophil gelatinase-associated lipocalin, randomized controlled trial
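A minimal sketch of the two statistical analyses named in the abstract, Kaplan–Meier estimation of time to AKI and a generalized estimating equation for the repeated uNGAL measurements, is shown below. The data files, column names and covariance structure are illustrative assumptions, not the trial's actual analysis code.

```python
# Sketch only: Kaplan-Meier for AKI occurrence and a GEE for repeated uNGAL.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from lifelines import KaplanMeierFitter

aki = pd.read_csv("aki_outcomes.csv")      # per patient: days_to_aki, aki_event, arm
kmf = KaplanMeierFitter()
for arm, sub in aki.groupby("arm"):        # "NAC" vs. "placebo"
    kmf.fit(sub["days_to_aki"], event_observed=sub["aki_event"], label=arm)
    print(arm, kmf.median_survival_time_)

ngal = pd.read_csv("ungal_repeated.csv")   # per patient per visit: ungal, arm, day, patient_id
gee = smf.gee("ungal ~ arm + day", groups="patient_id", data=ngal,
              family=sm.families.Gaussian(),
              cov_struct=sm.cov_struct.Exchangeable())
print(gee.fit().summary())
```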
Procedia PDF Downloads 434
455 On the Other Side of Shining Mercury: In Silico Prediction of Cold Stabilizing Mutations in Serine Endopeptidase from Bacillus lentus
Authors: Debamitra Chakravorty, Pratap K. Parida
Abstract:
Cold-adapted proteases enhance wash performance in low-temperature laundry, resulting in a reduction in energy consumption and wear of textiles, and are also used in the dehairing process in leather industries. The main drawback of cold-adapted proteases is their instability at higher temperatures, which gives the wild-type enzymes short shelf lives; proteases with broad temperature stability are therefore required. Previous attempts to engineer cold-adapted proteases relied on directed evolution and random mutagenesis, but the time, capital, and labour involved in obtaining such variants are very demanding and challenging. Therefore, rational engineering for cold stability without compromising an enzyme's optimum pH and temperature for activity is the current requirement. In this work, mutations were rationally designed with the aid of a high-throughput computational methodology of network analysis, evolutionary conservation scores, and molecular dynamics simulations for Savinase from Bacillus lentus, with the intention of rendering the mutants cold stable without affecting their temperature and pH optimum for activity. Further, an attempt was made to incorporate a mutation in the most stable mutant rationally obtained by this method to introduce oxidative stability in the mutant. Such enzymes are desired in detergents with bleaching agents. In silico analysis by performing 300 ns molecular dynamics simulations at 5 different temperatures revealed that three mutants had better cold stability than the wild-type Savinase from Bacillus lentus. In conclusion, this work shows that cold adaptation without losing optimum temperature and pH stability, and additionally stability against oxidative damage, can be rationally designed by in silico enzyme engineering. The key findings of this work were: first, the in silico data for H5 (a cold-stable savinase), used as a control in this work, corroborated its reported wet-lab temperature stability data; secondly, three cold-stable mutants of Savinase from Bacillus lentus were rationally identified; lastly, a mutation which will stabilize savinase against oxidative damage was additionally identified.
Keywords: cold stability, molecular dynamics simulations, protein engineering, rational design
Procedia PDF Downloads 140
454 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor
Authors: Sourabh Jain, S. S. Jain
Abstract:
Intelligent transportation system (ITS) is the application of technologies for developing a user-friendly transportation system to extend the safety and efficiency of urban transportation systems in developing countries. These systems involve vehicles, drivers, passengers, road operators and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. Intelligent transportation system is a product of the revolution in information and communications technologies that is the hallmark of the digital age. The basic ITS technology is oriented in three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, an attempt has been made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane and eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, stop watch, radar gun, and mobile GPS (GPS tracker lite). From the analysis, the performance interpretations incorporated were the identification of peak and off-peak hours, congestion and level of service (LOS) at midblock sections, and delay, followed by plotting the speed contours. The paper proposes urban corridor management strategies, based on sensors integrated into both vehicles and roads, that have to be efficiently executable, cost-effective, and familiar to road users. It will be useful to reduce congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to the users.
Keywords: ITS strategies, congestion, planning, mobility, safety
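A minimal sketch of the midblock performance computation described above (space-mean speed, delay and a level-of-service bin) is given below. The spot speeds, segment length, free-flow speed and LOS thresholds are illustrative assumptions, not the values or criteria used in the study.

```python
# Sketch only: space-mean speed from spot-speed observations, delay against an
# assumed free-flow speed, and a coarse LOS bin. All numbers are illustrative.
import numpy as np

spot_speeds_kmph = np.array([38.0, 42.5, 35.2, 47.8, 40.1])  # hypothetical radar-gun readings

# Harmonic mean (space-mean speed) is the appropriate average for segment travel time.
space_mean_speed = len(spot_speeds_kmph) / np.sum(1.0 / spot_speeds_kmph)

segment_length_km = 1.2                                   # assumed midblock length
travel_time_min = 60 * segment_length_km / space_mean_speed
free_flow_time_min = 60 * segment_length_km / 60.0        # assumed 60 km/h free-flow speed
delay_min = max(0.0, travel_time_min - free_flow_time_min)

def level_of_service(speed_kmph):
    # Assumed speed bands for LOS A-F on an urban arterial (illustrative only).
    for threshold, los in [(50, "A"), (40, "B"), (30, "C"), (25, "D"), (20, "E")]:
        if speed_kmph >= threshold:
            return los
    return "F"

print(f"space-mean speed = {space_mean_speed:.1f} km/h, "
      f"delay = {delay_min:.2f} min, LOS = {level_of_service(space_mean_speed)}")
```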
Procedia PDF Downloads 179
453 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG 16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into the VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
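The stage-2 geometry described above, measuring how far the detected pupil centre sits from the eye landmarks and at what angle, can be sketched as follows. The pixel coordinates are hypothetical; face detection, landmarking and the mask R-CNN pupil detector are assumed to have run already.

```python
# Sketch only: characterise eye misalignment from pupil centre and eye-corner
# landmarks. Coordinates are hypothetical pixel positions in the aligned face.
import numpy as np

def deviation(pupil, inner_corner, outer_corner):
    """Offset of the pupil from the geometric eye centre and its angle (degrees)."""
    eye_center = (np.asarray(inner_corner, dtype=float) + np.asarray(outer_corner, dtype=float)) / 2.0
    offset = np.asarray(pupil, dtype=float) - eye_center
    distance = np.linalg.norm(offset)
    angle_deg = np.degrees(np.arctan2(offset[1], offset[0]))  # angle vs. horizontal axis
    return distance, angle_deg

dist, ang = deviation(pupil=(108, 62), inner_corner=(90, 60), outer_corner=(130, 60))
print(f"pupil offset = {dist:.1f} px at {ang:.1f} degrees")
# Predominantly horizontal offsets suggest eso-/exotropia; predominantly vertical
# offsets suggest a vertical deviation, mirroring the stage-2 classification.
```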
Procedia PDF Downloads 96
452 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems
Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos
Abstract:
As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imageries, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different scenarios of hazards are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation is conducted through client surveys requiring scores in terms of the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users of the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry. This work may be extended to automated mapping and the creation of online spatial databases for better storage and access of real property listings and an interactive platform using web-based GIS.
Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model
Procedia PDF Downloads 159
451 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India
Authors: Senthil Kumar Anantharaman
Abstract:
Providing telecommunications-based network services in developing countries like India, which has a population of 1.5 billion people, so that these services reach every individual, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge faced by these providers is providing not only quality but also a delightful customer experience while simultaneously generating enhanced revenues and profits. Thus, the role played by process improvement methodologies like Six Sigma cannot be understated, and specifically in telecom service provider operations it has provided substantial benefits. Its advantages are therefore quite comparable to its applications and advantages in other sectors like manufacturing, financial services, information technology-based services and healthcare services. One of the key reasons this methodology has been able to reap great benefits in the telecommunications sector is that it has been combined with many of its competing process improvement techniques like Theory of Constraints, Lean and Kaizen to give the maximum benefit to the service providers, thereby creating a winning combination of organized process improvement methods for operational excellence leading to business excellence. This paper discusses some of the key projects and areas in the end-to-end ‘Quote to Cash’ process at the big three Indian telecommunication companies that have been highly assisted by applying Six Sigma along with other process improvement techniques. While the telecommunication companies we have considered are primarily in India and are run by both private operators and government-based setups, the methodology can be applied equally well in any other developing country around the world with a similar context. This study also compares the enhanced revenues that can arise out of appropriate opportunities in emerging market scenarios, which Six Sigma as a philosophy and methodology can provide if applied with vigour and robustness. Finally, the paper also presents a winning framework for combining the Six Sigma methodology with Kaizen, Lean and Theory of Constraints that will enhance both the top line as well as the bottom line while providing the customers a delightful experience.
Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints
Procedia PDF Downloads 164
450 Emerging VC Industry and the Important Role of Marketing Expectations in Project Selection: Evidence on Russian Data
Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova
Abstract:
Currently, venture capital is becoming a more and more advanced and effective source of financing for innovation projects, which are connected with a high level of risk. In the developed countries, it plays a key role in transforming innovation projects into successful businesses and creating the prosperity of the modern economy. In Russia, many of the preconditions necessary for the creation of an effective venture investment system are in place: a network of public institutions for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, the current system does not show the necessary level of efficiency in practice, which can be substantially explained by the absence of an accurate plan of action to form the national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry using the example of the IT sector in Russia. The choice of the sector is based on the fact that this segment is the main driver of venture capital market growth in Russia, and the necessary set of data exists. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the previous (first) round of investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with a high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. According to the results of the research, the participation of investors with a first-class reputation has only a small impact on the value of the second-round investment. The expected positive dependence of second-round investment on the market growth rate forecast at the time of the deal is also rejected. Thus, the most important determinant of the value of the second-round investment is the value of the first-round investment, which means that the most competitive teams on the Russian market are the start-up teams that can attract more money at the start, while target market growth is not a factor of crucial importance.
Keywords: venture industry, venture investment, determinants of the venture sector development, IT-sector
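The regression described above, second-round investment size on first-round size, an investor-reputation dummy and the forecast market growth rate, can be sketched as an ordinary least squares model; the data file and variable names are illustrative, not the authors' dataset.

```python
# Sketch only: OLS of second-round size on first-round size, a reputation dummy
# and forecast market growth. Column names and the file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

deals = pd.read_csv("it_venture_deals.csv")   # round2_size, round1_size,
                                              # top_tier_investor (0/1), forecast_growth
model = smf.ols("round2_size ~ round1_size + top_tier_investor + forecast_growth",
                data=deals).fit()
print(model.summary())  # the abstract reports round1_size as the dominant predictor
```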
Procedia PDF Downloads 354
449 Gains and Pitfalls of Participating on International Staff Exchange Programs: Individual Experiences of Academic Staff of Makerere University, Uganda
Authors: David Onen
Abstract:
Staff exchanges amongst different work organizations are a growing international phenomenon. In higher education in particular, it is not only staff who participate on international exchange programs, but their students as well. The practice of exchanging staff is premised on the belief that participating members of staff would not only get the chance to network with colleagues from partner institutions but also gain the opportunity for knowledge sharing and skills development. As a result, it would not only benefit the participating individual staff but their institutions too. However, in practice, staff exchange programs everywhere are not all ‘a bed of roses’. In fact, some of the programs seem to be laden with unapparent sources of trouble or danger for the participating staff. This paper is a report on an on-going study investigating the experiences of members of academic staff of Makerere University in Uganda who have participated on international staff exchange programs. The study is aimed at documenting individual experiences in order to stimulate not only a debate but practical ways of enriching the experiences of staff who engage on well-meant international staff exchange programs. The study has employed an exploratory survey research design in which a self-administered questionnaire and an interview guide are being used to collect data from university academic staff respondents selected through snowball and purposive sampling techniques. Data have been analysed with the use of appropriate descriptive and inferential statistics as well as content analysis techniques. Preliminary study findings reveal that the majority of the respondents (95.5%) were, to a large extent, fully satisfied with their participation on the staff exchange programs. Many attested to gaining new experience (97%), networking (75%), gaining new knowledge (94%), acquiring new skills (88%), and therefore bringing to their institutions something ‘new’ and ‘beneficial’. However, a reasonably large percentage (57%) of the participants also expressed dissatisfaction with the institutional support that Makerere University gave them during their participation on the exchange programs. Some respondents reported the ‘unfriendly welcome’ they received upon returning ‘home’ because colleagues detested how they were chosen to participate on such programs. The researcher thus concluded that international staff exchange programs are truly beneficial to both the participating staff and their institutions, though with pitfalls. The researcher thus recommended mutual and preferably equal engagement of the participating institutions on staff exchange programs if such programs are to benefit both the participating staff and institutions. Besides, exchange programs require clear terms of cooperation, including how staff are selected and facilitated and what is expected of the sending and host institutions as well as of the staff concerned.
Keywords: gains, exchange programs, higher education, pitfalls
Procedia PDF Downloads 345
448 Integrating Circular Economy Framework into Life Cycle Analysis: An Exploratory Study Applied to Geothermal Power Generation Technologies
Authors: Jingyi Li, Laurence Stamford, Alejandro Gallego-Schmid
Abstract:
Renewable electricity has become an indispensable contributor to achieving net-zero by mid-century to tackle climate change. Unlike solar, wind, or hydro, geothermal was stagnant in its electricity production development for decades. However, with the significant breakthroughs made in recent years, especially the implementation of enhanced geothermal systems (EGS) in various regions globally, geothermal electricity could play a pivotal role in alleviating greenhouse gas emissions. Life cycle assessment has been applied to analyze specific geothermal power generation technologies and has proposed suggestions to optimize their environmental performance. For instance, selecting a region with a high heat gradient enables a higher flow rate from the production well and extends the technical lifespan. Although such process-level improvements have been made, the significance of geothermal power generation technologies has so far not explicitly displayed its competitiveness on a broader horizon. Therefore, this review-based study integrates a circular economy framework into life cycle assessment, clarifying the underlying added values for geothermal power plants to complete the sustainability profile. The derived results have provided an enlarged platform to discuss geothermal power generation technologies: (i) recover the heat and electricity from the process to reduce the fossil fuel requirements; (ii) recycle the construction materials, such as copper, steel, and aluminum, for future projects; (iii) extract the lithium ions from geothermal brine and make the geothermal reservoir a potential supplier of the lithium battery industry; (iv) repurpose abandoned oil and gas wells to build geothermal power plants; (v) integrate geothermal energy with other available renewable energies (e.g., solar and wind) to provide heat and electricity as a hybrid system under different weather conditions; (vi) rethink the fluids used in the stimulation process (EGS only), replacing water with CO2 to achieve negative emissions from the system. These results provide a new perspective for researchers, investors, and policymakers to rethink the role of geothermal in the energy supply network.
Keywords: climate, renewable energy, R strategies, sustainability
Procedia PDF Downloads 137
447 RNA-Seq Analysis of the Wild Barley (H. spontaneum) Leaf Transcriptome under Salt Stress
Authors: Ahmed Bahieldin, Ahmed Atef, Jamal S. M. Sabir, Nour O. Gadalla, Sherif Edris, Ahmed M. Alzohairy, Nezar A. Radhwan, Mohammed N. Baeshen, Ahmed M. Ramadan, Hala F. Eissa, Sabah M. Hassan, Nabih A. Baeshen, Osama Abuzinadah, Magdy A. Al-Kordy, Fotouh M. El-Domyati, Robert K. Jansen
Abstract:
Wild salt-tolerant barley (Hordeum spontaneum) is the ancestor of cultivated barley (Hordeum vulgare or H. vulgare). Although the cultivated barley genome is well studied, little is known about the genome structure and function of its wild ancestor. In the present study, RNA-Seq analysis was performed on young leaves of wild barley treated with salt (500 mM NaCl) at four different time intervals. Transcriptome sequencing yielded 103 to 115 million reads for all replicates of each treatment, corresponding to over 10 billion nucleotides per sample. Of the total reads, between 74.8 and 80.3% could be mapped, and 77.4 to 81.7% of the transcripts were found in the H. vulgare unigene database (unigene-mapped). The unmapped wild barley reads for all treatments and replicates were assembled de novo and the resulting contigs were used as a new reference genome. This resulted in 94.3 to 95.3% of the unmapped reads mapping to the new reference. The number of differentially expressed transcripts was 9277, 3861 of which were unigene-mapped. The annotated unigene- and de novo-mapped transcripts (5100) were utilized to generate expression clusters across the time course of salt stress treatment. Two-dimensional hierarchical clustering classified differential expression profiles into nine expression clusters, four of which were selected for further analysis. Differentially expressed transcripts were assigned to the main functional categories. The most important groups were ‘response to external stimulus’ and ‘electron-carrier activity’. Highly expressed transcripts are involved in several biological processes, including electron transport and exchanger mechanisms, flavonoid biosynthesis, reactive oxygen species (ROS) scavenging, ethylene production, the signaling network and protein refolding. The comparisons demonstrated that mRNA-Seq is an efficient method for the analysis of differentially expressed genes and biological processes under salt stress.
Keywords: electron transport, flavonoid biosynthesis, reactive oxygen species, RNA-Seq
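A minimal sketch of the clustering step described above, two-dimensional hierarchical clustering of the 5100 annotated transcripts across the four salt-stress time points, is given below; the expression matrix is random placeholder data rather than the study's counts.

```python
# Sketch only: hierarchical clustering of expression profiles into nine clusters,
# as in the study. The matrix below is a random placeholder.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expression = rng.normal(size=(5100, 4))     # transcripts x time points (placeholder)

# Cluster transcripts by profile shape using correlation distance.
tree = linkage(expression, method="average", metric="correlation")
clusters = fcluster(tree, t=9, criterion="maxclust")
print(np.bincount(clusters)[1:])            # number of transcripts per cluster
```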
Procedia PDF Downloads 393
446 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where the membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 & BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while generating nearly the same amount of entropy. The guinea pig vagal motoneuron loses twofold more exergy compared to the cat models, and the squid exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. Thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. In addition, the Na+ ion load of a single action potential, the metabolic energy utilization and its thermodynamic aspects were evaluated for the squid giant axon and mammalian motoneuron models. Energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance
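Two standard relations underlying the modelling approach described above are, for orientation, Rall's 3/2 power rule at each symmetric branch point (which allows the tree to be collapsed into an equivalent cylinder) and the Gouy–Stodola relation linking entropy generation to exergy loss; they are quoted here as textbook results, not as formulas taken from the paper.

```latex
\[
d_{\text{parent}}^{3/2} \;=\; \sum_{k} d_{\text{daughter},\,k}^{3/2},
\qquad
\dot{X}_{\text{lost}} \;=\; T_{0}\,\dot{S}_{\text{gen}},
\]
% d: branch diameters; T_0: environment (body) temperature;
% S_gen: entropy generation rate; X_lost: exergy destruction rate.
```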
Procedia PDF Downloads 395
445 Neural Networks Underlying the Generation of Neural Sequences in the HVC
Authors: Zeina Bou Diab, Arij Daou
Abstract:
The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each of which has its own cellular, electrophysiological and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations, while HVCINT neurons fire tonically at a high average frequency throughout the song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing patterning behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird
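A minimal conductance-based sketch of the modelling style described above is shown below, using the classic Hodgkin–Huxley squid-axon currents and forward-Euler integration. The actual HVC_RA, HVC_X and HVC_INT models include the cell-specific currents identified in vitro and synaptic coupling, so this single cell is an illustration of the approach only.

```python
# Sketch only: single Hodgkin-Huxley neuron (classic squid-axon parameters),
# integrated with forward Euler; not the paper's HVC cell models.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_inj = 0.01, 50.0, 10.0                 # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes = 0
for _ in range(int(T / dt)):                    # forward-Euler integration
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_inj - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                        # crude spike detection (upward zero crossing)
        spikes += 1
    V = V_new
print(f"{spikes} spikes in {T} ms at I = {I_inj} uA/cm^2")
```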
Procedia PDF Downloads 72
444 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for the purpose of action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips, and we used the K-means algorithm for this segmentation; our goal is to find groups based on similarity in the video. The application of K-means clustering to all the frames is time-consuming; therefore, we started by identifying transition frames where the scene in the video changes significantly, and then we applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change; we then used this vector of filter responses as an input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map. The resulting visual map has similar pixels grouped. We then computed a cluster score indicating how near the clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root-mean-square level of the signal changes significantly, and each breakpoint is an indication of the beginning of a new video segment. In the second part, for each segment from part 1, we randomly selected a 16-frame clip and extracted spatiotemporal features for every 16 frames using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence, we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) for model training, and we used the multi-class classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%.
Keywords: video segmentation, action detection, classification, K-means, C3D
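A minimal sketch of the segmentation stage described above is given below: per-frame Gaussian and Laplacian-of-Gaussian responses feed K-means, a per-frame cluster score is tracked, and breakpoints are placed where its RMS level shifts. Frame loading, the transition-frame pre-selection and the C3D/PCA/SVM stage are omitted, and the score definition and thresholds are simplifying assumptions.

```python
# Sketch only: filter-response features, K-means, a per-frame cluster score and
# RMS-change breakpoints. `frames` is random placeholder data.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.random((120, 64, 64))                       # placeholder grey-level frames

def frame_features(f):
    g = gaussian_filter(f, sigma=2.0)                    # blur (keeps low frequencies)
    log = gaussian_laplace(f, sigma=2.0)                 # rapid intensity changes
    return np.hstack([g.ravel(), log.ravel()])

features = np.stack([frame_features(f) for f in frames])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Per-frame score: distance of each frame to its assigned cluster centre.
score = np.linalg.norm(features - kmeans.cluster_centers_[kmeans.labels_], axis=1)

# Breakpoints where the local RMS level of the score changes markedly.
window = 10
rms = np.sqrt(np.convolve(score**2, np.ones(window) / window, mode="same"))
breakpoints = np.where(np.abs(np.diff(rms)) > 2.0 * np.std(np.diff(rms)))[0]
print("candidate segment boundaries at frames:", breakpoints)
```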
Procedia PDF Downloads 78
443 Developing a Shared Understanding of Wellbeing: An Exploratory Study in Irish Primary Schools Incorporating the Voices of Teachers
Authors: Fionnuala Tynan, Margaret Nohilly
Abstract:
Wellbeing is not only a national priority in Ireland but also in the international context. A review of the literature highlights the consistent efforts of researchers to define the concept of wellbeing. This study sought to explore the understanding of wellbeing in Irish primary schools. National Wellbeing Guidelines in the Irish context frame the concept of wellbeing through a mental health paradigm, which is but one aspect of wellbeing. This exploratory research sought the views of Irish primary-school teachers on their understanding of the concept of wellbeing and the practical application of strategies to promote wellbeing both in the classroom and across the school. Teacher participants from four counties in the West of Ireland were invited to participate in focus group discussions and workshops through the Education Centre Network. The purpose of this process was twofold: firstly, to explore teachers’ understanding of wellbeing in the primary school context and, secondly, for teachers to be co-creators in the development of practical strategies for classroom and whole-school implementation. The voice of the teacher participants was central to the research design. The findings of this study indicate that the definition of wellbeing in the Irish context is too abstract a definition for teachers and that the focus on mental health dominates the discourse in relation to wellbeing. Few teachers felt that they were addressing wellbeing adequately in their classrooms and across the school. The findings from the focus groups highlighted that while teachers are incorporating a range of wellbeing strategies, including mindfulness and positive psychology, there is a clear disconnect between the national definition and the implementation of national curricula, which causes them concern. The teacher participants requested further practical strategies to promote wellbeing at whole-school and classroom level within the framework of the Irish Primary School Curriculum to enable them to become professionally confident in developing a culture of wellbeing. In conclusion, considering that wellbeing is a national priority in Ireland, this research promoted a timely discussion of the wellbeing guidelines and the development of a conceptual framework to define wellbeing in concrete terms for practitioners. The centrality of teacher voices ensured that the strategies proposed by this research are both practical and effective. The findings of this research have prompted the development of a national resource which will support the implementation of wellbeing in the primary school at both national and international level.
Keywords: primary education, shared understanding, teacher voice, wellbeing
Procedia PDF Downloads 459
442 Reviewers’ Perception of the Studio Jury System: How They View its Value in Architecture and Design Education
Authors: Diane M. Bender
Abstract:
In architecture and design education, students learn and understand their discipline through lecture courses and within studios. A studio is where the instructor works closely with students to help them understand design by doing design work. The final jury is the culmination of the studio learning experience. Its value and significance are rarely questioned. Students present their work before their peers, instructors, and invited reviewers, known as jurors. These jurors are recognized experts who add a breadth of feedback to students, mostly in the form of a verbal critique of the work. Since the design review or jury has been a common element of studio education for centuries, jurors themselves have been instructed in this format. Therefore, they understand its value from both a student and a juror perspective. To better understand how these reviewers see the value of a studio review, a survey was distributed to reviewers at a multi-disciplinary design school within the United States. Five design disciplines were involved in this case study: architecture, graphic design, industrial design, interior design, and landscape architecture. Respondents (n=108) provided written comments about their perceived value of the studio review system. The average respondent was male (64%), between 40-49 years of age, and has attained a master’s degree. Qualitative analysis with thematic coding revealed several themes. Reviewers view the final jury as important because it provides a variety of perspectives from unbiased external practitioners and prepares students for similar presentation challenges they will experience in professional practice. They also see it as a way to validate the assessment and evaluation of students by faculty. In addition, they see a personal benefit for themselves and their firm – the ability to network with fellow jurors, professors, and students (i.e., future colleagues). Respondents also provided additional feedback about the jury system and studio education in general. Typical responses included a desire for earlier engagement with students; a better explanation from the instructor about the project parameters, rubrics/grading, and guidelines for juror involvement; a way to balance giving encouraging feedback versus overly critical comments; and providing training for jurors prior to reviews. While this study focused on the studio review, the findings are equally applicable to other disciplines. Suggestions will be provided on how to improve the preparation of guests in the learning process and how their interaction can positively influence student engagement.
Keywords: assessment, design, jury, studio
Procedia PDF Downloads 65
441 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services
Authors: Carlos V. Gonzalez
Abstract:
Background: Providers of in-home and community based services strive for the elimination of preventable harm to the people under their care as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community based services. Methods: Over the course of 2 years we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated, and we trained employees to track different levels of patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information that we collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the COPE model resulted in: a reduction in incidents of harm; a reduction in the use of restraints and other physical interventions; an increase in Direct Support Professionals’ ability to detect and respond to health problems; improvement in employee alertness by decreasing sleeping on duty; improvement in caring and positive interaction between Direct Support Professionals and the person who is supported; and the development of a method to globally measure and assess the effectiveness of harm prevention plans. Future applications of the COPE model for the reduction of harm to people who receive home and community based services are discussed.
Keywords: harm, patients, resilience, safety, mental illness, disability
Procedia PDF Downloads 449
440 Use of 3D Printed Bioscaffolds from Decellularized Umbilical Cord for Cartilage Regeneration
Authors: Tayyaba Bari, Muhammad Hamza Anjum, Samra Kanwal, Fakhera Ikram
Abstract:
Osteoarthritis, a degenerative condition, affects more than 213 million individuals globally. Since articular cartilage has no or limited blood vessels, it is unable to rejuvenate after deteriorating. Traditional approaches for cartilage repair, like autologous chondrocyte implantation, microfracture and cartilage transplantation, are often associated with postoperative complications and lead to further degradation. Decellularized human umbilical cord has gained interest as a viable treatment for cartilage repair. Decellularization removes all cellular contents as well as debris, leaving a biologically active 3D network known as the extracellular matrix (ECM). This matrix is biodegradable, non-immunogenic and provides a microenvironment for homeostasis, growth and repair. UC-derived bioink functions as a 3D scaffolding material that not only mediates cell-matrix interactions but also the adherence, proliferation and propagation of cells for 3D organoids. This study comprises different physical, chemical and biological approaches to optimize the decellularization of human umbilical cord (UC) tissues, followed by the solubilization of these tissues for bioink formation. The decellularization process consisted of two freeze-thaw cycles, in which the umbilical cord frozen at -20˚C was thawed at room temperature, followed by dissection into small sections of 0.5 to 1 cm. Similarly, decellularization with the ionic and non-ionic detergents sodium dodecyl sulfate (SDS) and Triton X-100 revealed that both concentrations of SDS, i.e. 0.1% and 1%, were effective in the complete removal of cells from the small UC tissues. The results of decellularization were further confirmed by running the samples on a 1% agarose gel. Histological analysis, in which paraffin-embedded 4 μm sections were processed for hematoxylin-eosin-safran and 4,6-diamidino-2-phenylindole (DAPI) staining, revealed the efficacy of decellularization. ECM preservation was confirmed by Alcian blue and Masson’s trichrome staining on consecutive sections, and images were obtained. Sulfated GAG content was determined by the 1,9-dimethyl-methylene blue (DMMB) assay, and collagen was similarly quantified by the hydroxyproline assay. This 3D bioengineered scaffold will provide an environment typical of the extracellular matrix of the tissue and will be seeded with mesenchymal cells to generate the desired 3D ink for in vitro and in vivo cartilage regeneration applications.
Keywords: umbilical cord, 3d printing, bioink, tissue engineering, cartilage regeneration
Procedia PDF Downloads 102
439 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty
Authors: Ammar Y. Alqahtani
Abstract:
In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers’ environmental awareness has forced original equipment manufacturers to consider being more environmentally conscious. Therefore, manufacturers have thought of different ways to deal with waste generated from EOL products, viz. remanufacturing, reusing, recycling, or disposing of EOL products. The rate of depletion of virgin natural resources, and manufacturers’ dependency on them, can be reduced when EOL products are remanufactured, reused, or recycled, and this will also cut the amount of harmful waste sent to landfills. However, disposal of EOL products contributes to the problem and is therefore used as a last option. The number of EOL products needed has to be estimated in order to fulfill the component demand. Then, a disassembly process needs to be performed to extract individual components and subassemblies. Smart products, built with embedded sensors and network connectivity to enable the collection and exchange of data, utilize sensors that are implanted into products during production. These sensors are used by remanufacturers to predict an optimal warranty policy and time period that should be offered to customers who purchase remanufactured components and products. Sensor-provided data can help to evaluate the overall condition of a product, as well as the remaining lives of product components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes into consideration the different system uncertainties. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods. A DTO system is considered where a variety of EOL products are purchased for disassembly. The model’s main objective is to determine the best combination of EOL products to be purchased from every supplier in each period which maximizes the total profit of the system while satisfying the demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, this paper presents and analyzes a case study involving various simulation conditions to illustrate the applicability of the model.
Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics
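The purchase-planning decision at the heart of the DTO model can be illustrated for a single period as a small linear program: choose how many EOL units to buy from each supplier so that expected component yields cover demand at minimum cost. All numbers are illustrative, and the paper's actual model is a multi-period nonlinear program with additional uncertainty and sensor-informed terms.

```python
# Sketch only: single-period purchase plan covering component demand at minimum
# cost (a simplification of the multi-period NLP). All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

cost = np.array([12.0, 15.0, 9.0])            # purchase cost per EOL unit, 3 suppliers
# Expected usable components recovered per unit (rows: components, cols: suppliers),
# e.g. discounted by sensor-estimated remaining component lives.
yield_per_unit = np.array([[0.9, 0.8, 0.5],
                           [0.7, 0.9, 0.6]])
demand = np.array([400.0, 350.0])             # component demand this period

res = linprog(c=cost,
              A_ub=-yield_per_unit, b_ub=-demand,   # enforce yields @ x >= demand
              bounds=[(0, 800)] * 3,                # assumed supplier capacity limits
              method="highs")
print("units to purchase from each supplier:", np.round(res.x, 1))
```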
Procedia PDF Downloads 137
438 Antigen Stasis can Predispose Primary Ciliary Dyskinesia (PCD) Patients to Asthma
Authors: Nadzeya Marozkina, Joe Zein, Benjamin Gaston
Abstract:
Introduction: We have observed that many patients with Primary Ciliary Dyskinesia (PCD) benefit from asthma medications. In healthy airways, ciliary function is normal. Antigens and irritants are rapidly cleared, and NO enters the gas phase normally to be exhaled. In the PCD airways, however, antigens, such as Dermatophagoides, are not as well cleared. This defect leads to oxidative stress, marked by increased DUOX1 expression and decreased superoxide dismutase [SOD] activity (manuscript under revision). H₂O₂, in high concentrations in the PCD airway, injures the airway. NO is oxidized rather than being exhaled, forming cytotoxic peroxynitrous acid. Thus, antigen stasis on the PCD airway epithelium leads to airway injury and may predispose PCD patients to asthma. Indeed, recent population genetics suggest that PCD genes may be associated with asthma. We therefore hypothesized that PCD patients would be predisposed to having asthma. Methods: We analyzed our database of 18 million individual electronic medical records (EMRs) in the Indiana Network for Patient Care research database (INPCR). There is not an ICD10 code for PCD itself; code Q34.8 is most commonly used clinically. To validate analysis of this code, we queried patients who had an ICD10 code for both bronchiectasis and situs inversus totalis in INPCR. We also studied a validation cohort using the IBM Explorys® database (over 80 million individuals). Analyses were adjusted for age, sex and race using a 1 PCD : 3 controls matching method in INPCR and multivariable logistic regression in the IBM Explorys® database. Results: The prevalence of asthma ICD10 codes in subjects with code Q34.8 was 67% vs 19% in controls (P < 0.0001) (Regenstrief Institute). Similarly, in IBM Explorys®, the OR [95% CI] for having asthma if a patient also had ICD10 code Q34.8, relative to controls, was 4.04 [3.99; 4.09]. For situs inversus alone the OR [95% CI] was 4.42 [4.14; 4.71], and for bronchiectasis alone the OR [95% CI] was 10.68 [10.56; 10.79]. For both bronchiectasis and situs inversus together, the OR [95% CI] was 28.80 [23.17; 35.81]. Conclusions: PCD causes antigen stasis in the human airway (under review), likely predisposing to asthma in addition to oxidative and nitrosative stress and to airway injury. Here, we show that, by several different population-based metrics, and using two large databases, patients with PCD appear to have between a three- and 28-fold increased risk of having asthma. These data suggest that additional studies should be undertaken to understand the role of ciliary dysfunction in the pathogenesis and genetics of asthma. Decreased antigen clearance caused by ciliary dysfunction may be a risk factor for asthma development.
Keywords: antigen, PCD, asthma, nitric oxide
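The association measure reported above can be reproduced from any 2x2 exposure-outcome table as an odds ratio with a Wald 95% confidence interval; the counts below are placeholders, not the INPCR or IBM Explorys® counts.

```python
# Sketch only: odds ratio and Wald 95% CI from a 2x2 table (placeholder counts).
import math

a, b = 670, 330      # asthma yes/no among patients with the PCD-related code (hypothetical)
c, d = 1900, 8100    # asthma yes/no among controls (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```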
Procedia PDF Downloads 107437 Understanding the Relationship between Community and the Preservation of Cultural Landscape - Focusing on Organically Evolved Landscapes
Authors: Adhithy Menon E., Biju C. A.
Abstract:
The concept of preserving heritage monuments was first introduced to the public in the 1960s. During the 1990s, the concept of cultural landscapes gained importance, emphasizing the role of culture and heritage in the context of the landscape. This paper is primarily concerned with the second category of cultural landscapes, organically evolving landscapes, as they represent a complex network of tangible and intangible elements and the environment, and of the connections they share with the communities in which they are situated. The United Nations Educational, Scientific, and Cultural Organization has identified 39 cultural sites as being in danger, including the Iranian city of Bam and the historic city of Zabid in Yemen. To ensure their protection in the future, it is necessary to conduct a detailed analysis of the factors contributing to this degradation. An analysis of selected cultural landscapes from around the world is therefore conducted to determine which parameters cause their degradation. The paper follows the objectives of understanding cultural landscapes and their importance for development; examining the various criteria for identifying cultural landscapes, their classifications, and the agencies that focus on their protection; identifying and analyzing the parameters contributing to the deterioration of cultural landscapes based on literature and case studies (the cultural landscapes of Sintra, Rio de Janeiro, and Varanasi); and, as a final step, developing strategies to enhance deteriorating cultural landscapes based on these parameters. The major finding of the study is the impact of community on the parameters derived: integrity (natural factors, natural disasters, demolition of structures, deterioration of materials), authenticity (living elements, sense of place, building techniques, religious context, artistic expression), public participation (revenue, dependence on locale), awareness (demolition of structures, resource management), disaster management, environmental impact, and maintenance of the cultural landscape (linkages with other sites, dependence on locale, revenue, resource management). The parameters of authenticity, public participation, awareness, and maintenance of the cultural landscape are directly related to the community in which the cultural landscape is located. Therefore, by focusing on the community and addressing the parameters identified, the deterioration curve of cultural landscapes can be altered.Keywords: community, cultural landscapes, heritage, organically evolved, public participation
Procedia PDF Downloads 88436 City Buses and Sustainable Urban Mobility in Kano Metropolis 1967-2015: An Historical Perspective
Authors: Yusuf Umar Madugu
Abstract:
Since its creation in 1967, Kano has undergone tremendous political, social and economic transformations. Public urban transportation has played a vital role in sustaining the economic growth of the Kano metropolis, especially through modern buses operating on a regular network of roads serving all the main centers of trade. This study, therefore, centers on the role of intra-city buses in molding the economy of Kano. Its main focus is post-colonial Kano (i.e. 1967-2015), a period that witnessed a rapid expansion of commercial activities and ever-increasing urbanization, with the population explosion that goes along with it. Commuters patronized urban transport, a situation that made the business lucrative. Moreover, traders who came from within and outside Kano relied heavily on commercial vehicles to transport their merchandise to their various destinations. The commercial road transport system therefore became well organized in Kano, with a significant number of people earning their livelihood from it. It also serves as a source of revenue to governments at different levels. The study of transport and development as an academic discipline is inter-disciplinary in nature. This study, therefore, employs the methodologies of other disciplines such as Geography, History, Urban and Regional Planning, Engineering, Computer Science, Economics, etc. to provide a comprehensive picture of the issues under investigation. The source materials for this study include extensive use of written literature and oral information. In view of the crucial importance of intra-city commercial transport services, this study demonstrates their role in the overall economic transformation of the study area. It also contributes to opening new ground by looking into the history of the commercial transport system. At present, the Kano Metropolitan area is located between latitude 11° 50' and 12° 07' and longitude 8° 22' and 8° 47', within the semi-arid Sudan Savannah zone of West Africa, about 840 kilometers from the edge of the Sahara desert. The metropolitan area has expanded over the years and has become the third largest conurbation in Nigeria, with a population of about 4 million. It is made up of eight local government areas, viz.: Kano Municipal, Gwale, Dala, Tarauni, Nasarawa, Fage, Ungogo, and Kumbotso.Keywords: assessment, buses, city, mobility, sustainable
Procedia PDF Downloads 225435 Campaigns of Youth Empowerment and Unemployment In Development Discourses: In the Case of Ethiopia
Abstract:
In today’s declining global economy, nations are facing many economic, social and political challenges; universally, there is widespread food and other survival insecurity. Further, as a result of conflict, natural disasters, and leadership failures, youths are disempowered and unemployed, especially in developing countries. To handle these challenges well, it is important to investigate and deliberate on youth unemployment, empowerment and possible management approaches, as youths have the potential to carry and fight such battles. The method adopted is a qualitative analysis of secondary data sources on youth empowerment, unemployment and development as an inclusive framework. Youth unemployment is a major development headache for most African countries. In Ethiopia, following weak youth empowerment, youth unemployment has increased over time, with quality education and organizational linkages standing out as important constraints. Beyond the limited access to quality education, the country’s youths are deceived and harassed in a vicious political environment in their struggle to bring about social and economic change in the country. Further, thousands of youths have been rendered inactive, criminalized or have lost their lives, which leaves them hopeless and angry and pushes them further towards addiction, prostitution, violence, and illegitimate migration. This youth challenge is not confined to African countries; it is a global burden and has become a global agenda. As a resolution, the construction of a healthy education system can create independent youths who achieve success and accelerate development. Developing countries should cultivate empowerment tools through long- and short-term education, implement policy in action, diminish wide-ranging gaps of religion, ethnicity and region, and treat their large youth populations as an opportunity by empowering them. Furthermore, managing and empowering youths to be involved in decision-making, giving them political weight, and building networks of organizations so they can easily access job opportunities are important suggestions for keeping youths in work, both to increase their income and to improve the country’s food security balance.Keywords: development, Ethiopia, management, unemployment, youth empowerment
Procedia PDF Downloads 61434 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, determining a properly excited optical field inside the MM fiber core is one of the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL) and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequently different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with the emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, particular defects and errors that can realistically occur, such as eccentricity, connector shift or dust, were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.Keywords: optical fiber, multi-mode, data centers, encircled flux
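Since the encircled flux is simply the fraction of launched power contained within a given radius around the beam centroid on the fiber end face, a minimal sketch of computing EF from a near-field intensity image is given below; the pixel scale, the synthetic near-field profile and the 4.5 µm / 19 µm control radii are illustrative assumptions rather than values from the measurements reported here.

```python
# A minimal sketch of computing encircled flux (EF) from a near-field intensity
# image of the fiber end face; pixel scale and radii are illustrative assumptions.
import numpy as np

def encircled_flux(intensity: np.ndarray, pixel_um: float, radius_um: float) -> float:
    """Fraction of total power within radius_um of the intensity-weighted centroid."""
    y, x = np.indices(intensity.shape)
    total = intensity.sum()
    cy, cx = (intensity * y).sum() / total, (intensity * x).sum() / total
    r = np.hypot(y - cy, x - cx) * pixel_um
    return intensity[r <= radius_um].sum() / total

# Synthetic near-field pattern for a 50 um core MM fiber (Gaussian-like, illustrative)
yy, xx = np.indices((512, 512))
pixel_um = 0.2
r_um = np.hypot(yy - 256, xx - 256) * pixel_um
nearfield = np.exp(-(r_um / 16.0) ** 2)

for radius in (4.5, 19.0):   # radii often used as EF control points for 50 um fiber
    print(f"EF at {radius} um: {encircled_flux(nearfield, pixel_um, radius):.2f}")
```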
Procedia PDF Downloads 376433 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy, and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
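The sketch below illustrates the regression part of such a workflow: an SVM model fitted to NIR-like spectra, with R² and RMSE reported for calibration and prediction sets; the data are synthetic and a simple variance filter stands in for the UVE variable selection used in the study.

```python
# A minimal sketch of SVM regression on NIR-like spectra with R2/RMSE reporting,
# assuming synthetic data; the UVE variable-selection step is replaced by a crude
# variance filter for brevity.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 700
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)   # smooth spectra-like data
gel_strength = X[:, 150] * 2.0 - X[:, 400] + rng.normal(scale=0.5, size=n_samples)

keep = np.argsort(X.var(axis=0))[-100:]          # stand-in for UVE variable selection
X_sel = StandardScaler().fit_transform(X[:, keep])

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, gel_strength, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)

for name, Xs, ys in (("calibration", X_tr, y_tr), ("prediction", X_te, y_te)):
    pred = model.predict(Xs)
    rmse = mean_squared_error(ys, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(ys, pred):.3f}, RMSE = {rmse:.3f}")
```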
Procedia PDF Downloads 38432 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM, to an increased reliance on offline processing to analyze and report the data. In response to this trend, TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data such as a secondary electron or cathodoluminescence images can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage and all information can be recovered on-demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with the support of remote control for the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
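As a rough illustration of the scripted post-processing and database linkage mentioned above, the sketch below takes a tabular particle-by-particle export, derives a user-defined property and loads aggregated modal mineralogy into a small database; the column and table names are hypothetical stand-ins and do not reflect the actual TIMA export schema or API.

```python
# A rough sketch of post-processing a tabular particle-by-particle export and feeding
# aggregated results into a database; column names are hypothetical, not the TIMA schema.
import pandas as pd
import sqlite3

# Stand-in for a particle-by-particle export (normally read from the exported file)
particles = pd.DataFrame({
    "sample_id": ["S1", "S1", "S1", "S2", "S2"],
    "dominant_mineral": ["quartz", "pyrite", "quartz", "chalcopyrite", "quartz"],
    "area_um2": [120.0, 45.0, 300.0, 80.0, 210.0],
})

# User-defined derived property: equivalent circular diameter from particle area
particles["ecd_um"] = 2 * (particles["area_um2"] / 3.141592653589793) ** 0.5

# Aggregate modal mineralogy (area %) per sample, e.g. for a plant dashboard
modal = (particles.groupby(["sample_id", "dominant_mineral"])["area_um2"].sum()
         .groupby(level=0).transform(lambda s: 100 * s / s.sum())
         .rename("area_pct").reset_index())

with sqlite3.connect("plant_dashboard.db") as con:
    modal.to_sql("modal_mineralogy", con, if_exists="replace", index=False)
print(modal)
```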
Procedia PDF Downloads 111431 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
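A minimal sketch of the building blocks of such an end-to-end pipeline is shown below: k-mer tokenization of a read, read embeddings obtained by averaging k-mer embeddings, and an attention-based multiple instance learning head that pools read vectors into a patient-level prediction; the dimensions, vocabulary and model are illustrative and are not the authors' metagenome2vec implementation.

```python
# A minimal sketch of k-mer tokenization, read embeddings and attention-based
# multiple instance learning over the reads of one patient; illustrative only.
import torch
import torch.nn as nn

K, EMB, VOCAB = 4, 64, 4 ** 4          # 4-mers over {A, C, G, T}
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def kmer_ids(read, k=K):
    # Encode each overlapping k-mer of the read as an integer id
    ids = []
    for i in range(len(read) - k + 1):
        idx = 0
        for b in read[i:i + k]:
            idx = idx * 4 + BASE[b]
        ids.append(idx)
    return torch.tensor(ids)

class AttentionMIL(nn.Module):
    """Embeds each read, then attention-pools read vectors into one patient vector."""
    def __init__(self):
        super().__init__()
        self.kmer_emb = nn.Embedding(VOCAB, EMB)
        self.attn = nn.Sequential(nn.Linear(EMB, 32), nn.Tanh(), nn.Linear(32, 1))
        self.clf = nn.Linear(EMB, 1)

    def forward(self, reads):
        # read embedding = mean of its k-mer embeddings
        read_vecs = torch.stack([self.kmer_emb(kmer_ids(r)).mean(dim=0) for r in reads])
        weights = torch.softmax(self.attn(read_vecs), dim=0)     # one weight per read
        patient_vec = (weights * read_vecs).sum(dim=0)
        return torch.sigmoid(self.clf(patient_vec)), weights.squeeze(-1)

model = AttentionMIL()
prob, read_weights = model(["ACGTACGTGGCA", "TTGACCAGTACA", "GGGTACCATGCA"])
print(prob.item(), read_weights.detach().numpy())
```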
Procedia PDF Downloads 126430 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELT) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller one (standard CubeSat). CubeSats have tight constraints on the available computational budget and on the payload volume allowed. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. In addition, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested Neural Networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will also be used as a wavefront sensor. In this work, we study a point source, i.e. the Point Spread Function [PSF] of the optical system, as an input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results call for further study with higher aberrations and noise.Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
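The sketch below shows the general shape of such an approach: a small VGG-like convolutional network that regresses segment phasing coefficients directly from a focal-plane PSF image; the image size, layer widths and number of phasing modes are illustrative assumptions, not the architecture used in the paper.

```python
# A minimal sketch of regressing segment phasing coefficients from a focal-plane PSF
# image with a small VGG-like CNN; sizes and mode count are illustrative assumptions.
import torch
import torch.nn as nn

N_MODES = 18   # e.g. a few phasing modes per segment (illustrative)

class PSFPhasingNet(nn.Module):
    def __init__(self, n_modes=N_MODES):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16 * 16, 256),
                                  nn.ReLU(), nn.Linear(256, n_modes))

    def forward(self, psf):                  # psf: (batch, 1, 128, 128)
        return self.head(self.features(psf))

model = PSFPhasingNet()
psf_batch = torch.rand(8, 1, 128, 128)       # stand-in for simulated PSFs
target = torch.randn(8, N_MODES) * 20.0      # stand-in phasing coefficients (nm RMS)
loss = nn.functional.mse_loss(model(psf_batch), target)
loss.backward()                              # backpropagate once (optimizer step omitted)
print(loss.item())
```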
Procedia PDF Downloads 106429 Policy Implications of Cashless Banking on Nigeria’s Economy
Authors: Oluwabiyi Adeola Ayodele
Abstract:
This study analysed the policy and general issues that have arisen over time in Nigeria’s cashless banking environment as a result of the lack of a legal framework for electronic banking in Nigeria. It undertook an in-depth study of the cashless banking system: it discussed the evolution, growth and development of cashless banking in Nigeria; revealed the expected benefits of the cashless banking system; appraised regulatory issues and other prevalent problems of cashless banking in Nigeria; and made appropriate recommendations where necessary. The study relied on primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria, statutes, conventions and judicial decisions, while the secondary sources included books, journal articles, newspapers and internet materials. The study revealed that cashless banking has been adopted in Nigeria but is still at the developing stage. It revealed that there is no law for the regulation of cashless banking in Nigeria; what Nigeria relies on for regulation is the Central Bank of Nigeria’s Cashless Policy, 2014. The Banks and Other Financial Institutions Act, Chapter B3, LFN 2004, lacks provisions to accommodate issues of internet banking. However, under the general principles of legality in criminal law, and by the provisions of the Nigerian Constitution, a person can only be punished for conduct that has been defined as criminal by written laws with the penalties specifically stated in the law. Although Nigeria has potent laws for the regulation of paper banking, these laws cannot simply be substituted for paperless transactions, because the issues involved in the two kinds of transactions differ. The study also revealed that the absence of law in the Nigerian cashless banking environment subjects consumers to endless risks. This study revealed that the creation of banking markets via the internet relies on both available technologies and appropriate laws and regulations. It revealed, however, that the laws of some of the other countries considered have taken care of most of the legal issues and other problems prevalent in the cashless banking environment. The study also revealed some other problems prevalent in the Nigerian cashless banking environment. The study concluded that for Nigeria to find solutions to the legal issues raised in its cashless banking environment and other problems of cashless banking, it should have a viable legal framework for internet banking. It concluded that the Central Bank of Nigeria’s policy on cashless banking is not potent enough to tackle the challenges posed to cashless banking in Nigeria because policies have only a persuasive, not a binding, effect. There is, therefore, a need for appropriate laws for the regulation of cashless banking in Nigeria. The study also concluded that there is a need to create more awareness of the system among Nigerians and to solve infrastructural problems such as the prevalent power outages that often create internet network problems.Keywords: cashless-banking, Nigeria, policies, laws
Procedia PDF Downloads 489