Search results for: multimodal fusion classifier
112 Reliability of Dry Tissues Sampled from Exhumed Bodies in DNA Analysis
Authors: V. Agostini, S. Gino, S. Inturri, A. Piccinini
Abstract:
In cases of corpse identification or paternity testing performed on an exhumed alleged father, organic samples such as bones and/or bone fragments, teeth, nails, and muscle fragments are usually sought and collected. DNA analysis of these cadaveric matrices usually leads to successful identification, but the typing results are often unsatisfactory, yielding highly degraded, partial, or even non-interpretable genetic profiles. Compounding the interpretative difficulties of such 'classical' organic matrices is the long and laborious sample treatment, which ranges from mechanical fragmentation to a protracted decalcification phase. These steps greatly increase the chance of sample contamination. In the present work, by contrast, we report the use of 'unusual' cadaveric matrices, demonstrating that their forensic genetic analysis can lead to better results in less time and with lower reagent costs. We present six case reports drawn from field experience in which eye swabs and cartilage were sampled and analyzed, yielding clear single-source genetic profiles useful for identification purposes. In all cases we used standard DNA tissue extraction protocols (as reported in the manufacturers' user manuals, e.g., QIAGEN or Invitrogen-Thermo Fisher Scientific), thus bypassing the long and difficult mechanical fragmentation and decalcification phases required for bone samples. PCR was carried out using the PowerPlex® Fusion System kit (Promega), and capillary electrophoresis was performed on an ABI PRISM® 310 Genetic Analyzer (Applied Biosystems®) with GeneMapper ID v3.2.1 (Applied Biosystems®) software. Familias software (version 3.1.3) was employed for kinship analysis. The genetic results proved to be much better than those obtained from bones or nails, both qualitatively and quantitatively and in terms of cost and timing. In this way, using the standard tissue DNA extraction procedure, an excellent genetic profile can be obtained in a shorter time and with maximum efficiency, one that is useful and easily interpreted for subsequent paternity testing and/or identification of human remains. Keywords: DNA, eye swabs and cartilage, identification of human remains, paternity testing
Procedia PDF Downloads 109
111 Engineering C₃ Plants with SbtA, a Cyanobacterial Transporter, for Enhancing CO₂ Fixation
Authors: Vandana Deopanée Tomar, Gurpreet Kaur Sidhu, Panchsheela Nogia, Rajesh Mehrotra, Sandhya Mehrotra
Abstract:
The cyanobacterial CO₂ concentrating mechanism (CCM) operates to raise the levels of CO₂ in the vicinity of the main carboxylation enzyme Rubisco which is encapsulated in protein micro compartments called carboxysomes. Thus, due to the presence of CCM, cyanobacterial cells are able to work with high photosynthetic efficiency even at low Ci conditions and can accumulate 1000 folds high internal concentrations of Ci than external environment. Engineering of some useful CCM components into higher plants is one of the plausible approaches to improve their photosynthetic performance. The first step and the simplest approach for attaining this objective would be the transfer of cyanobacterial bicarbonate transporter such as SbtA to inner chloroplast envelope of C₃ plants. For this, SbtA transporter gene from Synechococcus elongatus PCC 7942 was fused to a transit peptide element to generate chimeric constructs in order to direct it to chloroplast inner envelope. Two transit peptides namely, TnaXTP (transit peptide from AT3G56160) and TMDTP (transit peptide from AT2G02590) were shortlisted from Arabidopsis thaliana genome and cloned in plant expression vector pCAMBIA1302 having mgfp5 as a reporter gene. Plant transformation was done by agro infiltration and Agrobacterium mediated co-culture. DNA, RNA, and protein were isolated from the leaves four days post infiltration, and the presence of transgene was confirmed by gene specific PCR (Polymerase Chain Reaction) analysis and by RT-PCR (Reverse Transcription Polymerase Chain Reaction). The expression was confirmed at the protein level by western blotting using anti-GFP primary antibody and horseradish peroxidase (HRP) conjugated secondary antibody. The localization of the protein was detected by confocal microscopy of isolated protoplasts. We observed chloroplastic expression for both the fusion constructs which suggest that the transit peptide sequences are capable of taking the cargo protein to the chloroplasts. These constructs are now being used to generate stable transgenic plants by Agrobacterium mediated transformation. The stability of transgene expression will be analyzed from T₀ to T₂ generation.Keywords: agro infiltration, bicarbonate transporter, carbon concentrating mechanisms, cyanobacteria, SbtA
Procedia PDF Downloads 221
110 Analysis of the Brazilian Trade Balance in Relation to Mercosur: A Comparison between the Period 1989-1994 and 1994-2012
Authors: Luciana Aparecida Bastos, Tatiana Diair L. F. Rosa, Jesus Creapldi
Abstract:
The idea of Latin American integration originated in the ideals of Simón Bolívar, who in 1824 summoned the Ibero-American nations to the Amphictyonic Congress of Panama, held on June 22, 1826, where he intended to defend the importance of Latin American unity. However, the congress proved frustrating, and Bolívar's idea went no further. It was only after Europe began its own integration process, driven by the end of World War II, that the subject re-emerged in Latin America. Thus, in 1960, encouraged by the European integration process launched in 1957 and by the success of the ECSC (European Coal and Steel Community), itself building on the 1948 Benelux customs union between Belgium, the Netherlands, and Luxembourg, LAFTA (the Latin American Free Trade Association) was created in Latin America. In 1980, LAFTA was replaced by LAIA, the Latin American Integration Association, both with the same goal: to integrate Latin America, its economy, and its trade. Most researchers of this period agree that integration would expand the regional market. The creation of one or more economic blocs in the region would unite Latin American countries through a fusion of common interests and their geographical proximity, enabling them to develop joint projects to promote mutual growth and economic development, tariff reductions, and increased intra-regional trade, among many other shared goals. Thus, focusing on Mercosur, the main Latin American bloc, created in 1994, the aim of this paper is to provide a brief analysis of the trade balance performance of Brazil (the largest economy of the bloc) within Mercosur in two periods: 1989-1994 and 1994-2012. These periods were chosen because the objective is to compare the situation before and after Brazil's integration into Mercosur. The methodologies used were literature review and descriptive statistics. The results showed that after Brazil's integration into Mercosur, exports and imports grew within the bloc and the country became the leading importer from the other Mercosur economies; that is, after integration, Brazil was largely responsible for promoting the expansion of regional trade through imports of products from the other members of the bloc. Keywords: Brazil, Mercosur, integration, trade balance, comparison
Procedia PDF Downloads 324
109 The Use of Intraarticular Aqueous Sarapin for Treatment of Chronic Knee Pain in Elderly Patients in a Primary Care Setting
Authors: Robert E. Kenney, Richard B. Aguilar, Efrain Antunez, Gregory Schor-Haskin, Rafael Rey, Catie Falcon, Luis Arce
Abstract:
This study sought to explore the effect of Sarapin injections on chronic knee pain (CKP). Many adults suffer from CKP, which is most often attributed to osteoarthritis. Current treatment regimens for CKP involve the use of NSAID medications, steroid/analgesic injections, platelet-rich plasma injections, or orthopedic surgical interventions. Sarapin is a commercially available homeopathic aqueous extract of the pitcher plant. Studies on the use of Sarapin for cervical, thoracic, and lumbosacral facet joint nerve blocks have been performed with mixed results, and there is little available evidence on the use of Sarapin in CKP. This study examines the effect of a series of three weekly injections of aqueous Sarapin in 95 elderly patients with CKP in a primary care setting. Cano Health, a primary care group, identified 95 consecutive patients with CKP from its multimodal physiotherapy program for chronic pain. Patients were evaluated by a clinician, underwent diagnostic X-rays of the knees, and a treatment plan of three weekly Sarapin injections was discussed. A pain and functional limitation survey (a modified Lower Extremity Functional Scale (mLEFS)) was administered prior to initiating treatment (Entry Survey (ES)). Each patient received an intraarticular injection of 2 cc of aqueous Sarapin with 1 cc of 1% lidocaine during weeks 1, 2, and 3. The mLEFS was administered again at week 4, one week after the third Sarapin injection (Exit Survey (ExS)). Demographics: mean age 62 +/- 9.8; 73% female; 89% Hispanic/Latino; mean time between ES and ExS was 27.5 +/- 8.2 days. Survey: the mLEFS was based on a published Lower Extremity Functional Scale; each patient rated their pain or functional limitation from 0 (no difficulty) to 5 (severe difficulty) for 10 questions. Answers were summed and compared; the maximum score (severe difficulty) would be 50 points. Results: mean pain/functional scores were 30.3 +/- 12.1 at ES and 19.5 +/- 12.5 at ExS. This represents a relative improvement of 35.7% (P<0.00001). A total of 81% (77/95) of the patients showed improvement in symptoms at week 4 as assessed by the mLEFS. Eleven patients reported an increase in their survey scores, while 7 patients reported no change. In the cohort that reported improvement, the ES score was 30.9 +/- 11.4 and the ExS score was 16.3 +/- 9.8, yielding a 47.2% relative improvement (P<0.00001). Injections were well tolerated, and no adverse events were reported. Conclusions: in this cohort of 95 elderly patients with CKP, treatment with three weekly injections of Sarapin significantly improved pain and function as assessed by a mLEFS survey. The majority (81%) of patients responded positively to therapy, 12% had worsening symptoms, and 7% reported no change. The use of intraarticular injections of Sarapin for CKP was shown to be an effective treatment modality. Sarapin's low cost, tolerability, and ease of use make it an attractive alternative to NSAIDs, steroids, PRP, or surgical intervention for this common debilitating condition. Keywords: Sarapin, intraarticular, chronic knee pain, osteoarthritis
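As a minimal illustration of the before/after comparison reported above, the sketch below computes the relative improvement and a paired test on entry and exit mLEFS scores. The array values are synthetic stand-ins rather than the study data, and the paired t-test is only an assumed choice; the abstract does not state which significance test was used.

```python
# Illustrative only: synthetic mLEFS scores, not the study data.
import numpy as np
from scipy import stats

entry = np.array([30, 42, 25, 38, 31, 28, 45, 22, 35, 29], dtype=float)  # ES scores, 0-50 scale
exit_ = np.array([18, 30, 20, 25, 17, 19, 33, 15, 22, 16], dtype=float)  # ExS scores, 0-50 scale

relative_improvement = (entry.mean() - exit_.mean()) / entry.mean() * 100
t_stat, p_value = stats.ttest_rel(entry, exit_)    # paired t-test (assumed choice of test)
responders = np.mean(exit_ < entry) * 100          # share of patients whose score improved

print(f"Relative improvement: {relative_improvement:.1f}%")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.5f}")
print(f"Responders: {responders:.0f}%")
```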
Procedia PDF Downloads 89
108 Enhancing Archaeological Sites: Interconnecting Physically and Digitally
Authors: Eleni Maistrou, D. Kosmopoulos, Carolina Moretti, Amalia Konidi, Katerina Boulougoura
Abstract:
InterArch is an ongoing research project that has been running since September 2020. It aims to design a site-based digital application for archaeological sites and outdoor guided tours, supporting virtual and augmented reality technology. The research project is co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE - INNOVATE (project code: Τ2ΕΔΚ-01659). It involves collaboration between academic and cultural institutions and the contribution of an IT applications development company. The research will be completed by July 2023 and will run as a pilot project for the city of Ancient Messene, a place of outstanding natural beauty in the western Peloponnese that is considered one of the most important archaeological sites in Greece. The applied research project takes an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data, while at the same time embracing storytelling processes through an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. Mingling the real-world environment with its digital and cultural components by means of augmented reality techniques could potentially transform the on-site visit into an immersive, multimodal sensory experience. To this end, the project undertakes an extensive spatial analysis along with a detailed evaluation of the existing digital and non-digital archives, with the intention of correlating natural landscape morphology (including archaeological material remains and environmental characteristics) with the extensive historical records and cultural digital data. On-site research was carried out, during which visitors' itineraries were monitored and tracked throughout the archaeological visit using GPS locators. The results provide the project with useful insight into the way visitors engage and interact with their surroundings, depending on the sequence of their itineraries and the duration of stay at each location. Keywords: archaeological site, digital space, semantic interpretations, cultural heritage
Procedia PDF Downloads 72
107 Case Report: Opioid Sparing Anaesthesia with Dexmedetomidine in General Surgery
Authors: Shang Yee Chong
Abstract:
Perioperative pain is a complex mechanism activated by various nociceptive, neuropathic, and inflammatory pathways. Opioids have long been a mainstay for analgesia in this period, even as we are continuously moving towards a multimodal model to improve pain control while minimising side effects. Dexmedetomidine, a potent alpha-2 agonist, is a useful sedative and hypnotic agent. Its use in the intensive care unit has been well described, and it is increasingly an adjunct intraoperatively for its opioid sparing effects and to decrease pain scores. We describe a case of a general surgical patient in whom minimal opioids was required with dexmedetomidine use. The patient was a 61-year-old Indian gentleman with a history of hyperlipidaemia and type 2 diabetes mellitus, presenting with rectal adenocarcinoma detected on colonoscopy. He was scheduled for a robotic ultra-low anterior resection. The patient was induced with intravenous fentanyl 75mcg, propofol 160mg and atracurium 40mg. He was intubated conventionally and mechanically ventilated. Anaesthesia was maintained with inhalational desflurane and anaesthetic depth was measured with the Masimo EEG Sedline brain function monitor. An initial intravenous dexmedetomidine dose (bolus) of 1ug/kg for 10 minutes was given prior to anaesthetic induction and thereafter, an infusion of 0.2-0.4ug/kg/hr to the end of surgery. In addition, a bolus dose of intravenous lignocaine 1.5mg/kg followed by an infusion at 1mg/kg/hr throughout the surgery was administered. A total of 10mmol of magnesium sulphate and intravenous paracetamol 1000mg were also given for analgesia. There were no significant episodes of bradycardia or hypotension. A total of intravenous phenylephrine 650mcg was given throughout to maintain the patient’s mean arterial pressure within 10-15mmHg of baseline. The surgical time lasted for 5 hours and 40minutes. Postoperatively the patient was reversed and extubated successfully. He was alert and comfortable and pain scores were minimal in the immediate post op period in the postoperative recovery unit. Time to first analgesia was 4 hours postoperatively – with paracetamol 1g administered. This was given at 6 hourly intervals strictly for 5 days post surgery, along with celecoxib 200mg BD as prescribed by the surgeon regardless of pain scores. Oral oxycodone was prescribed as a rescue analgesic for pain scores > 3/10, but the patient did not require any dose. Neither was there nausea or vomiting. The patient was discharged on postoperative day 5. This case has reinforced the use of dexmedetomidine as an adjunct in general surgery cases, highlighting its excellent opioid-sparing effects. In the entire patient’s hospital stay, the only dose of opioid he received was 75mcg of fentanyl at the time of anaesthetic induction. The patient suffered no opioid adverse effects such as nausea, vomiting or postoperative ileus, and pain scores varied from 0-2/10. However, intravenous lignocaine infusion was also used in this instance, which would have helped improve pain scores. Paracetamol, lignocaine, and dexmedetomidine is thus an effective, opioid-sparing combination of multi-modal analgesia for major abdominal surgery cases.Keywords: analgesia, dexmedetomidine, general surgery, opioid sparing
Procedia PDF Downloads 136
106 Innovations and Challenges: Multimodal Learning in Cybersecurity
Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley
Abstract:
There is rapidly growing demand for professionals to fill positions in cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area encompassing all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in previous offerings and evolved out of an established partnership with Facebook on cybersecurity education. City College has designed a program in which courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to segue seamlessly into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College's Cybersecurity M.S. program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity Program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with the skill sets required by industry. These efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as adversarial AI, data privacy, secure cloud computing, and blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes to be offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development, offering examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master's Program and discuss the innovations, challenges, and possible future directions. Keywords: cybersecurity, New York, City College, graduate degree, Master of Science
Procedia PDF Downloads 148
105 Assessing Online Learning Paths in an Learning Management Systems Using a Data Mining and Machine Learning Approach
Authors: Alvaro Figueira, Bruno Cabral
Abstract:
Nowadays, students are commonly assessed through an online platform. Educators have moved on from a period in which they endured the transition from paper to digital. The use of a diversified set of question types, ranging from quizzes to open questions, is currently common in most university courses. In many courses today, the evaluation methodology also fosters students' online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although these activities can generate a lot of feedback for students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the premise that obtaining this information earlier in the semester may give students and educators an opportunity to resolve a potential problem regarding the student's current online engagement with the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files in the form of records that mark the beginning of actions performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student, and, finally, the online resource usage pattern. Then, using the grades assigned to students in previous years, we build a learning dataset that is used to feed a machine learning meta-classifier. The resulting classification model is then used to predict the grade a learning path is heading towards in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on his or her current situation, with past years as a reference. Our system can be applied to online courses that integrate the use of an online platform that stores user actions in a log file and that has access to other students' evaluations. The system is based on a data mining process over the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS. Keywords: data mining, e-learning, grade prediction, machine learning, student learning path
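As an illustration of the pipeline described in the abstract above, the following sketch derives simple per-student features from Moodle-style log records (approximate time spent per activity and access counts) and trains a classifier on grades from previous years to flag at-risk learning paths in the current year. The file names, column names, grade threshold, and the choice of a random forest are assumptions for the example; the abstract does not specify the exact meta-classifier or feature set used.

```python
# Minimal sketch, assuming a Moodle-style log with columns: student, timestamp, activity.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def build_features(log: pd.DataFrame) -> pd.DataFrame:
    """Derive per-student features: approximate time spent per activity and access counts."""
    log = log.sort_values(["student", "timestamp"])
    # Approximate time-on-activity as the gap until the student's next logged action (capped at 1 h).
    log["duration"] = (
        log.groupby("student")["timestamp"].shift(-1) - log["timestamp"]
    ).dt.total_seconds().clip(upper=3600).fillna(0)
    time_per_activity = log.pivot_table(
        index="student", columns="activity", values="duration", aggfunc="sum", fill_value=0
    )
    access_counts = log.pivot_table(
        index="student", columns="activity", values="timestamp", aggfunc="count", fill_value=0
    ).add_suffix("_count")
    return time_per_activity.join(access_counts)

# Previous years: features plus a binary "at risk" label derived from final grades
# (hypothetical file names and an illustrative passing threshold).
past_log = pd.read_csv("moodle_log_past.csv", parse_dates=["timestamp"])
grades = pd.read_csv("grades_past.csv", index_col="student")
X = build_features(past_log)
y = (grades.loc[X.index, "final_grade"] < 10).astype(int)  # 1 = heading towards a poor grade

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)

# Current year: flag students whose partial learning path looks at risk so far.
current = build_features(pd.read_csv("moodle_log_current.csv", parse_dates=["timestamp"]))
current = current.reindex(columns=X.columns, fill_value=0)
at_risk = current.index[clf.predict(current) == 1]
print("Students to alert:", list(at_risk))
```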
Procedia PDF Downloads 123
104 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, which used natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research. Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
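The following sketch covers only the downstream classification step described above: it assumes function-level embedding vectors have already been produced by a Code2Vec-style encoder from AST path contexts, and it trains a simple classifier on them, reporting accuracy and recall. The file name and the logistic-regression choice are illustrative assumptions, not the study's actual setup.

```python
# Downstream step only: classify functions from pre-computed code-embedding vectors.
# The upstream Code2Vec-style encoder that produces the embeddings from AST path
# contexts is assumed to exist and is not shown here; the .npz file is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

data = np.load("function_embeddings.npz")   # assumed arrays: "X" (embeddings), "y" (labels)
X, y = data["X"], data["y"]                 # y: 1 = vulnerable (e.g., OS command injection)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = LogisticRegression(max_iter=1000, class_weight="balanced")  # counters label imbalance
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("recall:  ", recall_score(y_test, pred))  # recall matters most: missed vulnerabilities are costly
```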
Procedia PDF Downloads 109
103 Cooperation of Unmanned Vehicles for Accomplishing Missions
Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin
Abstract:
The use of unmanned systems for different purposes has become very popular over the past decade, and expectations from these systems have increased incredibly in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle in a mission, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using vehicles of the same type together as a swarm helps, in particular, to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared among a number of homogeneous platforms. Moreover, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and in this case cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle that lacks position information is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous aerial drone is also used to solve the problem cooperatively. The autonomous drone likewise has limited sensors, such as a downward-looking camera and an IMU, and it also cannot compute its global position. In this context, the aim is to solve the problem effectively without additional support or input from outside, relying only on the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in coordination. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of their platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion. Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning
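As a minimal, assumption-heavy illustration of the cooperation pattern described above (not the authors' implementation), the sketch below has the aerial platform plan a path on an occupancy grid it observes from above and relay waypoints to the ground vehicle, replanning when the vehicle's ultrasonic sensor reports an unexpected obstacle. The grid, positions, and message passing are all hypothetical simplifications.

```python
# Minimal sketch of the cooperation pattern: the drone, seeing the labyrinth from above,
# plans a path on an occupancy grid and relays waypoints; the ground vehicle only follows
# waypoints and reports ultrasonic obstacle hits. Hypothetical illustration only.
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = wall) observed by the drone."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    if goal not in prev:
        return None
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

# Grid observed by the drone's downward-looking camera (hypothetical small labyrinth).
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 0, 0],
        [0, 1, 0, 1, 1],
        [0, 0, 0, 0, 0]]
path = plan_path(grid, start=(0, 0), goal=(4, 4))
print("Waypoints relayed to the ground vehicle:", path)

# If the ground vehicle's ultrasonic sensor reports an unexpected obstacle at the next
# waypoint, the drone marks that cell as occupied and replans from the vehicle's position.
grid[4][1] = 1
print("Replanned path:", plan_path(grid, start=(4, 0), goal=(4, 4)))
```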
Procedia PDF Downloads 129
102 Radiofrequency and Near-Infrared Responsive Core-Shell Multifunctional Nanostructures Using Lipid Templates for Cancer Theranostics
Authors: Animesh Pan, Geoffrey D. Bothun
Abstract:
With the development of nanotechnology, research in multifunctional delivery systems has taken on a new pace and dimension. An incipient challenge is to design an all-in-one delivery system that can be used for multiple purposes, including tumor-targeting therapy, radio-frequency (RF-), near-infrared (NIR-), light-, or pH-induced controlled release, photothermal therapy (PTT), photodynamic therapy (PDT), and medical diagnosis. In this regard, various inorganic nanoparticles (NPs) are known to show great potential as the 'functional components' because of their fascinating and tunable physicochemical properties and the possibility of multiple theranostic modalities from individual NPs. Magnetic, luminescent, and plasmonic properties are the three most extensively studied and, more importantly, biomedically exploitable properties of inorganic NPs. Although successful attempts at combining any two of the above-mentioned functionalities have been made, integrating them all in one system has remained a challenge. With this in mind, the controlled design of complex colloidal nanoparticle systems is one of the most significant challenges in nanoscience and nanotechnology, and systematic, planned studies providing better insight are needed. We report a multifunctional liposome-based delivery platform, loaded with drug and iron oxide magnetic nanoparticles (MNPs) and bearing a gold shell on the liposome surface, synthesized using a lipid-polyelectrolyte (layersome) templating technique. MNPs and the anti-cancer drug doxorubicin (DOX) were co-encapsulated inside liposomes composed of zwitterionic phosphatidylcholine and anionic phosphatidylglycerol using the reverse-phase evaporation (REV) method. The liposomes were coated with a positively charged polyelectrolyte (poly-L-lysine) to enrich the interface with gold anions, exposed to a reducing agent to form a gold nanoshell, and then capped with thiol-terminated polyethylene glycol (SH-PEG2000). The core-shell nanostructures were characterized by different techniques, including UV-Vis/NIR scanning spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). This multifunctional system achieves a variety of functions, such as radiofrequency (RF)-triggered release, chemo-hyperthermia, and NIR laser-triggered photothermal therapy. Herein, we highlight some of the remaining major design challenges in combination with preliminary studies assessing therapeutic objectives. We demonstrate an efficient loading and delivery system that induces significant cell death in human cancer cells (A549), with therapeutic capabilities. Coupling RF and NIR excitation with the doxorubicin-loaded core-shell nanostructure helped secure targeted and controlled drug release to the cancer cells. The present core-shell multifunctional system, with its multimodal imaging and therapeutic capabilities, would be an excellent candidate for cancer theranostics. Keywords: cancer theranostics, multifunctional nanostructure, photothermal therapy, radiofrequency targeting
Procedia PDF Downloads 128
101 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The prime focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model accounts for the sensitivity of noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in the prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique. Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
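A minimal, single-machine sketch of the core idea follows: each training sample receives a fuzzy membership based on its distance to its class center in feature space, and those memberships are passed as per-sample weights to an SVM with a hyperbolic tangent (sigmoid) kernel. The membership formula and parameter values are illustrative assumptions rather than the exact PFRSVM formulation, and the Hadoop/MapReduce parallelization is omitted.

```python
# Single-machine sketch of a fuzzy-weighted SVM with a hyperbolic tangent kernel.
# The membership formula below is an illustrative choice, not the exact PFRSVM model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, weights=[0.8, 0.2], random_state=0)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

def fuzzy_memberships(X, y, eps=1e-6):
    """Membership in (0, 1]: the closer a sample is to its class center, the higher its weight."""
    m = np.empty(len(y))
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        center = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - center, axis=1)
        radius = dist.max() + eps
        m[idx] = 1.0 - dist / (radius + eps)   # linear decay towards the class boundary
    return np.clip(m, eps, 1.0)

weights = fuzzy_memberships(X_tr, y_tr)
clf = SVC(kernel="sigmoid", C=10.0, gamma="scale")   # "sigmoid" is the tanh kernel in scikit-learn
clf.fit(X_tr, y_tr, sample_weight=weights)           # memberships down-weight likely-noisy samples

print("test accuracy:", clf.score(X_te, y_te))
print("support vectors per class:", clf.n_support_)
```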
Procedia PDF Downloads 491
100 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets
Authors: Ahmed. Abdel Bary, Omneya. Khowessah, Mojahed. al-jamrah
Abstract:
Introduction: The major challenge in the design of oral dosage forms lies in their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of Glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of Glimepiride (GMP) were prepared using two different ratios of two different carriers, namely PEG6000 and pluronic F127, and by adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential Scanning Calorimetry (DSC) analysis and Fourier Transform Infra-Red (FTIR) analysis were conducted to characterize thermal behavior and surface structure, respectively. The zeta potential and particle size of the prepared Glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and any of the excipients used. Based on the values of the different pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-pluronic F127 combination (F8) exhibited the greatest extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA of the percent of drug released from the prepared GMP nanodispersion formulae (F1-F8) after 20 and 60 minutes showed significant differences between the percent of drug released from the different GMP nanodispersed tablet formulae (F1-F8) (P<0.05). Conclusion: Preparation of Glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of Glimepiride. Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water-soluble drugs
Procedia PDF Downloads 488
99 Improving the Bioprocess Phenotype of Chinese Hamster Ovary Cells Using CRISPR/Cas9 and Sponge Decoy Mediated MiRNA Knockdowns
Authors: Kevin Kellner, Nga Lao, Orla Coleman, Paula Meleady, Niall Barron
Abstract:
Chinese Hamster Ovary (CHO) cells are the prominent cell line used in biopharmaceutical production. To improve yields and find beneficial bioprocess phenotypes genetic engineering plays an essential role in recent research. The miR-23 cluster, specifically miR-24 and miR-27, was first identified as differentially expressed during hypothermic conditions suggesting a role in proliferation and productivity in CHO cells. In this study, we used sponge decoy technology to stably deplete the miRNA expression of the cluster. Furthermore, we implemented the CRISPR/Cas9 system to knockdown miRNA expression. Sponge constructs were designed for an imperfect binding of the miRNA target, protecting from RISC mediated cleavage. GuideRNAs for the CRISPR/Cas9 system were designed to target the seed region of the miRNA. The expression of mature miRNA and precursor were confirmed using RT-qPCR. For both approaches stable expressing mixed populations were generated and characterised in batch cultures. It was shown, that CRISPR/Cas9 can be implemented in CHO cells with achieving high knockdown efficacy of every single member of the cluster. Targeting of one miRNA member showed that its genomic paralog is successfully targeted as well. The stable depletion of miR-24 using CRISPR/Cas9 showed increased growth and specific productivity in a CHO-K1 mAb expressing cell line. This phenotype was further characterized using quantitative label-free LC-MS/MS showing 186 proteins differently expressed with 19 involved in proliferation and 26 involved in protein folding/translation. Targeting miR-27 in the same cell line showed increased viability in late stages of the culture compared to the control. To evaluate the phenotype in an industry relevant cell line; the miR-23 cluster, miR-24 and miR-27 were stably depleted in a Fc fusion CHO-S cell line which showed increased batch titers up to 1.5-fold. In this work, we highlighted that the stable depletion of the miR-23 cluster and its members can improve the bioprocess phenotype concerning growth and productivity in two different cell lines. Furthermore, we showed that using CRISPR/Cas9 is comparable to the traditional sponge decoy technology.Keywords: Chinese Hamster ovary cells, CRISPR/Cas9, microRNAs, sponge decoy technology
Procedia PDF Downloads 200
98 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC) due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and gene expression analysis have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits for Big Data and computational biology, and it surveys the current state of the art and the future generation of HPC computing with Big Data. Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 365
97 The Taste of Macau: An Exploratory Study of Destination Food Image
Authors: Jianlun Zhang, Christine Lim
Abstract:
Local food is one of the most attractive elements to tourists. The role of local cuisine in destination branding is very important because it is the distinctive identity that helps tourists remember the destination. The objectives of this study are: (1) Test the direct relation between the cognitive image of destination food and tourists’ intention to eat local food. (2) Examine the mediating effect of tourists’ desire to try destination food on the relationship between the cognitive image of local food and tourists’ intention to eat destination food. (3) Study the moderating effect of tourists’ perceived difficulties in finding local food on the relationship between tourists’ desire to try destination food and tourists’ intention to eat local food. To achieve the goals of this study, Macanese cuisine is selected as the destination food. Macau is located in Southeastern China and is a former colonial city of Portugal. The taste and texture of Macanese cuisine are unique because it is a fusion of cuisine from many countries and regions of mainland China. As people travel to seek authentically exotic experience, it is important to investigate if the food image of Macau leaves a good impression on tourists and motivate them to try local cuisine. A total of 449 Chinese tourists were involved in this study. To analyze the data collected, partial least square-structural equation modelling (PLS-SEM) technique is employed. Results suggest that the cognitive image of Macanese cuisine has a direct effect on tourists’ intention to eat Macanese cuisine. Tourists’ desire to try Macanese cuisine mediates the cognitive image-intention relationship. Tourists’ perceived difficulty of finding Macanese cuisine moderates the desire-intention relationship. The lower tourists’ perceived difficulty in finding Macanese cuisine is, the stronger the desire-intention relationship it will be. There are several practical implications of this study. First, the government tourism website can develop an authentic storyline about the evolvement of local cuisine, which provides an opportunity for tourists to taste the history of the destination and create a novel experience for them. Second, the government should consider the development of food events, restaurants, and hawker businesses. Third, to lower tourists’ perceived difficulty in finding local cuisine, there should be locations of restaurants and hawker stalls with clear instructions for finding them on the websites of the government tourism office, popular tourism sites, and public transportation stations in the destination. Fourth, in the post-COVID-19 era, travel risk will be a major concern for tourists. Therefore, when promoting local food, the government tourism website should post images that show food safety and hygiene.Keywords: cognitive image of destination food, desire to try destination food, intention to eat food in the destination, perceived difficulties of finding local cuisine, PLS-SEM
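The hypothesized paths (direct effect, mediation by desire, moderation by perceived difficulty) can be illustrated with a simplified regression sketch on synthetic data, shown below. The study itself used PLS-SEM; the OLS models, simulated coefficients, and variable names here are assumptions meant only to show the logic of the three tests.

```python
# Simplified illustration of the hypothesized paths using OLS regressions on synthetic data.
# The study used PLS-SEM; this sketch only mirrors the logic of the direct, mediated,
# and moderated effects in an approximate way.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 449  # same sample size as the study, but the data here are simulated
image = rng.normal(size=n)                                  # cognitive image of destination food
desire = 0.6 * image + rng.normal(scale=0.8, size=n)        # desire to try destination food
difficulty = rng.normal(size=n)                             # perceived difficulty of finding it
intention = 0.3 * image + 0.5 * desire - 0.2 * desire * difficulty + rng.normal(scale=0.8, size=n)
df = pd.DataFrame(dict(image=image, desire=desire, difficulty=difficulty, intention=intention))

direct = smf.ols("intention ~ image", data=df).fit()                     # direct effect
a_path = smf.ols("desire ~ image", data=df).fit()                        # image -> desire
b_path = smf.ols("intention ~ desire + image", data=df).fit()            # desire -> intention
indirect = a_path.params["image"] * b_path.params["desire"]              # mediated effect (a*b)
moderation = smf.ols("intention ~ desire * difficulty", data=df).fit()   # moderation via interaction

print("direct effect of image:", round(direct.params["image"], 3))
print("indirect (mediated) effect:", round(indirect, 3))
print("desire x difficulty interaction:", round(moderation.params["desire:difficulty"], 3))
```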
Procedia PDF Downloads 190
96 Assessing Sustainability of Bike Sharing Projects Using Envision™ Rating System
Authors: Tamar Trop
Abstract:
Bike sharing systems can be important elements of smart cities as they have the potential for impact on multiple levels. These systems can add a significant alternative to other modes of mass transit in cities that are continuously looking for measures to become more livable and maintain their attractiveness for citizens, businesses and tourism. Bike-sharing began in Europe in 1965, and a viable format emerged in the mid-2000s thanks to the introduction of information technology. The rate of growth in bike-sharing schemes and fleets has been very rapid since 2008 and has probably outstripped growth in every other form of urban transport. Today, public bike-sharing systems are available on five continents, including over 700 cities, operating more than 800,000 bicycles at approximately 40,000 docking stations. Since modern bike sharing systems have become prevalent only in the last decade, the existing literature analyzing these systems and their sustainability is relatively new. The purpose of the presented study is to assess the sustainability of these newly emerging transportation systems, by using the Envision™ rating system as a methodological framework and the Israeli 'Tel -O-Fun' – bike sharing project as a case study. The assessment was conducted by project team members. Envision™ is a new guidance and rating system used to assess and improve the sustainability of all types and sizes of infrastructure projects. This tool provides a holistic framework for evaluating and rating the community, environmental, and economic benefits of infrastructure projects over the course of their life cycle. This evaluation method has 60 sustainability criteria divided into five categories: Quality of life, leadership, resource allocation, natural world, and climate and risk. 'Tel -O-Fun' project was launched in Tel Aviv-Yafo on 2011 and today provides about 1,800 bikes for rent, at 180 rental stations across the city. The system is based on a complex computer terminal that is located in the docking stations. The highest-rated sustainable features that the project scored include: (a) Improving quality of life by: offering a low cost and efficient form of public transit, improving community mobility and access, enabling the flexibility of travel within a multimodal transportation system, saving commuters time and money, enhancing public health and reducing air and noise pollution; (b) improving resource allocation by: offering inexpensive and flexible last-mile connectivity, reducing space, materials and energy consumption, reducing wear and tear on public roads, and maximizing the utility of existing infrastructure, and (c) reducing of greenhouse gas emissions from transportation. Overall, 'Tel -O-Fun' project was highly scored as an environmentally sustainable and socially equitable infrastructure. The use of this practical framework for evaluation also yielded various interesting insights on the shortcoming of the system and the characteristics of good solutions. This can contribute to the improvement of the project and may assist planners and operators of bike sharing systems to develop a sustainable, efficient and reliable transportation infrastructure within smart cities.Keywords: bike sharing, Envision™, sustainability rating system, sustainable infrastructure
Procedia PDF Downloads 341
95 Longitudinal impact on Empowerment for Ugandan Women with Post-Primary Education
Authors: Shelley Jones
Abstract:
Assumptions abound that education for girls will, as a matter of course, lead to their economic empowerment as women; yet little is known about the ways in which schooling for girls who traditionally/historically would not have had opportunities for post-primary, or perhaps even primary, education - such as the participants in this study based in rural Uganda - in reality impacts their economic situations. There is a need for longitudinal studies in which women share experiences, understandings, and reflections of their lives that can inform our knowledge of this. In response, this paper reports on stage four of a longitudinal case study (2004-2018) focused on education and empowerment for girls and women in rural Uganda, in which 13 of the 15 participants from the original study participated. This paper understands empowerment as not simply increased opportunities (e.g., employment) but also real gains in power, freedoms that enable agentive action, and authentic and viable choices/alternatives that offer 'exit options' from unsatisfactory situations. As with the other stages, this study used a critical, postmodernist, global feminist ethnographic methodology and multimodal, qualitative data collection. Participants took part in interviews, focus group discussions, and a two-day workshop, which explored whether and how they understood post-primary education to have contributed to their economic empowerment. A constructivist grounded theory approach was used for data analysis to capture major themes. Findings indicate that although all participants believe that post-primary education provided them with economic opportunities they would not have had otherwise, the parameters of their economic empowerment were severely constrained by historic and extant sociocultural, economic, political, and institutional structures that continue to disempower girls and women, as well as by the additional financial responsibilities they assumed to support others. Even though the participants had post-primary education and were able to obtain employment or operate businesses that they would likely not have been able to without it, the majority of the participants' incomes were not sufficient to lift them above the extreme poverty level, especially as many were single mothers and the sole income earners in their households. Furthermore, most deemed their working conditions unsatisfactory and their positions precarious; they also experienced sexual harassment and abuse in the labour force. Additionally, employment for the participants resulted in a double work burden: long days at work combined with many hours of domestic work at home (which, even when they had spousal partners, still fell almost exclusively to women). In conclusion, although the participants seem to have experienced some increase in economic empowerment, largely due to skills, knowledge, and qualifications gained at the post-primary level, numerous barriers prevented them from maximizing their capabilities and making significant gains in empowerment.
In addition to providing primary, secondary, and tertiary education to girls, there is a need to address the systemic gender inequalities that militate against women's empowerment, and to provide opportunities and freedom for women to come together to demand fair pay, reasonable working conditions and benefits, and freedom from gender-based harassment and assault in the workplace, as well as to advocate for the equal distribution of domestic work as a cultural change. Keywords: girls' post-primary education, women's empowerment, Uganda, employment
Procedia PDF Downloads 148
94 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads
Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li
Abstract:
Material mixing processes and related dynamic issues under extreme compression conditions have gained increasing attention in the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution in the high energy density regime are still lacking. In terms of macroscopic hydrodynamics, three numerical methods, direct numerical simulation (DNS), large eddy simulation (LES), and Reynolds-averaged Navier-Stokes equations (RANS), have achieved relatively acceptable consensus in the low energy density regime. However, in the high energy density regime they cannot be applied directly due to the occurrence of dissociation, ionization, and dramatic changes in the equation of state and thermodynamic properties, which may make the governing equations invalid in some coupled situations. At the micro/meso scale, by contrast, methods based on Molecular Dynamics (MD) and Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles-based electron force field MD (eFF-MD) methods are applied to investigate Richtmyer-Meshkov instability of metallic lithium and gaseous hydrogen (Li-H2) interface mixing at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method based on predefined potential functions has limits when applied to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process owing to its 'ab initio' character; 2) owing to computational cost, the eFF-MD results are also influenced by the simulation domain dimensions, boundary conditions, and relaxation time choices, and a series of tests has been conducted to determine optimized parameters; 3) ionization induced by strong shock compression has important effects on the Li-H2 interface evolution in RMI, indicating a new micromechanism of RMI in the high energy density regime. Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability
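For readers unfamiliar with the classical MD side of the comparison, the toy sketch below shows the basic ingredients of a classical simulation with a predefined pair potential: Lennard-Jones forces integrated with velocity Verlet, in reduced units. It is a generic illustration, not the Li-H2 shock setup of the study, and it makes the limitation noted above concrete: a fixed pair potential has no way to capture ionization, which is why the authors turn to eFF-MD.

```python
# Toy classical MD in reduced units: Lennard-Jones pair potential + velocity Verlet.
# Generic illustration only -- not the Li-H2 shock configuration used in the study.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and total potential energy (no cutoff, no periodicity)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv_r6 = (sigma**2 / r2) ** 3
            energy += 4 * eps * (inv_r6**2 - inv_r6)
            f = 24 * eps * (2 * inv_r6**2 - inv_r6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces, energy

rng = np.random.default_rng(1)
grid = np.arange(3) * 1.5                         # 3x3x3 simple cubic lattice, spacing 1.5 sigma
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float)
vel = rng.normal(scale=0.1, size=pos.shape)       # small random initial velocities
mass, dt = 1.0, 1e-3

forces, _ = lj_forces(pos)
for step in range(1000):                          # velocity Verlet integration
    pos += vel * dt + 0.5 * forces / mass * dt**2
    new_forces, energy = lj_forces(pos)
    vel += 0.5 * (forces + new_forces) / mass * dt
    forces = new_forces
    if step % 200 == 0:
        kinetic = 0.5 * mass * np.sum(vel**2)
        print(f"step {step:4d}  E_pot {energy:8.3f}  E_kin {kinetic:8.3f}")
```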
Procedia PDF Downloads 225
93 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to achieve high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of those businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions, and this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the dataset, such as imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large amounts of data generated by industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem is pervasive in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement is then developed to predict component replacement in advance by exploring historical aircraft data. The approach is based on a hybrid ensemble-based method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness, in terms of performance, of our approach using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets; the results likewise show good performance compared to other baseline classifiers. Keywords: prognostics, data-driven, imbalance classification, deep learning
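The abstract does not spell out the exact hybrid ensemble, so the sketch below shows one common way to combine minority-class oversampling with an ensemble learner for this kind of problem: SMOTE inside an imbalanced-learn pipeline around a random forest, evaluated with imbalance-aware metrics. The synthetic data and class ratio are placeholders for the real operation and maintenance records.

```python
# Illustrative hybrid approach to imbalanced fault prediction (not the paper's exact method):
# SMOTE oversampling of the minority "replace" class combined with a random forest,
# evaluated with metrics that remain informative on skewed classes.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for aircraft usage/maintenance features; the study uses 7 years
# of real operation and maintenance records.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=7)

model = Pipeline([
    ("smote", SMOTE(random_state=7)),              # oversample the minority class during training only
    ("forest", RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=7)),
])
model.fit(X_tr, y_tr)

print(classification_report(y_te, model.predict(X_te), target_names=["no action", "replace"]))
```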
Procedia PDF Downloads 175
92 The Integration of Apps for Communicative Competence in English Teaching
Authors: L. J. de Jager
Abstract:
In the South African English school curriculum, one of the aims is to achieve communicative competence, the knowledge of using language competently and appropriately in a speech community. Communicatively competent speakers should not only produce grammatically correct sentences but also produce contextually appropriate sentences for various purposes and in different situations. As most speakers of English are non-native speakers, achieving communicative competence remains a complex challenge. Moreover, the changing needs of society necessitate not merely language proficiency but also technological proficiency. One of the burning issues in the South African educational landscape is the replacement of the standardised literacy model by the pedagogy of multiliteracies, which incorporates, by default, the exploration of technological text forms that are part of learners' everyday lives. It foresees learners as decoders, encoders, and manufacturers of their own futures by exploiting technological possibilities to constantly create and recreate meaning. As such, 21st-century learners will feel comfortable working with multimodal texts that are intrinsically part of their lives and, by doing so, become authors of their own learning experiences, while teachers may become agents supporting learners in discovering their capacity to acquire new digital skills for the century of multiliteracies. The aim is transformed practice, where learners use their skills, ideas, and knowledge in new contexts. This paper reports on a research project on the integration of technology for language learning, based on the technological pedagogical content knowledge framework, conceptually founded in the theory of multiliteracies, which aims to achieve communicative competence. The qualitative study uses the community of inquiry framework to answer the research question: How does the integration of technology transform the language teaching of pre-service teachers? Pre-service teachers in the Postgraduate Certificate of Education Programme with English as methodology were purposively selected to source and evaluate apps for teaching and learning English. The participants collaborated online in a dedicated Blackboard module, using discussion threads to sift through applicable apps and to develop interactive lessons using them. The selected apps were entered onto a predesigned Qualtrics form. Data from the online discussions, focus group interviews, and reflective journals were thematically and inductively analysed to determine the participants' perceptions and experiences when integrating technology in lesson design and the extent to which communicative competence was achieved when using these apps. Findings indicate transformed practice among participants and research team members alike, with better-than-average technology acceptance and integration. Participants found value in online collaboration to develop and improve their own teaching practice by directly experiencing the benefits of integrating e-learning into the teaching of languages. It could not, however, be clearly determined whether communicative competence was improved. The findings of the project may potentially inform future e-learning activities, thus supporting student learning and development in follow-up cycles of the project. Keywords: apps, communicative competence, English teaching, technology integration, technological pedagogical content knowledge
Procedia PDF Downloads 168
91 Exploring the Concept of Fashion Waste: Hanging by a Thread
Authors: Timothy Adam Boleratzky
Abstract:
The goal of this transformative endeavour lies in the repurposing of textile scraps, heralding a renaissance in the creation of wearable art. Through a judicious fusion of Life Cycle Assessment (LCA) methodologies and cutting-edge techniques, this research embarks upon a voyage of exploration, unraveling the intricate tapestry of environmental implications woven into the fabric of textile waste. Delving deep into the annals of empirical evidence and scholarly discourse, the study not only elucidates the urgent imperative for waste reduction strategies but also unveils the transformative potential inherent in embracing circular economy principles within the hallowed halls of fashion. As the research unfurls its sails, guided by the compass of sustainability, it traverses uncharted territories, charting a course toward a more enlightened and responsible fashion ecosystem. The canvas upon which this journey unfolds is richly adorned with insights gleaned from the crucible of experimentation, laying bare the myriad pathways toward waste minimisation and resource optimisation. From the adoption of recycling strategies to the cultivation of eco-friendly production techniques, the research endeavours to sculpt a blueprint for a more sustainable future, one stitch at a time. In this unfolding narrative, the role of wearable art emerges as a potent catalyst for change, transcending the boundaries of conventional fashion to embrace a more holistic ethos of sustainability. Through the alchemy of creativity and craftsmanship, discarded textile scraps are imbued with new life, morphing into exquisite creations that serve as both a testament to human ingenuity and a rallying cry for environmental preservation. Each thread, each stitch, becomes a silent harbinger of change, weaving together a tapestry of hope in a world besieged by ecological uncertainty. As the research journey culminates, its echoes resonate far beyond the confines of academia, reverberating through the corridors of industry and beyond. In its wake, it leaves a legacy of empowerment and enlightenment, inspiring a generation of designers, entrepreneurs, and consumers to embrace a more sustainable vision of fashion. For in the intricate interplay of threads and textiles lies the promise of a brighter, more resilient future, where beauty coexists harmoniously with responsibility and where fashion becomes not merely an expression of style but a celebration of sustainability.Keywords: fabric-manipulation, sustainability, textiles, waste, wearable-art
Procedia PDF Downloads 46
90 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport
Authors: Aditya Purohit, Neha Bansal
Abstract:
Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions such as traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. To overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization, and smart transport. However, these methods primarily focus on vehicular mobility rather than people's accessibility. Addressing this research gap, this paper seeks to resolve mobility issues for the city of Ahmedabad in India, which, as the economic capital of Gujarat state, has a huge inflow of commuters and visitors. The research aims to resolve traffic congestion and urban mobility issues by focusing on the Gujarat State Regional Transport Corporation (GSRTC) for the city of Ahmedabad, analyzing GSRTC's existing operations and network structure and then exploring possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters) and their best possible solutions are established after a route network analysis of a 2,700-person sample drawn from 20 traffic junctions/nodes across the city. An approximately 5% sample (of passenger inflow) at each node is considered using a stratified random sampling technique. The two scenarios are: Scenario 1, resolving mobility issues through a Special Purpose Vehicle (SPV), a joint venture between GSRTC and private operators, to establish a feeder service providing transfers for passengers moving from the inner city to identified peripheral terminals; and Scenario 2, augmenting existing mass transport services such as BRTS and AMTS to use them as feeder services to the identified peripheral terminals. Each scenario is then analyzed for its suitability and feasibility for network restructuring. A desire-line diagram constructed from this analysis indicates that, on average, 62% of designated GSRTC routes overlap with the BRTS and AMTS mass transport routes in the city. This results in duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This limits GSRTC buses to the city fringe and prevents them from entering the city core. These GSRTC end-terminals are integrated with BRTS and AMTS services, which helps segregate intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network has resulted in complex transport access for commuters. As further scope of research, the value of access time within total travel time, its implication for the generalized cost of a trip, and how it varies across cities may be taken up. Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport
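The 62% route-overlap figure above comes from a desire-line and route network analysis; a much simpler stand-in for that kind of computation is sketched below. The data model (routes as ordered stop lists), the hypothetical stop names, and the 50% overlap threshold are assumptions for illustration, not the study's data or criterion.

```python
# Illustrative sketch (assumed data model, hypothetical stop names): estimating
# how many intercity (GSRTC-style) routes overlap the city mass-transit
# (BRTS/AMTS-style) network when each route is an ordered list of stops.
from typing import Dict, List

def overlapping_routes(intercity: Dict[str, List[str]],
                       city: Dict[str, List[str]],
                       threshold: float = 0.5) -> float:
    """Percentage of intercity routes whose stop share in the city network meets the threshold."""
    city_stops = {stop for stops in city.values() for stop in stops}
    overlapping = 0
    for stops in intercity.values():
        shared = sum(1 for s in stops if s in city_stops)
        if stops and shared / len(stops) >= threshold:
            overlapping += 1
    return 100.0 * overlapping / max(len(intercity), 1)

if __name__ == "__main__":
    gsrtc = {"R1": ["CBS", "Paldi", "Sarkhej"], "R2": ["CBS", "Naroda", "Dehgam"]}  # hypothetical
    brts_amts = {"B1": ["CBS", "Paldi", "Vasna"], "B2": ["CBS", "Naroda"]}           # hypothetical
    print(f"{overlapping_routes(gsrtc, brts_amts):.0f}% of intercity routes overlap the city network")
```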
Procedia PDF Downloads 198
89 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to commercialization. Today's HTS materials are mature and commercially promising but require manufacturing attention. In particular, given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to manage the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its superconducting properties. Damage potential is highest during peak operation, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition, the periodic swapping of cable conductors into precise positions, which provides the minimized reactance that power utilities require. A fully transposed SC cable, in theory, has no transmission length limits for AC and variable transient operation, owing to zero resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is addressing these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets. Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, shaping previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices. Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 141
88 Field Environment Sensing and Modeling for Pears towards Precision Agriculture
Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani
Abstract:
The introduction of sensor technologies into agriculture is a necessary step toward realizing precision agriculture. Although sensing methodologies themselves have become widespread owing to the miniaturization and falling cost of sensors, analyzing and understanding the sensing data remains difficult. Targeting the pear cultivar 'Le Lectier', which is particular to Niigata in Japan, cultivation environment data have been collected at pear fields by eight kinds of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors measure the environment inside the fruit bag used for pre-harvest bagging of pears; in this experiment, three kinds of fruit bags were used. After more than 100 days of continuous measurement, a large volume of sensing data was collected. Firstly, correlation analysis among the data measured by the respective sensors reveals that one sensor can replace another, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in the characteristics and performance of the three kinds of fruit bags are clarified by the inner-bag environmental measurements; statistical analysis shows that they differ significantly from each other. Lastly, a relational model between the sensing data and pear outlook quality is established using a Structural Equation Model (SEM). Here, pear outlook quality refers to the presence of stains, blobs, scratches, and similar defects caused by physiological impairment or disease. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression; it constructs a model connecting independent and dependent variables. The proposed SEM relates the measured sensing data to pear outlook quality determined on the basis of farmer judgement. In particular, the inner-bag humidity variable is found to have a relatively strong effect on pear outlook quality, so inner-bag humidity sensing might help farmers control it. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results of this research contribute to spreading precision agriculture technologies among farmers growing 'Le Lectier'. Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model
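The first finding, that strongly correlated sensors can substitute for one another, can be illustrated with a small screening step. The sketch below uses synthetic data and assumed column names, not the study's measurements, and flags sensor pairs whose correlation exceeds an illustrative 0.9 threshold.

```python
# Minimal sketch (synthetic data, assumed column names): correlation screening
# to flag sensor pairs so strongly correlated that one could stand in for the
# other. The 0.9 threshold is an illustrative assumption, not the study's criterion.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500  # pseudo hourly records
field_temp = 20 + 8 * np.sin(np.linspace(0, 20 * np.pi, n)) + rng.normal(0, 1, n)
df = pd.DataFrame({
    "field_temp": field_temp,
    "inner_bag_temp": field_temp + rng.normal(0.5, 0.8, n),    # closely tracks field temperature
    "field_humidity": 70 - 1.5 * field_temp + rng.normal(0, 5, n),
    "soil_moisture": rng.normal(30, 3, n),                      # largely independent signal
})

corr = df.corr()
redundant = [
    (a, b, corr.loc[a, b])
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) >= 0.9
]
for a, b, r in redundant:
    print(f"{a} and {b} are highly correlated (r = {r:.2f}); one could replace the other")
```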
Procedia PDF Downloads 314
87 Examining the Links between Fish Behaviour and Physiology for Resilience in the Anthropocene
Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts
Abstract:
Changes in behaviour and physiology are the most important responses of marine life to anthropogenic impacts such as climate change and over-fishing. Behavioural changes (such as a shift in distribution or changes in phenology) can ensure that a species remains in an environment suited to its optimal physiological performance. However, if organisms are unable to shift their distribution, they must rely on physiological adaptation, either by broadening their metabolic curves to tolerate a range of stressors or by shifting them to maximize performance at extreme stressor levels. Moreover, since there are links between fish physiology and behaviour, changes to either trait may interact reciprocally. This paper reviews the current knowledge of the links between the behaviour and physiology of fishes, discusses these in the context of exploitation and climate change, and makes recommendations for future research needs. The review revealed that our understanding of the links between fish behaviour and physiology is rudimentary. However, both are hypothesized to be linked to stress responses along the hypothalamic-pituitary axis. The link between physiological capacity and behaviour is particularly important, as both determine the response of an individual to a changing climate and are under selection by fisheries. While it appears that all types of capture fisheries are likely to reduce the adaptive potential of fished populations to climate stressors, angling, which is primarily associated with recreational fishing, may induce fission of natural populations by removing individuals with bold behavioural traits and potentially the physiological traits required to facilitate behavioural change. Future research should focus on assessing how the links between physiological capacity and behaviour influence catchability, the response to climate change drivers, and post-release recovery. The plasticity of phenotypic traits should be examined under a range of stressors of differing intensity in several species and life history stages. Future studies should also assess plasticity (fission or fusion) in the phenotypic structuring of social hierarchy and how this influences habitat selection. Ultimately, to fully understand how physiology is influenced by the selective processes driven by fisheries, long-term monitoring of the physiological and behavioural structure of fished populations, their fitness, and catch rates is required. Keywords: climate change, metabolic shifts, over-fishing, phenotypic plasticity, stress response
Procedia PDF Downloads 118
86 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: The most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: Higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: Diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: Smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine
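A compact sketch of the modelling step described above is given below. It is not the study's code: the synthetic data, variable names, toy risk rule, and stand-in survey weights are assumptions. It shows a gradient-boosting classifier fitted with sample weights, evaluated by AUC-ROC, and interpreted with SHAP values.

```python
# Hedged sketch (synthetic stand-in data, not NHANES): gradient boosting for
# diabetes status with survey-style sample weights and a SHAP feature summary.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(20, 85, n),
    "bmi": rng.normal(28, 6, n).clip(16, 60),
    "tchol": rng.normal(190, 35, n),
    "smoker": rng.integers(0, 2, n),
})
# toy risk rule: older age and higher BMI raise the probability of diabetes
logit = -7.0 + 0.05 * X["age"] + 0.08 * X["bmi"] + 0.2 * X["smoker"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
weights = rng.uniform(0.5, 2.0, n)              # stand-in for NHANES survey weights

X_tr, X_te, y_tr, y_te, w_tr, w_te = train_test_split(
    X, y, weights, test_size=0.2, stratify=y, random_state=0)

gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_tr, y_tr, sample_weight=w_tr)
auc = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1], sample_weight=w_te)
print(f"weighted AUC-ROC: {auc:.3f}")

# SHAP values rank predictors by their contribution to the model output
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_te)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {value:.3f}")
```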
Procedia PDF Downloads 12
85 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers
Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya
Abstract:
In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see whether the rich spatiotemporal information embedded in these data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the Embryoscope for 100% of embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to separate the two groups. Major findings: The performance of the model was evaluated using 100 repetitions of random subsampling cross-validation (80% train, 20% test). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion: The high accuracy in the main analysis and the chance-level accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods to capture the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample, together with complementary analyses predicting other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting. Keywords: IVF, embryo, machine learning, time-lapse imaging data
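One plausible reading of the tensor-based pipeline described above is sketched below; it is not the authors' implementation. It assumes the 4th-order tensor is laid out as embryos x frames x height x width, uses a CP/PARAFAC decomposition (tensorly) whose embryo-mode factor matrix serves as the feature matrix, and evaluates a simple classifier with 100 repetitions of 80/20 random subsampling on synthetic stand-in data. Rank, image size, and classifier choice are assumptions.

```python
# Hedged sketch (synthetic data, assumed tensor layout): CP decomposition of a
# 4th-order time-lapse tensor, embryo-mode factors used as classifier features.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
n_embryos, n_frames, h, w = 200, 30, 16, 16
y = np.repeat([0, 1], n_embryos // 2)                    # nonpregnant vs live birth

# synthetic stand-in for time-lapse stacks: class 1 gets a weak extra
# spatiotemporal pattern so the pipeline has something to find
base = rng.normal(size=(n_frames, h, w))
X = rng.normal(size=(n_embryos, n_frames, h, w))
X[y == 1] += 0.3 * base

# CP/PARAFAC decomposition; the embryo-mode factor matrix (n_embryos x rank)
# becomes the feature matrix
weights, factors = parafac(tl.tensor(X), rank=5, random_state=0)
features = factors[0]

# 100 repetitions of 80/20 random-subsampling cross-validation, as in the abstract
cv = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=cv)
print(f"mean accuracy over 100 resamples: {scores.mean():.2f}")
```

For a sketch, the decomposition is fit on all embryos before cross-validation; in a real analysis it would preferably be refit within each training split to avoid information leakage.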
Procedia PDF Downloads 93
84 The Importance of Anthropometric Indices for Assessing the Physical Development and Physical Fitness of Young Athletes
Authors: Akbarova Gulnozakhon
Abstract:
Relevance. Physical exercise can prolong the function of the growth zones of the long tubular bones, delaying the fusion of the epiphyses and diaphyses and thus increasing body growth. At the same time, intensive strength exercise can accelerate ossification of the bone growth zones and slow their growth in length. The influence of physical exercise on the process of biological maturation has been noted: gymnastics, which requires intense speed and strength loads, delays puberty. On the other hand, it has been suggested that the relatively slow puberty of gymnasts is associated with the selection of girls with a particular somatotype for this sport. It has been found that the later onset of menstruation in female athletes does not have a negative effect on maturation or fertility (the ability to procreate), and a normalizing influence of sport on the puberty of girls has been observed. Purpose of the study. Our goal is to study the effect of physical activity of varying intensity on the formation of secondary sexual characteristics and the hormonal status of adolescent girls. Each biological process of an organism is not in a stationary state but fluctuates with a certain frequency; by duration, these include, for example, circadian cycles and infradian cycles, a typical example of the latter being the menstrual cycle. Materials, methods, and results. Menstrual function disorders in athletes were detected using a questionnaire comprising several sections and sub-sections covering personal data, anthropometric indicators (taking anthropometric indices into account), and information about the menstrual cycle. Of 135 female athletes aged 13 to 16 years engaged in various sports, menstrual function disorders (primary or secondary amenorrhea, irregular menstrual cycles) were noted in 86.7% of gymnasts and in 57.1% of swimmers. General condition also changes during the menstrual cycle: a large percentage of athletes report increased irritability in the premenstrual (45%) and menstrual (36%) phases, and increased fatigue in 46.5% and 58% of cases, respectively, during these phases. In girls, secondary sexual characteristics continue to form during puberty, and the clearest indicator of the onset of puberty is the age at first menstruation (menarche). Conclusions. 1. Physical exercise has a positive effect on all major body systems and thus promotes health. 2. Along with its beneficial effect on health, physical exercise can be harmful if sporting requirements are not observed. Keywords: girls health, anthropometric, physical development, reproductive health
Procedia PDF Downloads 104
83 Jarcho-Levin Syndrome: A Case Report
Authors: Atitallah Sofien, Bouyahia Olfa, Romdhani Meriam, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir
Abstract:
Introduction: Spondylothoracic dysostosis, also known as Jarcho-Levin syndrome, is defined by a shortened neck and thorax, a protruding abdomen, inguinal and umbilical hernias, an atypical spinal structure and rib fusion leading to restricted chest movement or difficulty in breathing, along with urinary tract abnormalities and potentially severe scoliosis. Aim: We report the case of a patient diagnosed with Jarcho-Levin syndrome, aiming to detail the range of abnormalities observed in this syndrome, the complications encountered, and the therapeutic approaches employed. Results: A three-month-old male infant, born of a consanguineous marriage and delivered at full term by cesarean section, was admitted to the pediatric department for severe acute bronchiolitis. In his prenatal history, morphological ultrasound revealed macrosomia, a shortened spine, and irregular vertebrae with thickened skin; fetal cardiac ultrasound was normal, and the right kidney was absent. His perinatal history included respiratory distress requiring ventilatory support for five days. On physical examination, he had stunted growth, scoliosis, a short neck and trunk, upper limbs longer than the lower limbs, varus equinus of the right foot, a neural tube defect, a low hairline, and low-set ears. Spondylothoracic dysostosis was suspected, leading to further investigations: a normal transfontanellar ultrasound, a spinal cord ultrasound revealing a lipomyelocele-type closed dysraphism with a low-attached cord, an abdominal ultrasound indicating a single left kidney, and a cardiac ultrasound identifying Kommerell syndrome. Due to a lack of resources, genetic testing could not be performed, and the diagnosis was based on clinical criteria. Conclusion: Jarcho-Levin syndrome carries a mortality rate of about 50%, primarily due to respiratory complications associated with thoracic insufficiency syndrome. Other complications, such as heart and neural tube defects, can also lead to premature mortality. Therefore, early diagnosis and comprehensive treatment involving various specialists are essential. Keywords: Jarcho-Levin syndrome, congenital disorder, scoliosis, spondylothoracic dysostosis, neural tube defect
Procedia PDF Downloads 58