165 The Study of Fine and Nanoscale Gold in the Ores of Primary Deposits and Gold-Bearing Placers of Kazakhstan
Authors: Omarova Gulnara, Assubayeva Saltanat, Tugambay Symbat, Bulegenov Kanat
Abstract:
The article discusses the problem of developing a methodology for studying fine and nanoscale gold in ores and placers of primary deposits, which will make it possible to develop schemes for revealing dispersed gold inclusions and thus improve its recovery rate, increasing the gold reserves of the Republic of Kazakhstan. The type of gold studied is characterized by a number of features. In connection with this, the conditions of its concentration and distribution in ore bodies and formations, as well as the possibility of reliably determining it by "traditional" methods, differ significantly from those of fine gold (less than 0.25 microns), and even more so from those of larger grains. The mineral composition of rocks (metasomatites) and gold ore, and the mineralization associated with them, were studied in detail in the Kalba ore field in Kazakhstan. Mineralized zones were identified, and samples were taken from them for analytical studies. The research revealed paragenetic relationships of newly formed mineral formations at the nanoscale, which makes it possible to clarify the conditions for the formation of deposits with a particular type of mineralization. This will provide significant assistance in developing a scheme for study. Typomorphic features of gold were revealed, and mechanisms of formation and aggregation of gold nanoparticles were proposed. The presence of a large number of particles isolated at the laboratory stage from concentrates of gravitational enrichment can serve as an indicator of the presence of even smaller particles in the object. Even the most advanced devices based on gravitational methods for gold concentration provide extraction of metal at a level of around 50%, while pulverized metal is extracted much worse, and gold of less than 1 micron in size is extracted at only a few percent. Therefore, when particles of gold smaller than 10 microns are detected, their actual numbers may be significantly higher than expected. In particular, at the studied sites, enrichment of slurry and samples with volumes up to 1 m³ was carried out using a screw lock or separator to produce a final concentrate weighing up to several kilograms. Free gold particles were extracted from the concentrates in the laboratory using a number of processes (magnetic and electromagnetic separation, washing with bromoform in a cup to obtain an ultraconcentrate, etc.) and examined under electron microscopes to investigate the nature of their surface and their chemical composition. The main result of the study was the detection of gold nanoparticles located on the surface of loose metal grains. The most characteristic forms of gold secretions are individual nanoparticles and aggregates of different configurations. Sometimes, aggregates form solid dense films, deposits, and crusts, all of which are confined to the negative forms of the nano- and microrelief on the surfaces of gold grains. The results will provide significant knowledge about the prevalence and conditions for the distribution of fine and nanoscale gold in Kazakhstan deposits, as well as the development of methods for studying it, which will minimize losses of this type of gold during extraction. Acknowledgments: This publication has been produced within the framework of the Grant "Development of methodology for studying fine and nanoscale gold in ores of primary deposits, placers and products of their processing" (АР23485052, №235/GF24-26).
Keywords: electron microscopy, micromineralogy, placers, fine and nanoscale gold
164 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City
Authors: Jothilakshmy Nagammal
Abstract:
This research paper deals with different case studies where Form-Based Codes are adopted in general, and discusses the different implementation methods in particular, in order to develop a method for formulating a new planning model. The organizing principle of the Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison is carried out of the planning tools used, the code processes adopted, and the various control regulations implemented in thirty-two different cities. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan, using the transect as the organizing principle for the entire area of the city in general, and formulating the Form-Based Codes for the selected special districts in the study area in particular, on a street basis. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of as more linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies, and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system - a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in "sensitive" areas of the community. How the new Context Specific Planning Model is created with this approach and method towards achieving sustainability is explained in detail in this research paper.
Keywords: context based planning model, form based code, transect, systemic approach
163 Regulation Effect of Intestinal Microbiota by Fermented Processing Wastewater of Yuba
Authors: Ting Wu, Feiting Hu, Xinyue Zhang, Shuxin Tang, Xiaoyun Xu
Abstract:
As a by-product of yuba, the processing wastewater of yuba (PWY) contains many bioactive components, such as soybean isoflavones, soybean polysaccharides, and soybean oligosaccharides, which make it a good source of prebiotics with potential for high-value utilization. PWY fermented with Lactobacillus plantarum can be considered a potential biogenic preparation, which can regulate the balance of the intestinal microbiota. In this study, firstly, Lactobacillus plantarum was used to ferment PWY to improve its content of active components and its antioxidant activity. Then, the health effect of fermented processing wastewater of yuba (FPWY) was measured in vitro. Finally, microencapsulation technology was applied to improve the sustained release of FPWY and reduce the loss of active components during digestion, as well as to improve the activity of FPWY. The main results are as follows: (1) FPWY presented a good antioxidant capacity, with DPPH free radical scavenging ability (0.83 ± 0.01 mmol Trolox/L), ABTS free radical scavenging ability (7.47 ± 0.35 mmol Trolox/L), and iron ion reducing ability (1.11 ± 0.07 mmol Trolox/L). Compared with non-fermented processing wastewater of yuba (NFPWY), there was no significant difference in the content of total soybean isoflavones, but the content of glucoside soybean isoflavones decreased, and aglycone soybean isoflavones increased significantly. Fermentation effectively reduced the soluble monosaccharides, disaccharides, and oligosaccharides in PWY, such as glucose, fructose, galactose, trehalose, stachyose, maltose, raffinose, and sucrose. (2) FPWY can significantly enhance the growth of beneficial bacteria such as Bifidobacterium, Ruminococcus, and Akkermansia, significantly inhibit the growth of the harmful bacterium E. coli, regulate the structure of the intestinal microbiota, and significantly increase the content of short-chain fatty acids such as acetic acid, propionic acid, butyric acid, and isovaleric acid. The higher amount of lactic acid in the gut can be further broken down into short-chain fatty acids. (3) In order to improve the stability of the soybean isoflavones in FPWY during digestion, sodium alginate and chitosan were used as wall materials for embedding. The FPWY freeze-dried powder was embedded by an orifice-coagulation bath method. The results show that when the core-to-wall ratio is 3:1, the concentration of chitosan is 1.5%, the concentration of sodium alginate is 2.0%, and the concentration of calcium is 3%, the embedding rate is 53.20%. In the simulated in vitro digestion stage, the release rate of the microcapsules reached 59.36% at the end of gastric digestion and 82.90% at the end of intestinal digestion; thus, almost all of the core material was released, showing the good sustained-release performance of the microcapsules. The structural analysis of the FPWY microcapsules shows that they have good mechanical properties: their hardness, springiness, cohesiveness, gumminess, chewiness, and resilience were 117.75 ± 0.21 g, 0.76 ± 0.02, 0.54 ± 0.01, 63.28 ± 0.71 g·sec, 48.03 ± 1.37 g·sec, and 0.31 ± 0.01, respectively. Compared with the unembedded FPWY, the infrared spectrum results confirmed that the FPWY freeze-dried powder had been embedded in the microcapsules.
Keywords: processing wastewater of yuba, Lactobacillus plantarum, intestinal microbiota, microcapsule
162 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa
Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini
Abstract:
Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements. Briefly speaking, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, two different sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first immobilized separately with anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich the P. aeruginosa cells. When incubated with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the bonding of MN₄₀₀ with P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa. Due to the different magnetic performance of MN₁₀ and MN₄₀₀ in the magnetic field, caused by their different saturation magnetization, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation, so that only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low-field magnetic field, serve as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor were demonstrated by detecting real food samples and validated using plate counting methods. With only two steps and less than 2 hours needed for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology testing assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time
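As a hedged illustration of how a quantitative MRSw readout of this kind is typically calibrated, the sketch below regresses a T₂ change against the logarithm of cell concentration over the linear range quoted above; the ΔT₂ values are invented for exposition, since the abstract reports only the range endpoints and the detection limit.

```python
# Hedged sketch: calibration of an MRSw-type readout. The delta_t2 values
# are invented placeholders; only the concentration range
# (3.1e2 - 3.1e7 cfu/mL) comes from the abstract.
import numpy as np

conc = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])  # cfu/mL
delta_t2 = np.array([4.1, 8.0, 12.2, 15.9, 20.3, 24.1])      # ms (illustrative)

# Linear fit of signal vs log10(concentration), the usual calibration form
slope, intercept = np.polyfit(np.log10(conc), delta_t2, 1)
print(f"dT2 = {slope:.2f} * log10(C) + {intercept:.2f}")

# Back-calculate an unknown sample from its measured dT2 (13 ms assumed)
c_unknown = 10 ** ((13.0 - intercept) / slope)
print(f"estimated concentration ~ {c_unknown:.2e} cfu/mL")
```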
161 Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices
Authors: Kaustav Mukherjee
Abstract:
In this paper we propose a neural style transfer solution whereby we have created a lightweight, separable-convolution-kernel-based GAN architecture (LC-GAN), which will be very useful for designing filters for mobile phone cameras and also for edge devices, converting any image to the 2D animated comic style of movies like He-Man, Superman, or The Jungle Book. This will help 2D animation artists create new characters from images of real people without having to spend endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time to make 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used for camera filters capable of taking comic-style shots using a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarcity of resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which clearly makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and just 25 extra epochs, trained on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; we then use a point-wise convolution with a 1-by-1 kernel to bring the network back to the required channel number. This reduces the number of parameters substantially and makes the network extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which lets the network exploit the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss
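A minimal sketch of the depthwise separable building block described above, assuming PyTorch (the paper's own code and layer sizes are not given, so the channel counts below are illustrative):

```python
# Hedged sketch (PyTorch assumed): a depthwise separable convolution block
# of the kind the abstract describes - a per-channel (depthwise) convolution
# followed by a 1x1 (point-wise) convolution that restores the channel count.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        # groups=in_ch convolves each input channel independently
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        # 1x1 kernel mixes channels and sets the output channel number
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a standard convolution (illustrative sizes):
std = nn.Conv2d(64, 128, 3, padding=1)                # 128*64*9 + 128 = 73,856
sep = DepthwiseSeparableConv(64, 128, kernel_size=3)  # 640 + 8,320  =  8,960
print(sum(p.numel() for p in std.parameters()))
print(sum(p.numel() for p in sep.parameters()))
```

With these illustrative sizes the separable block uses roughly 8 times fewer parameters than a standard convolution, which is the source of the weight savings the abstract claims.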
160 Effect of Radioprotectors on DNA Repair Enzyme and Survival of Gamma-Irradiated Cell Division Cycle Mutants of Schizosaccharomyces pombe
Authors: Purva Nemavarkar, Badri Narain Pandey, Jitendra Kumar
Abstract:
Introduction: The objective was to understand the effect of various radioprotectors on the DNA damage repair enzyme and on survival in gamma-irradiated wild-type and cdc mutants of S. pombe (fission yeast) cultured under permissive and restrictive conditions. The DNA repair process, as influenced by radioprotectors, was measured via the activity of DNA polymerase in the cells. The single-cell gel electrophoresis (SCGE) or comet assay was employed to follow gamma-irradiation-induced DNA damage and the effect of radioprotectors. In addition, the effect of caffeine at different concentrations on the S-phase of the cell cycle was delineated. Materials and Methods: S. pombe cells were grown at the permissive temperature (25°C) and/or the restrictive temperature (36°C), followed by gamma-radiation. Percentage survival and the activity of DNA polymerase (yPol II) were determined after post-irradiation incubation (5 h) with radioprotectors such as caffeine, curcumin, disulphiram, and ellagic acid (the dose depending on the individual D₃₇ values). The gamma-irradiated yeast cells (with and without the radioprotectors) were spheroplasted with the enzyme glusulase and subjected to electrophoresis. Radio-resistant cells were obtained by arresting cells in S-phase using transient treatment with hydroxyurea (HU), and the effect of caffeine at different concentrations on the S-phase of the cell cycle was studied. Results: The mutants of S. pombe showed an insignificant difference in survival when grown under permissive conditions. However, growth of these cells at the restrictive temperature leads to arrest in specific phases of the cell cycle in the different cdc mutants (cdc10: G1 arrest, cdc22: early S arrest, cdc17: late S arrest, cdc25: G2 arrest). All the cdc mutants showed a decrease in survival after gamma radiation when grown at permissive and restrictive temperatures. Inclusion of the radioprotectors at their respective concentrations during post-irradiation incubation increased the survival of the cells. The activity of the DNA polymerase enzyme (yPol II) increased significantly in cdc mutant cells exposed to gamma-radiation. Following SCGE, a linear relationship was observed between doses of irradiation and the tail moments of the comets. The radioprotection of the fission yeast by the radioprotectors can be seen in the reduced tail moments of the yeast comets. Caffeine also exhibited its radioprotective ability in radio-resistant S-phase cells obtained after HU treatment. Conclusions: The radioprotectors offered notable radioprotection in cdc mutants when added during irradiation. The present study showed activation of the DNA damage repair enzyme (yPol II) and an increase in survival after treatment with radioprotectors in gamma-irradiated wild-type and cdc mutants of S. pombe. The results presented here showed the feasibility of applying SCGE in fission yeast to follow DNA damage and radioprotection at high doses, which is not feasible with other eukaryotes. Addition of caffeine at 1 mM concentration to S-phase cells offered protection and did not decrease cell viability. Thus, at a minimal concentration, caffeine offered marked radioprotection.
Keywords: radiation protection, cell cycle, fission yeast, comet assay, S-phase, DNA repair, radioprotectors, caffeine, curcumin, SCGE
159 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, i.e., the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare
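As a hedged illustration of the reported validation metrics, the sketch below shows how accuracy, precision, and recall relate to a confusion matrix; the counts are invented to roughly reproduce the quoted 92%/89%/91% figures and are not the study's data.

```python
# Hedged sketch: how the reported validation metrics relate to a confusion
# matrix. The counts below are illustrative, not the study's data.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of flagged patients, how many truly at risk
    recall = tp / (tp + fn)      # of at-risk patients, how many were flagged
    return accuracy, precision, recall

# Example counts chosen to reproduce the quoted 92% / 89% / 91% figures
acc, prec, rec = metrics(tp=91, fp=11, fn=9, tn=139)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
```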
158 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India is a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP (Early Warning System for Flood Prediction) using advanced computational tools/methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of the hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing technology provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems having high resolution for local-level warning analysis with a better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such big areas at optimum resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near real time. To disseminate warnings to the end user, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to facilitate users in managing information related to flooding during critical flood seasons and analyzing the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
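The quoted strong-scaling figures imply a parallel efficiency of roughly 89%, as the short sketch below works out; the formulas are standard, and only the timings and node counts come from the abstract.

```python
# Hedged sketch: the scaling figures reported above, expressed as
# strong-scaling speedup and parallel efficiency. The formulae are standard;
# only the numbers come from the abstract.
t_base, t_scaled = 8.0, 3.0      # wall-clock hours
n_base, n_scaled = 45, 135       # CPU nodes

speedup = t_base / t_scaled                  # ~2.67x faster
node_ratio = n_scaled / n_base               # 3x more nodes
efficiency = speedup / node_ratio            # ~0.89 -> 89% parallel efficiency
print(f"speedup={speedup:.2f}x, efficiency={efficiency:.0%}")
```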
157 Mineralized Nanoparticles as a Contrast Agent for Ultrasound and Magnetic Resonance Imaging
Authors: Jae Won Lee, Kyung Hyun Min, Hong Jae Lee, Sang Cheon Lee
Abstract:
To date, imaging techniques have attracted much attention in medicine because the detection of diseases at an early stage provides greater opportunities for successful treatment. Consequently, over the past few decades, diverse imaging modalities, including magnetic resonance (MR), positron emission tomography, computed tomography, and ultrasound (US), have been developed and applied widely in the field of clinical diagnosis. However, each of the above-mentioned imaging modalities possesses unique strengths and intrinsic weaknesses, which limit their ability to provide accurate information. Therefore, multimodal imaging systems may be a solution that can provide improved diagnostic performance. Among the current medical imaging modalities, US is a widely available real-time imaging modality. It has many advantages, including safety, low cost, and easy access for patients. However, its low spatial resolution precludes accurate discrimination of diseased regions such as cancer sites. In contrast, MR has no tissue-penetration limit and can provide images possessing exquisite soft-tissue contrast and high spatial resolution. However, it cannot offer real-time images and needs a comparatively long imaging time. The characteristics of these imaging modalities may be considered complementary, and the modalities have been frequently combined for the clinical diagnostic process. Biominerals such as calcium carbonate (CaCO3) and calcium phosphate (CaP) exhibit pH-dependent dissolution behavior. They demonstrate pH-controlled drug release due to the dissolution of the minerals under acidic pH conditions. In particular, the application of this mineralization technique to a US contrast agent has been reported recently. The CaCO3 mineral reacts with acids and decomposes to generate carbon dioxide (CO2) gas in an acidic environment. These gas-generating mineralized nanoparticles generated CO2 bubbles in the acidic environment of the tumor, thereby allowing for strong echogenic US imaging of tumor tissues. On the basis of this previous work, it was hypothesized that the loading of MR contrast agents into the CaCO3 mineralized nanoparticles may be a novel strategy for designing a contrast agent for dual imaging. Herein, CaCO3 mineralized nanoparticles that were capable of generating CO2 bubbles to trigger the release of entrapped MR contrast agents in response to tumoral acidic pH were developed for the purposes of US and MR dual-modality imaging of tumors. Gd2O3 nanoparticles were selected as the MR contrast agent. A key strategy employed in this study was to prepare Gd2O3 nanoparticle-loaded mineralized nanoparticles (Gd2O3-MNPs) using block copolymer-templated CaCO3 mineralization in the presence of calcium cations (Ca2+), carbonate anions (CO32-), and positively charged Gd2O3 nanoparticles. The CaCO3 core was considered suitable because it may effectively shield the Gd2O3 nanoparticles from water molecules in the blood (pH 7.4) before decomposing to generate CO2 gas, triggering the release of the Gd2O3 nanoparticles in tumor tissues (pH 6.4~7.4). The kinetics of CaCO3 dissolution and CO2 generation from the Gd2O3-MNPs were examined as a function of pH, and the pH-dependent in vitro magnetic relaxation and echogenic properties were estimated to demonstrate the potential of the particles for tumor-specific US and MR imaging.
Keywords: calcium carbonate, mineralization, ultrasound imaging, magnetic resonance imaging
156 A Next-Generation Pin-On-Plate Tribometer for Use in Arthroplasty Material Performance Research
Authors: Lewis J. Woollin, Robert I. Davidson, Paul Watson, Philip J. Hyde
Abstract:
Introduction: In-vitro testing of arthroplasty materials is of paramount importance when ensuring that they can withstand the performance requirements encountered in-vivo. One common machine used for in-vitro testing is a pin-on-plate tribometer, an early-stage screening device that generates data on the wear characteristics of arthroplasty bearing materials. These devices test vertically loaded rotating cylindrical pins acting against reciprocating plates, representing the bearing surfaces. In this study, a pin-on-plate machine has been developed that provides several improvements over current technology, thereby progressing arthroplasty bearing research. Historically, pin-on-plate tribometers have been used to investigate the performance of arthroplasty bearing materials under conditions commonly encountered during a standard gait cycle; nominal operating pressures of 2-6 MPa and an operating frequency of 1 Hz are typical. There has been increased interest in using pin-on-plate machines to test more representative in-vivo conditions, due to the drive to test 'beyond compliance', as well as their testing speed and economic advantages over hip simulators. Current pin-on-plate machines do not accommodate the increased performance requirements associated with more extreme kinematic conditions, therefore a next-generation pin-on-plate tribometer has been developed to bridge the gap between current technology and future research requirements. Methodology: The design was driven by several physiologically relevant requirements. Firstly, an increased loading capacity was essential to replicate the peak pressures that occur in the natural hip joint during running and chair-rising, as well as to increase the understanding of wear rates in obese patients. Secondly, the introduction of mid-cycle load variation was of paramount importance, as this allows the loads present in a gait cycle to be approximated and the fatigue properties of materials to be tested. Finally, the rig must be validated against previous-generation pin-on-plate and arthroplasty wear data. Results: The resulting machine is a twelve-station device that is split into three sets of four stations, providing an increased testing capacity compared to most current pin-on-plate tribometers. The loading of the pins is generated using a pneumatic system, which can produce contact pressures of up to 201 MPa on a 3.2 mm² round pin face. This greatly exceeds the contact pressures currently achievable in the literature and opens new research avenues, such as testing rim wear of mal-positioned hip implants. Additionally, the contact pressure of each set can be changed independently of the others, allowing multiple loading conditions to be tested simultaneously. Using pneumatics also allows the applied pressure to be switched ON/OFF mid-cycle, another feature not currently reported elsewhere, which allows for investigation into intermittent loading and material fatigue. The device is currently undergoing a series of validation tests using ultra-high-molecular-weight polyethylene pins and 316L stainless steel plates (polished to Ra < 0.05 µm). The operating pressures will be between 2 and 6 MPa, at 1 Hz, allowing for validation of the machine against results reported previously in the literature. The successful production of this next-generation pin-on-plate tribometer will, following its validation, unlock multiple previously unavailable research avenues.
Keywords: arthroplasty, mechanical design, pin-on-plate, total joint replacement, wear testing
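As a hedged aside, the quoted peak contact pressure implies the axial load per pin via F = P × A; the sketch below works this out from the abstract's own figures (about 643 N per pin).

```python
# Hedged sketch: the pneumatic force implied by the quoted peak contact
# pressure, from F = P * A. The values are taken from the abstract.
peak_pressure = 201e6        # Pa (201 MPa)
pin_face_area = 3.2e-6       # m^2 (3.2 mm^2 round pin face)

force = peak_pressure * pin_face_area
print(f"axial load per pin ~ {force:.0f} N")   # ~643 N
```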
155 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, like bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning in a shipyard and incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. The extant literature provides support for the existence of such learning in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior; for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of the activity duration, as illustrated in the sketch below. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On the one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each successive ship gets compressed. Thus, the material requirement schedule of every ship differs from that of the previous ship. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated, which is solved using an evolutionary algorithm. Its output provides the ordering dates of items and the degree of order batching for all types of items. A sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when, and to what degree, to practice distinct procurement for individual ships.
Keywords: learning curve, materials management, shipbuilding, sister ships
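As a hedged illustration of the activity-level learning referred to above, the classic Wright learning-curve model T_n = T_1 · n^(log₂ r) compresses successive sister-ship durations; the 80% learning rate and 100-day baseline below are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the classic Wright learning-curve model:
# T_n = T_1 * n**b, with b = log2(r) for learning rate r.
import math

def activity_duration(t_first, n, learning_rate=0.80):
    """Duration of the n-th repetition of an activity (Wright's model)."""
    b = math.log2(learning_rate)          # b < 0, so durations shrink
    return t_first * n ** b

# Compressed schedules for four sister ships, first-ship activity = 100 days
for ship in range(1, 5):
    print(ship, round(activity_duration(100, ship), 1))
# -> 1: 100.0, 2: 80.0, 3: 70.2, 4: 64.0 days
```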
154 Participation of Titanium Influencing the Petrological Assemblage of Mafic Dyke: Salem, South India
Authors: Ayoti Banerjee, Meenakshi Banerjee
Abstract:
The study of metamorphic reaction textures is important in contributing to our understanding of the evolution of metamorphic terranes. Where preserved, they provide information on changes in the P-T conditions during the metamorphic history of the rock, and thus allow us to speculate on the P-T-t evolution of the terrane. Mafic dykes have attracted the attention of petrologists because they act as windows to the mantle. This rock represents a mafic dyke of doleritic composition. It is fine- to medium-grained, in which clinopyroxene grains are enclosed by lath-shaped plagioclase grains to form a spectacular ophitic texture. In places, a sub-ophitic texture was also observed. Grains of pyroxene and plagioclase show very little deformation, with plagioclase typically showing deformed lamellae, along with a plagioclase-clinopyroxene-phyric granoblastic fabric within a groundmass of feldspar microphenocrysts and Fe-Ti oxides. Both normal and reverse zoning were noted in the plagioclase laths. The clinopyroxene grains contain exsolved phases such as orthopyroxene, plagioclase, magnetite, and ilmenite along the cleavage traces, and the orthopyroxene lamellae form granules at the periphery of the clinopyroxene grains. A garnet corona also develops preferentially around plagioclase at the contact with clinopyroxene, ilmenite, or magnetite. Tiny quartz and K-feldspar grains showed symplectic intergrowth with garnet in a few places. The product quartz, formed along with the garnet, rims the coronal garnet and the reacting clinopyroxene. Thin amphibole coronas formed along the periphery of deformed plagioclase and clinopyroxene occur as patches over the magmatic minerals. The amphibole coronas cannot be assigned to a late magmatic stage and are interpreted as reactive, being restricted to the contact between clinopyroxene and plagioclase and thus postdating the crystallization of both. Amphibole and garnet do not share a grain boundary anywhere in the rock, thus pointing towards simultaneous crystallization. Olivine is absent. The spectacular myrmekitic growth of orthoclase and quartz rimming the plagioclase is consistent with the potash metasomatic effects that are also found in other rocks of this region. These textural features are consistent with a phase of fluid-induced metamorphism (retrogression). However, the appearance of coronal garnet and amphibole exclusive of each other reflects the participation of Ti as the prime reason: the presence of Ti as a reactant phase is a must for amphibole-forming reactions, whereas this is not so for garnet-forming reactions, although the reactants, plagioclase and clinopyroxene, are the same in both cases. These findings are well validated by petrographic and textural analysis. In order to obtain balanced chemical reactions that explain the formation of amphibole and garnet in the mafic dyke rocks, a matrix operation technique called Singular Value Decomposition (SVD) was adopted, utilizing the measured chemical compositions of the minerals. The computer program C-Space was used for this purpose, with the required compositional matrix. The data fed to C-Space were cation calculations of the oxide percentages obtained from EPMA analysis. The garnet-clinopyroxene geothermometer yielded a temperature of 650 degrees Celsius, and the garnet-clinopyroxene-plagioclase and Al-in-amphibole geobarometers yielded a pressure of roughly 7.5 kbar.
Keywords: corona, dolerite, geothermometer, metasomatism, metamorphic reaction texture, retrogression
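As a hedged sketch of the SVD-based mass balancing attributed to C-Space above: a null-space vector of the composition matrix yields balanced reaction coefficients. The tiny matrix below is invented for illustration and is not the study's EPMA data.

```python
# Hedged sketch: balancing a reaction by SVD. Rows are chemical components,
# columns are phases entering the reaction (all values invented). A vector
# spanning the null space of this matrix gives the balanced stoichiometric
# coefficients; opposite signs mean opposite sides of the reaction.
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 1.0]])

# For this rank-3 matrix the last right-singular vector spans the null space
U, s, Vt = np.linalg.svd(A)
coeffs = Vt[-1]
coeffs = coeffs / np.abs(coeffs)[np.abs(coeffs) > 1e-9].min()
print(np.round(coeffs, 3))   # e.g. [-1, 0, -1, 1]: phase1 + phase3 = phase4
```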
153 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure
Authors: Sara Saboonian, Pierre Filion
Abstract:
The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies the incompatibilities between, and the possibility of amalgamating, the two alternatives in an attempt to develop a comprehensive alternative to the suburban model that advocates density, multi-functionality, and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. These three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized the planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted socio-economic benefits of green infrastructure reduces residents' participation. The results from the research also provide insight into the predominating urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such a blending, dexterous context-specific planning is required. The findings suggest that the following factors influence the amalgamation of intensification and green infrastructure. Initially, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for the implementation programs. This knowledge base should be translated so as to effectively engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks, alongside green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.
Keywords: ecosystem services, green infrastructure, intensification, planning
152 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
Authors: Masood Otarod, Ronald M. Supkowski
Abstract:
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by ¹³CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity. This is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. However, ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross-section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of the axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number. This is expected to be the case based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations
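As a hedged illustration of the reduction described above (the abstract itself does not print the equations, so the notation below is an assumption, not the authors' exact formulation), the factorized steady-state balance for the mean cup-mixing concentration might be written as a second-order linear ODE with constant coefficients, with the cofactors scaling the dispersion and convection terms and a lumped source term for adsorption:

```latex
% Sketch under assumed notation: \bar{C}(z) is the mean radial cup-mixing
% concentration, u the superficial velocity, D the dispersion coefficient,
% and R a lumped adsorption/reaction source term; \Omega_a and \Omega_c are
% the axial dispersion and convection coefficient cofactors named above.
\Omega_a \, D \, \frac{d^{2}\bar{C}}{dz^{2}}
  \;-\; \Omega_c \, u \, \frac{d\bar{C}}{dz}
  \;+\; R\!\left(\bar{C}\right) \;=\; 0
```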
151 Pluripotent Stem Cells as Therapeutic Tools for Limbal Stem Cell Deficiencies and Drug Testing
Authors: Aberdam Edith, Sangari Linda, Petit Isabelle, Aberdam Daniel
Abstract:
Background and Rationale: The transparent, avascularised cornea is essential for normal vision and depends on limbal stem cells (LSC) that reside between the cornea and the conjunctiva. Ocular burns or injuries may destroy the limbus, causing limbal stem cell deficiency (LSCD). The cornea becomes vascularised by invading conjunctival cells and the stroma scars, resulting in corneal opacity and loss of vision. Grafted autologous limbus or cultivated autologous LSC can restore vision, unless both eyes are affected. Alternative cellular sources have been tested in the last decades, including oral mucosa and hair follicle epithelial cells. However, only partial success has been achieved with these cells, since they were not able to uniformly commit to corneal epithelial cells. Human induced pluripotent stem cells (iPSC) display both unlimited growth capacity and the ability to differentiate into any cell type. Our goal was to design a standardized and reproducible protocol to produce transplantable autologous LSC from patients through cell reprogramming technology. Methodology: First, a keratinocyte primary culture was established from a small number of plucked hair follicles of healthy donors. The resulting epithelial cells were reprogrammed into induced pluripotent stem cells (iPSCs) and further differentiated into corneal epithelial cells (CEC), according to a robust protocol that recapitulates the main steps of corneal embryonic development. qRT-PCR analysis and immunofluorescent staining during the course of differentiation confirmed the expression of stage-specific markers of the corneal embryonic lineage. Ectodermal progenitor-specific cytokeratins K8/K18 appear first, followed at day 7 by limbal-specific PAX6, TP63, and cytokeratins K5/K14. At day 15, K3/K12-positive corneal cells are present. To amplify the iPSC-derived LSC (named COiPSC), intact small epithelial colonies were detached and cultivated in limbal cell-specific medium. In these culture conditions, the COiPSC can be frozen and thawed at any passage, while retaining their corneal characteristics for at least eight passages. To evaluate the potential of COiPSC as an alternative ocular toxicity model, COiPSC were treated at passages P0 to P4 with increasing amounts of SDS and benzalkonium. The proliferation and apoptosis of treated cells were compared with those of LSC and of the SV40-immortalized human corneal epithelial cell line (HCE) routinely used by the cosmetics industry. Of note, HCE are more resistant to toxicity than LSC. At P0, COiPSC were systematically more resistant to chemical toxicity than LSC and even than HCE. Remarkably, this behavior changed with passaging, since COiPSC at P2 became identical to LSC and thus closer to physiology than HCE. Comparative transcriptome analysis confirmed that COiPSC from P2 are similar to a mixture of LSC and CEC. Finally, by an organotypic reconstitution assay, we demonstrated the ability of COiPSC to produce a 3D corneal epithelium on a stromal equivalent made of keratocytes. Conclusion: COiPSC could become valuable for two main applications: (1) an alternative robust tool to perform, in a reproducible and physiological manner, toxicity assays for cosmetic products and pharmacological tests of drugs; (2) an alternative autologous source for cornea transplantation for LSCD.
Keywords: limbal stem cell deficiency, iPSC, cornea, limbal stem cells
150 Identifying Breast Cancer Therapy-Related Cardiac Dysfunction in Kazakhstan: Preliminary Findings of a Cohort Study
Authors: Saule Balmagambetova, Zhenisgul Tlegenova, Saule Madinova
Abstract:
Cardiotoxicity associated with anticancer treatment, now defined as cancer therapy-related cardiac dysfunction (CTRCD), accompanies cancer patients and negatively impacts their survivorship. Currently, a cardio-oncological service is being created in Kazakhstan based on the provisions of the European Society of Cardio-oncology (ESC) Guidelines. In the frames of a pilot project, a cohort study on CTRCD conditions was initiated at the Aktobe Cancer center. One hundred twenty-eight newly diagnosed breast cancer patients started on doxorubicin and/or trastuzumab were recruited. Echocardiography with global longitudinal strain (GLS) assessment, biomarkers panel (cardiac troponin (cTnI), brain natriuretic peptide (BNP), myeloperoxidase (MPO), galectin-3 (Gal-3), D-dimers, C-reactive protein (CRP)), and other tests were performed at baseline and every three months. Patients were stratified by the cardiovascular risks according to the ESC recommendations and allocated into the risk groups during the pre-treatment visit. Of them, 10 (7.8%) patients were assigned to the high-risk group, 48 (37.5%) to the medium-risk group, and 70 (54.7%) to the low-risk group, respectively. High-risk patients have been receiving their cardioprotective treatment from the outset. Patients were also divided by treatment - in the anthracycline-based 83 (64.8%), in trastuzumab- only 13 (10.2%), and in the mixed anthracycline/trastuzumab group 32 individuals (25%), respectively. Mild symptomatic CTRCD was revealed and treated in 2 (1.6%) participants, and a mild asymptomatic variant in 26 (20.5%). Mild asymptomatic conditions are defined as left ventricular ejection fraction (LVEF) ≥50% and further relative reduction in GLS by >15% from baseline and/or a further rise in cardiac biomarkers. The listed biomarkers were assessed longitudinally in repeated-measures linear regression models during 12 months of observation. The associations between changes in biomarkers and CTRCD and between changes in biomarkers and LVEF were evaluated. Analysis by risk groups revealed statistically significant differences in baseline LVEF scores (p 0.001), BNP (p 0.0075), and Gal-3 (p 0.0073). Treatment groups found no statistically significant differences at baseline. After 12 months of follow-up, only LVEF values showed a statistically significant difference by risk groups (p 0.0011). When assessing the temporal changes in the studied parameters for all treatment groups, there were statistically significant changes from visit to visit for LVEF (p 0.003); GLS (p 0.0001); BNP (p<0.00001); MPO (p<0.0001); and Gal-3 (p<0.0001). No moderate or strong correlations were found between the biomarkers values and LVEF, between biomarkers and GLS. Between the biomarkers themselves, a moderate, close to strong correlation was established between cTnI and D-dimer (r 0.65, p<0.05). The dose-dependent effect of anthracyclines has been confirmed: the summary dose has a moderate negative impact on GLS values: -r 0.31 for all treatment groups (p<0.05). The present study found myeloperoxidase as a promising biomarker of cardiac dysfunction in the mixed anthracycline/trastuzumab treatment group. The hazard of CTRCD increased by 24% (HR 1.21; 95% CI 1.01;1.73) per doubling in baseline MPO value (p 0.041). Increases in BNP were also associated with CTRCD (HR per doubling, 1.22; 95% CI 1.12;1.69). No cases of chemotherapy discontinuation due to cardiotoxic complications have been recorded. 
Further observations are needed to gain insight into the ability of biomarkers to predict CTRCD onset.Keywords: breast cancer, chemotherapy, cardiotoxicity, Kazakhstan
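The per-doubling hazard ratios above come from survival modeling on log2-transformed biomarkers. Below is a minimal sketch of how such an analysis is commonly set up; the abstract does not name its software, so the lifelines library, data file, and column names are our assumptions, not the study's:

```python
# Sketch: "hazard per doubling" via a Cox model on log2-transformed MPO,
# so exp(coef) is the hazard ratio per doubling of the biomarker.
# The data file and column names are hypothetical, not the study's dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ctrcd_cohort.csv")         # hypothetical: 128 patients
df["log2_mpo"] = np.log2(df["baseline_mpo"]) # doubling = +1 on this scale

cph = CoxPHFitter()
cph.fit(df[["months_to_event", "ctrcd", "log2_mpo"]],
        duration_col="months_to_event", event_col="ctrcd")
cph.print_summary()  # exp(coef) on log2_mpo ~ HR per doubling of MPO
```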
Procedia PDF Downloads 92
149 Benefits of High Power Impulse Magnetron Sputtering (HiPIMS) Method for Preparation of Transparent Indium Gallium Zinc Oxide (IGZO) Thin Films
Authors: Pavel Baroch, Jiri Rezek, Michal Prochazka, Tomas Kozak, Jiri Houska
Abstract:
Transparent semiconducting amorphous IGZO films have attracted great attention due to their excellent electrical properties and possible utilization in thin-film transistors or in photovoltaic applications, as they show 20-50 times higher mobility than amorphous silicon. It is also known that the properties of IGZO films are highly sensitive to process parameters, especially to oxygen partial pressure. In this study, we focused on comparing the properties of transparent semiconducting amorphous indium gallium zinc oxide (IGZO) thin films prepared by conventional sputtering methods with those prepared by the high power impulse magnetron sputtering (HiPIMS) method. Furthermore, we sought to optimize the electrical and optical properties of the IGZO thin films and to investigate the possibility of applying these coatings to thermally sensitive flexible substrates. We employed DC, pulsed DC, mid-frequency sine wave, and HiPIMS power supplies for magnetron deposition. The magnetrons were equipped with sintered ceramic InGaZnO targets. As oxygen vacancies are considered the main source of carriers in IGZO films, it is expected that as oxygen partial pressure increases, the number of oxygen vacancies decreases, which increases film resistivity. Therefore, in all experiments we focused on the effect of oxygen partial pressure, discharge power, and pulsed power mode on the electrical, optical, and mechanical properties of IGZO thin films, and also on the thermal load deposited to the substrate. As expected, we observed a very fast transition between low- and high-resistivity films depending on oxygen partial pressure when conventional sputtering methods/power supplies were used. We therefore established a HiPIMS sputtering system to enlarge the operating window for better control of IGZO thin-film properties. It is shown that with this system we are able to effectively eliminate the steep transition between low- and high-resistivity films exhibited in DC mode, and the electrical resistivity can be effectively controlled over the wide range of 10⁻² to 10⁵ Ω·cm. The highest charge carrier mobility (up to 50 cm²/V·s) was obtained at very low oxygen partial pressures. Utilization of HiPIMS also led to a significant decrease in the thermal load deposited to the substrate, which is beneficial for deposition on thermally sensitive flexible polymer substrates. The deposition rate as a function of discharge power and oxygen partial pressure was also systematically investigated, and the results from optical, electrical, and structural analysis are discussed in detail. The most important result demonstrates almost linear control of IGZO thin-film resistivity with increasing oxygen partial pressure in the HiPIMS mode of sputtering; highly transparent films with low resistivity were prepared already at low pO₂. It was also found that the HiPIMS technique resulted in a significant improvement of surface smoothness in the reactive mode of sputtering (with increasing oxygen partial pressure).Keywords: charge carrier mobility, HiPIMS, IGZO, resistivity
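As a quick plausibility check on the reported extremes (our illustration, not part of the paper), the standard semiconductor relation ρ = 1/(q·n·μ) links the reported mobility and lowest resistivity to an implied carrier concentration:

```python
# Back-of-the-envelope check, not from the paper: resistivity
# rho = 1 / (q * n * mu) implies a carrier concentration from the
# abstract's reported mobility and lowest resistivity.
q = 1.602e-19        # elementary charge, C
mu = 50.0            # carrier mobility, cm^2/(V s)
rho_low = 1e-2       # lowest reported resistivity, Ohm cm

n = 1.0 / (q * mu * rho_low)   # carrier concentration, cm^-3
print(f"implied carrier concentration: {n:.2e} cm^-3")  # ~1.2e19 cm^-3
```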
Procedia PDF Downloads 297
148 SkyCar Rapid Transit System: An Integrated Approach of Modern Transportation Solutions in the New Queen Elizabeth Quay, Perth, Western Australia
Authors: Arfanara Najnin, Michael W. Roach, Jr., Dr. Jianhong Cecilia Xia
Abstract:
The SkyCar Rapid Transit (SRT) system is an innovative intelligent transport system for sustainable urban mobility. It will increase urban network connectivity and decrease urban traffic congestion. The SRT system is designed as a suspended Personal Rapid Transit (PRT) system that travels under a guideway 5 m above the ground. Driverless passenger travel is via pod-cars that hang from slender beams supported by columns that replace existing lamp posts. The beams are set up in a series of interconnecting loops providing non-stop travel from beginning to end, assuring journey times. SRT forward movement is effected by magnetic motors built into the guideway. Passenger stops are either at line level, 5 m above the ground, or at ground level via a spur guideway that curves off the main thoroughfare. The main objective of this paper is to propose an integrated Automated Transit Network (ATN) technology for the future intelligent transport system in the urban built environment. To fulfil this objective, a 4D simulated model in the urban built environment is proposed using the concept of the SRT-ATN system. The methodology for the design, construction, and testing of a Technology Demonstrator (TD) for proof of concept and a Simulator (S) is presented. The completed TD and S will provide an excellent proving ground for the next development stage: the SRT Prototype (PT) and Pilot System (PS). The 4D simulated model in the virtual built environment effectively shows how the SRT-ATN system works. OpenSim software was used to develop the model in a virtual environment, and the scenario was simulated to understand and visualize the proposed SkyCar Rapid Transit Network model. The SkyCar system will be fabricated in a modular form, which is easily transported. The system could be installed in increasingly congested city centers throughout the world, as well as in airports, tourist resorts, race tracks, and other special-purpose venues for the urban community. This paper shares the lessons learnt from the proposed innovation and provides recommendations on how to improve the future transport system in the urban built environment. Safety and security of passengers are prime factors to be considered for this transit system; design requirements to meet safety needs will be part of the research and development phase of the project, and operational safety aspects will also be developed during this period. The vehicles, the track and beam systems, and the stations are the main components that need to be examined in detail for the safety and security of patrons. Measures will also be required to protect columns adjoining intersections from errant vehicles in traffic collisions. SkyCar takes advantage of current disruptive technologies: batteries, sensors, 4G/5G communication, and solar energy technologies, which will continue to reduce costs and make the system more profitable. SkyCar's energy consumption is extremely low compared to other transport systems.Keywords: SkyCar, rapid transit, Intelligent Transport System (ITS), Automated Transit Network (ATN), urban built environment, 4D Visualization, smart city
Procedia PDF Downloads 217
147 Decrease in Olfactory Cortex Volume and Alterations in Caspase Expression in the Olfactory Bulb in the Pathogenesis of Alzheimer’s Disease
Authors: Majed Al Otaibi, Melissa Lessard-Beaudoin, Amel Loudghi, Raphael Chouinard-Watkins, Melanie Plourde, Frederic Calon, C. Alexandre Castellano, Stephen Cunnane, Helene Payette, Pierrette Gaudreau, Denis Gris, Rona K. Graham
Abstract:
Introduction: Alzheimer's disease (AD) is a chronic disorder that affects millions of individuals worldwide. Symptoms include memory dysfunction as well as alterations in attention, planning, language, and overall cognitive function. Olfactory dysfunction is a common symptom of several neurological disorders, including AD. Studying the mechanisms underlying olfactory dysfunction may therefore lead to the discovery of potential biomarkers and/or treatments for neurodegenerative diseases. Objectives: To determine whether olfactory dysfunction predicts future cognitive impairment in the aging population and to characterize the olfactory system in a murine model expressing a genetic risk factor for AD. Methods: For the human study, quantitative olfactory tests (UPSIT and OMT) were administered to 93 subjects (aged 80 to 94 years) from the Quebec Longitudinal Study on Nutrition and Successful Aging (NuAge) cohort who agreed to participate in the ORCA secondary study. The telephone Modified Mini-Mental State examination (t-MMSE) was used to assess cognition, and an olfactory self-report was also collected. In a separate cohort, olfactory cortical volume was calculated from MRI in healthy older adults (n=25) and patients with AD (n=18) using the AAL single-subject atlas, performed with the PNEURO tool (PMOD 3.7). For the murine study, we used Western blotting, RT-PCR, and immunohistochemistry. Results: Human study: Based on the self-report, 81% of the participants claimed not to suffer from any problem with olfaction. However, based on the UPSIT, 94% of those subjects showed poor olfactory performance and various forms of microsmia. Moreover, the results confirm that olfactory function declines with age. We also detected a significant decrease in olfactory cortical volume in AD individuals compared to controls. Murine study: Preliminary data demonstrate a significant decrease in expression levels of the proform of caspase-3 and of the caspase substrate STK3 in the olfactory bulb of mice expressing human APOE4 compared with controls. In addition, there is a significant decrease in the expression of the caspase-9 proform and the caspase-8 active fragment. Analysis of the mature neuron marker NeuN shows decreased expression of both isoforms. The data also suggest that Iba-1 immunostaining is increased in the olfactory bulb of APOE4 mice compared to wild-type mice. Conclusions: The activation of caspase-3 may cause the decreased levels of STK3 through caspase cleavage and may play a role in the inflammation observed. In the clinical study, our results suggest that seniors are unaware of their olfactory status, and therefore measuring olfaction by self-report alone is not sufficient in the elderly. Studying olfactory function and cognitive performance in the aging population will help to discover biomarkers in the early stages of AD.Keywords: Alzheimer's disease, APOE4, cognition, caspase, brain atrophy, neurodegenerative, olfactory dysfunction
Procedia PDF Downloads 258
146 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute
Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández
Abstract:
The ability of an organization to identify and acquire useful information from external sources, assimilate it, and transform and apply it to generate value-added products or services is called absorptive capacity. Absorptive capabilities help firms capture market opportunities and attain a leading position relative to competitors. The Bioprocess Development and Research Unit (UDIBI) is a research and development (R&D) laboratory belonging to the National Polytechnic Institute (IPN), a higher education institute in Mexico. The UDIBI was created to carry out R&D activities for Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competences and of its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and biocompatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as having high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition, UDIBI belongs to a public university, and its operation does not depend on the federal budget but on the income generated by its external technological services. This represents a highly remarkable case in Mexico's public higher education context. This doctoral research (2015-2019) is framed as a case study; its main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of only two third-party laboratories authorized by the sanitary authority in Mexico to conduct biocomparability studies of biopharmaceutical products. The fieldwork is divided into two phases. In the first phase, 15 interviews were conducted with UDIBI personnel, covering management levels, heads of services, project leaders, and laboratory personnel. The interviews followed a questionnaire designed to combine open questions and, to a lesser extent, items answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was prepared (under review), and presentation proposals were submitted to different academic forums. A second stage will consist of an ethnographic study within the organization lasting about three months. Interviews are also planned with external actors around the UDIBI (suppliers, advisors, IPN officials), including contact with an academic specializing in absorptive capacities to comment on this thesis. The initial findings point to two lines: i) institutional, technological, and organizational management elements exist that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and ii) UDIBI has created multiple knowledge-transfer mechanisms that have allowed it to build a large base of prior knowledge.Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge
Procedia PDF Downloads 225
145 Social Licence to Operate Methodology to Secure Commercial, Community and Regulatory Approval for Small and Large Scale Fisheries
Authors: Kelly S. Parkinson, Katherine Y. Teh-White
Abstract:
Futureye has a bespoke social licence to operate (SLO) methodology that has successfully secured community approval and commercial return for fisheries facing regulatory and financial risk. This approach to fisheries management focuses on delivering improved social and environmental outcomes to help the fishing industry take steps towards achieving the United Nations SDGs. An SLO is the community's implicit consent for a business or project to exist. An SLO must be earned and maintained alongside regulatory licences. In current and new operations, it helps operators anticipate and measure community concerns around their operations, leading to more predictable and sensible policy outcomes that do not jeopardise commercial returns. Rising societal expectations and increasing activist sophistication mean the international fishing industry needs to resolve community concerns at each stage of its supply chain. Futureye applied its tested SLO methodology to help Austral Fisheries, which was being targeted by activists concerned about the sustainability of Patagonian Toothfish. Austral was Marine Stewardship Council certified, but pirates were making the overall catch unsustainable. Austral also wanted to be carbon neutral. SLO provides a lens on risk that helps industries and companies act before regulatory and political risk escalates. To do this, Futureye uses a methodology that assesses risk and translates it into a strategy: 1) Audience: understand the drivers of change and the transmission of those drivers across all audience segments. 2) Expectation: understand the level of social norming of changing expectations. 3) Outrage: understand the technical and perceptual aspects of risk and the opportunities to mitigate them. 4) Inter-relationships: understand the political, regulatory, and reputational system in order to identify the levers of change. 5) Strategy: assess whether the strategy will achieve a social licence by bringing internal and external stakeholders on the journey. Futureye's SLO methodology helped Austral understand risks and opportunities and enhance its resilience: Futureye reviewed the issues, assessed outrage and materiality, and mapped SLO threats to the company. Austral was introduced to a new way to manage activism, climate action, and responsible consumption. As a result of Futureye's work, Austral worked closely with Sea Shepherd, which was campaigning against pirates illegally fishing Patagonian Toothfish, as well as with international governments. In 2016, Austral launched the world's first carbon-neutral fish, which won Austral a thirteen percent premium for tender on the open market. In 2017, Austral received the prestigious Banksia Foundation Sustainability Leadership Award for seafood that is sustainable, healthy, and carbon neutral. Austral's position as a leader in sustainable development has opened doors for retailers all over the world. Futureye's SLO methodology can identify the societal, political, and regulatory risks facing fisheries and position them to proactively address the issues and become industry leaders in sustainability.Keywords: carbon neutral, fisheries management, risk communication, social licence to operate, sustainable development
Procedia PDF Downloads 120
144 Formulation of a Submicron Delivery System including a Platelet Lysate to Be Administered in Damaged Skin
Authors: Sergio A. Bernal-Chavez, Sergio Alcalá-Alcalá, Doris A. Cerecedo-Mercado, Adriana Ganem-Rondero
Abstract:
The prevalence of chronic wounds has increased dramatically due to many factors, including smoking, obesity, and chronic diseases such as diabetes, which can slow the healing process and increase the risk of wounds becoming chronic. This situation makes improving chronic wound treatments a necessity and has led the scientific community to focus on improving the effectiveness of current therapies and on developing new treatments. Wound formation is a complex physiological process characterized by an inflammatory stage in which proinflammatory cells create a proteolytic microenvironment during healing, degrading important growth factors and cytokines. This decrease in growth factors and cytokines suggests an interesting strategy for wound healing: administering them externally. The use of nanometric drug delivery systems, such as polymeric nanoparticles (NP), offers an interesting alternative for dermal delivery. A promising strategy is a formulation based on a thermosensitive hydrogel loaded with polymeric nanoparticles that allows the inclusion and application of a platelet lysate (PL) on damaged skin, with the aim of promoting wound healing. In this work, NP were prepared by a double emulsion-solvent evaporation technique, using poly(lactic-co-glycolic acid) (PLGA) as the biodegradable polymer. First, an aqueous solution of PL was emulsified into a PLGA organic solution previously prepared in dichloromethane (DCM). This disperse system (W/O) was then poured into a polyvinyl alcohol (PVA) solution to obtain the double emulsion (W/O/W); finally, the DCM was evaporated under magnetic stirring, resulting in the formation of PL-loaded NP. Once obtained, the NP were characterized by morphology, particle size, Z-potential, encapsulation efficiency (%EE), physical stability, infrared spectroscopy, differential scanning calorimetry (DSC), and in vitro release profile. The optimized nanoparticles were included in a thermosensitive gel formulation of Pluronic® F-127. The gel was prepared by the cold method at 4 °C and 20% polymer concentration. Viscosity, sol-gel phase transition, no-flow solid-gel time at wound temperature, temperature-induced changes in particle size by dynamic light scattering (DLS), occlusive effect, gel degradation, infrared spectrum, and micellar point by DSC were evaluated for all gel formulations. PLGA NP of 267 ± 10.5 nm with a Z-potential of -29.1 ± 1 mV were obtained. TEM micrographs verified the size of the NP and evidenced their spherical shape. The %EE for the system was around 99%. Thermograms and infrared spectra confirmed the presence of PL in the NP. The systems did not show significant changes in the parameters mentioned above during the stability studies. Regarding the gel formulation, the sol-gel transition occurred at 28 °C, with a no-flow solid-gel time of 7 min at 33 °C (a common wound temperature). Calorimetric, DLS, and infrared studies corroborated the physical properties of a thermosensitive gel, such as the micellar point. In conclusion, the thermosensitive gel described in this work contains therapeutic amounts of PL and fulfills the technological requirements for use on damaged skin, with potential application in wound healing and tissue regeneration.Keywords: growth factors, polymeric nanoparticles, thermosensitive hydrogels, tissue regeneration
Procedia PDF Downloads 172
143 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies can now purchase protections for vegetated land due to be clear-cut, or purchase barren land for reforestation. By actively preventing the destruction or decay of plant matter, or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique that enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne "Kato" sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. To combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was built using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications feed directly into our carbon mass model.Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
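The species-classification step described above can be sketched with an off-the-shelf machine learning classifier fitted to labeled pixel spectra. The snippet below uses scikit-learn; the data file and its layout are hypothetical stand-ins for the geo-referenced Teakettle stem-map database:

```python
# Sketch: fit a classifier to labeled pixel spectra (472 bands each).
# File name and array keys are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = np.load("teakettle_pixels.npz")       # hypothetical export
X, y = data["reflectance"], data["species"]  # (n_pixels, 472), labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")  # paper: >95%
```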
Procedia PDF Downloads 129
142 Illness-Related PTSD Among Type 1 Diabetes Patients
Authors: Omer Zvi Shaked, Amir Tirosh
Abstract:
Type 1 diabetes (T1DM) is an incurable chronic illness with no known preventive measures. Excess insulin therapy can lead to hypoglycemia, with neuroglycopenic symptoms such as shakiness, nausea, sweating, irritability, fatigue, excessive thirst or hunger, weakness, seizure, and coma. Severe hypoglycemia (SH) is also considered a highly aversive event, since it puts patients at risk of injury and death, matching the criteria of a traumatic event. With a prevalence of around 20%, SH is a primary medical issue. One of the consequences of SH is an intense fear reaction resembling post-traumatic stress symptoms (PTS), causing many patients to avoid insulin therapy and social activities in order to avoid the possibility of hypoglycemia. As a result, they are at risk of irreversible health deterioration and medical complications. Fear of hypoglycemia (FOH) is therefore a major disturbance for T1DM patients. FOH differs from common post-traumatic stress reactions to other forms of traumatic events, since the threat to life continuously exists in the patient's own body. It is therefore highly probable that orthodox interventions are not sufficient to help patients after SH regain healthy social function and proper medical treatment. Accordingly, this presentation will report the results of a study conducted among T1DM patients after SH. The study was designed in two stages. First, a preliminary qualitative phenomenological study was conducted among ten patients after SH. Analysis revealed that after SH, patients confuse stress symptoms with hypoglycemia symptoms; divide their lives into before and after the event; and report a constant sense of fear, a loss of freedom, a significant decrease in social functioning, a catastrophic thinking pattern, a dichotomous split between the self and the body, internalization of illness identity, a loss of internal locus of control, a damaged self-representation, and severe loneliness from never being understood by others. The second stage was a two-step intervention study among five patients after SH. The first part of the intervention included three months of third-wave CBT. The therapeutic themes were: acceptance of fear and tolerance of stress; cognitive defusion combined with emotional self-regulation; the adoption of an active stance relying on personal values; and self-compassion. The intervention then included one week of practical, real-time 24/7 support by trained medical personnel, alongside gradual exposure to increased insulin therapy in a protected environment. The results of the intervention are a decrease in stress symptoms, increased social functioning, increased well-being, and decreased avoidance of medical treatment. The presentation will discuss the unique emotional state of T1DM patients after SH, and then the effectiveness of the intervention for patients with chronic conditions after a traumatic event. The presentation will make evident the unique situation of illness-related PTSD, and will also demonstrate the need for multi-professional collaboration between social work and medical care for populations with chronic medical conditions. Limitations of the study and recommendations for further research will be discussed.Keywords: type 1 diabetes, chronic illness, post-traumatic stress, illness-related PTSD
Procedia PDF Downloads 177
141 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The primary purpose of this study is to develop a methodological framework for predicting daily subway ridership at the station level and to examine the association between subway ridership and bus demand, incorporating commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all impact subway ridership. However, one should consider not only the endogenous relationship between bus and subway demand but also the characteristics of commercial businesses within a station's sphere of influence and the integrated transit network topology. A statistical approach to estimating station-level subway ridership must therefore address the endogeneity and heteroscedasticity issues that may arise in the prediction model. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and on developing a more precise subway ridership estimate that accounts for these statistical biases. The spatial scope covers the entire city of Seoul, South Korea, and includes 243 stations; the temporal scope is twenty-four hours in one-hour panels. Subway and bus ridership data were collected from Seoul Smart Card records for 2015 and 2016. A three-stage least squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. Independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. As a result, bus ridership and subway ridership were found to be mutually endogenous, with significantly positive coefficients, meaning that each transit mode can increase the other's ridership. In other words, subway and bus have a complementary rather than a competitive relationship. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper cover six types: medical, educational, recreational, financial, food service, and shopping. From the model results, higher rates of medical, financial, shopping, and food service facilities led to increased subway ridership at a station, while recreational and educational facilities were associated with lower ridership. Complex network theory was applied to estimate integrated topology measures covering the entire Seoul transit network and to provide a framework for assessing their impact on subway ridership. The centrality measures were found to be significant, with positive signs indicating that higher centrality leads to more subway ridership at the station level. Out-of-sample accuracy tests showed that the 3SLS model had a lower mean square error than OLS, indicating that this methodological approach yields more accurate subway ridership estimates. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2017R1C1B2010175).Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology
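A minimal sketch of the two-equation 3SLS setup described above, using the linearmodels package; the variable names and instruments are hypothetical placeholders, not the study's actual specification:

```python
# Sketch: two-equation 3SLS capturing subway-bus endogeneity.
# Variable names (subway, bus, medical_rate, centrality, ...) are
# hypothetical placeholders, not the study's actual column names.
import pandas as pd
from linearmodels.system import IV3SLS

df = pd.read_csv("smartcard_panel.csv")  # hypothetical station-hour panel

equations = {
    # Subway ridership: bus ridership is endogenous, instrumented
    # by bus-side exogenous variables.
    "subway": (
        "subway ~ 1 + medical_rate + shopping_rate + centrality"
        " + [bus ~ bus_stop_count + bus_headway]"
    ),
    # Bus ridership: subway ridership is endogenous in turn.
    "bus": (
        "bus ~ 1 + bus_stop_count + bus_headway"
        " + [subway ~ medical_rate + shopping_rate + centrality]"
    ),
}

res = IV3SLS.from_formula(equations, df).fit(cov_type="robust")
print(res)
```

The [bus ~ ...] bracket notation marks bus ridership as endogenous in the subway equation, instrumented by the bus-side exogenous variables, and vice versa, which is how the mutual relationship between the two modes is identified.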
Procedia PDF Downloads 144
140 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during high-volume manufacturing. As components become more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom built for every product and remain unusable for other variants, and a majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all the testing needs of embedded systems; hence the case for an extensible framework integrating a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform in the same domain with identical hardware architecture. This approach has drawbacks: platform-specific libraries cannot be reused, source infrastructure must be maintained for each individual hardware platform, and, most importantly, test cases must be re-developed for every new platform. These limitations create challenges in test environment setup, scalability, and maintenance. A desirable strategy is one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, across phases of testing, and across families of products. To overcome the challenges of the conventional method and deliver these benefits, an embedded test framework (ETF), a solution accelerator, was designed that can be deployed in embedded products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as: (1) Time-to-market: it accelerates board bring-up with prepackaged test suites supporting all necessary peripherals, speeding up the design and development stages (board bring-up, manufacturing, and device drivers); (2) Reusability: framework components are isolated from platform-specific HW initialization and configuration, making test cases quick and simple to adapt across platforms; (3) an effective build-and-test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) Continuous integration: pre-integration with Jenkins enables continuous testing and automated software updates.
Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and substantially improves time to market.Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
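The reusability claim in point (2) can be illustrated with a small sketch of the pattern: reusable test logic isolated behind a platform interface. The class and method names below are hypothetical, not part of the authors' actual framework:

```python
# Sketch of the reusability pattern: the test case never touches
# platform-specific bring-up directly, so it runs on any board port.
from abc import ABC, abstractmethod

class Board(ABC):
    """Platform-specific bring-up hidden behind one small interface."""

    @abstractmethod
    def init_uart(self) -> None:
        ...

    @abstractmethod
    def uart_loopback(self, data: bytes) -> bytes:
        ...

class DemoBoard(Board):
    """Stand-in port; a real port would drive the actual hardware."""

    def init_uart(self) -> None:
        print("configuring UART pins and baud rate")

    def uart_loopback(self, data: bytes) -> bytes:
        return data

def test_uart_loopback(board: Board) -> None:
    """Reusable test case: runs unchanged on any Board implementation."""
    board.init_uart()
    payload = b"\x55\xaa"
    assert board.uart_loopback(payload) == payload
    print("UART loopback: PASS")

test_uart_loopback(DemoBoard())
```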
Procedia PDF Downloads 145
139 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is set to a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are usually preferred over a search for analytic solutions, and the finite difference method is a commonly used numerical technique. Using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and adding the continuity equation to the system makes the number of equations equal to the number of unknowns. Velocity and pressure components thus emerge as the two key parameters, and in solving the differential equation system they must be solved together. However, when pressure and velocity values are solved jointly at the same nodal points of the grid, problems arise; to overcome them, a staggered grid system is the preferred approach. Various algorithms have been developed for computer solutions on staggered grids, of which the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for Newtonian flow, with body (gravitational) forces neglected, for an incompressible, laminar fluid, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure, and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method, with SIMPLE and SIMPLER used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared to each other. It was observed that the hybrid discretization method gave better results over a larger area. Furthermore, despite some disadvantages, the SIMPLER algorithm proved more practical and produced results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the program were converted into graphs and discussed; during plotting, graph quality was improved by adding intermediate values to the results using the Lagrange interpolation formula. The required number of grid cells and nodes was estimated, and, to show that the obtained results are sufficiently accurate, a grid-independence (GCI) analysis was performed for coarse, medium, and fine grids over the solution domain. When the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
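A minimal sketch of the Gauss-Seidel iteration used to solve a discretized Poisson-type equation of the kind that arises for the pressure correction inside SIMPLE/SIMPLER; the grid size, source term, and tolerance below are illustrative assumptions, not the study's settings:

```python
# Sketch: Gauss-Seidel sweeps on the 5-point finite-difference Laplacian,
# p[i,j] = 0.25 * (sum of neighbors - h^2 * f), updated in place.
import numpy as np

n = 21                        # interior nodes per direction (assumed)
h = 1.0 / (n + 1)             # uniform grid spacing
b = np.ones((n, n)) * h**2    # source term times h^2 (placeholder f = 1)
p = np.zeros((n + 2, n + 2))  # correction field, Dirichlet p' = 0 boundary

for sweep in range(10_000):
    max_change = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            new = 0.25 * (p[i-1, j] + p[i+1, j] + p[i, j-1] + p[i, j+1]
                          - b[i-1, j-1])
            max_change = max(max_change, abs(new - p[i, j]))
            p[i, j] = new     # in-place update is what makes it Gauss-Seidel
    if max_change < 1e-8:     # converged
        break

print(f"converged after {sweep} sweeps, p-range "
      f"[{p.min():.4e}, {p.max():.4e}]")
```

Within SIMPLE, one such solve per outer iteration yields the pressure correction, after which velocities and pressure are updated and the momentum equations are swept again until continuity is satisfied.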
Procedia PDF Downloads 391
138 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach
Authors: Dominik Kowitzke
Abstract:
Housing construction has direct and indirect environmental impacts, caused especially by soil sealing and the gray energy consumed in construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and a current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than constructing new housing units. In this context, the phenomenon of so-called over-housing among empty nest households seems worth investigating for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs when no space adjustment takes place at the household lifecycle stage at which children move out from home, so the space formerly created for the offspring is from then on under-utilized. Although in some cases housing space consumption might actually meet households' equilibrium preferences, space-wise adjustments frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the cost of relocating, such as real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed in a way that would allow freed-up space to be rented out long-term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households that never had children. The approach used to estimate the average difference in living space is a linear regression model regressing the response variable, living space, on a binary categorical variable distinguishing the two household types, plus further controls. This difference is taken as the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, households that do move after the children have left home, despite the market frictions impairing relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for German areas with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that, in a perfect market in which empty nest households consumed their equilibrium demand for housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalent units for average constructions of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.Keywords: empty nests, environment, Germany, households, over-housing
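A minimal sketch of the described regression, assuming hypothetical column names for the survey extract; the coefficient on the empty-nest indicator estimates the average under-utilized space per household:

```python
# Sketch: living space regressed on an empty-nest indicator plus controls.
# Column names and the CSV file are hypothetical stand-ins for the
# socio-economic survey data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("households.csv")  # hypothetical survey extract

# empty_nest = 1 for households whose children moved out,
# 0 for comparison households that never had children
model = smf.ols(
    "living_space ~ empty_nest + household_income + age_head + tenure",
    data=df,
).fit(cov_type="HC1")  # heteroscedasticity-robust standard errors

# The empty_nest coefficient is the average extra living space, to be
# extrapolated to the number of empty nests in the population.
print(model.summary())
```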
Procedia PDF Downloads 171
137 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, after it was found that food heats instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material's ability to absorb electromagnetic energy and dissipate it in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material used intensively in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study continuous limestone processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell's equations, as well as the coupling between heat transfer and chemical interfaces. In addition, a controller was implemented to optimize the overall heating efficiency and keep the numerical model stable. This was done by continuously matching the cavity impedance and predicting the energy required by the system, avoiding energy inefficiencies. The controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations of the numerical model. The interaction between the material flow and the electric field Poynting vector was found to influence limestone decomposition, a consequence of limestone's low dielectric properties; the numerical model considered this effect and took advantage of the interaction. The model proved highly unstable when solving non-linear temperature distributions: limestone has a dielectric loss response that increases with temperature and a low thermal conductivity, so it is prone to thermal runaway under electromagnetic heating, as well as to numerical instabilities. Five scenarios were tested, with material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating tube rotation for mixing enhancement proved beneficial and crucial for all loads considered; when a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed at the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process reached a maximum microwave efficiency of 75% at the 80% fill ratio, suggesting that this is an optimal fill of the tube. Electric field peak detachment was observed for the 100% fill ratio, explaining its lower efficiency compared to 80%. Microwave technology has been demonstrated to be an important ally for the decarbonization of the cement industry.Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing
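The separately verified chemical kinetics can be sketched as a first-order calcination model (CaCO₃ → CaO + CO₂) driven by an Arrhenius rate; the parameters and heating ramp below are illustrative assumptions, not the study's fitted values:

```python
# Sketch: first-order calcination kinetics, d(alpha)/dt = k(T)*(1 - alpha),
# with an Arrhenius rate constant and an assumed heating ramp.
# All parameter values are illustrative, not the study's fitted values.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314    # gas constant, J/(mol K)
A = 1.0e7    # assumed pre-exponential factor, 1/s
Ea = 190e3   # assumed activation energy, J/mol

def temperature(t):
    """Assumed heating ramp: 300 K rising towards ~1200 K."""
    return 300.0 + 900.0 * (1.0 - np.exp(-t / 600.0))

def dalpha_dt(t, alpha):
    """First-order conversion rate with Arrhenius temperature dependence."""
    k = A * np.exp(-Ea / (R * temperature(t)))
    return k * (1.0 - alpha)

sol = solve_ivp(dalpha_dt, (0.0, 3600.0), [0.0], max_step=5.0)
print(f"conversion after 1 h: {sol.y[0, -1]:.2%}")
```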
Procedia PDF Downloads 175
136 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance
Authors: Omer Leshem, Michael F. Schober
Abstract:
This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members’ recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members’ cognitive perspective-taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members’ affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances, and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing details of the method or hypotheses, to perform a full-length solo improvised concert that would include an ‘unusual’ piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School’s Glass Box Theater, the home of leading NYC jazz venue ‘The Stone.’ Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: ‘Perform a 3-5-minute improvised piece with the intention of conveying sadness.’ (Sadness was chosen based on previous music cognition lab studies, where solo listeners were less likely to accurately select sadness as the musically expressed emotion from a list of basic emotions, and more likely to misinterpret sadness as tenderness.) Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seats. Participants used their own words to describe the emotion the performer had intended to express, and then selected the intended emotion from a list. They also reported the emotions they had felt while listening, using Izard’s differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis’ interpersonal reactivity index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected ‘sadness’ reported feeling marginally sadder than people who did not. Hypothesis 2 was not supported: audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer’s actual feelings. Results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion
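A minimal sketch of how Hypothesis 1 could be tested, assuming a hypothetical data file and column names (the abstract does not specify the analysis software or the exact model):

```python
# Sketch: does the IRI perspective-taking subscale predict accurate
# identification of the expressed emotion? Data file and column names
# are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("audience_responses.csv")  # 23 participants

# accurate = 1 if 'sadness' was selected from the emotion list, else 0
logit = smf.logit("accurate ~ iri_perspective_taking", data=df).fit()
print(logit.summary())  # a positive coefficient supports Hypothesis 1
```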
Procedia PDF Downloads 132