Search results for: random generation
4130 Development of a Methodology for Surgery Planning and Control: A Management Approach to Handle the Conflict of High Utilization and Low Overtime
Authors: Timo Miebach, Kirsten Hoeper, Carolin Felix
Abstract:
In times of competitive pressure and demographic change, hospitals have to reconsider their strategies as companies. Because operations are one of the main sources of income and, at the same time, one of the primary cost drivers, a process-oriented approach and an efficient use of resources seem to be the right way to secure a consistent market position. Thus, efficient operating room occupancy planning is an important causal variable for the success and continued existence of these institutions. A high utilization of resources is essential. This means a very high, but nevertheless sensible, capacity-oriented utilization of working systems, which can be realized by avoiding downtimes and through thoughtful occupancy planning. This engineering approach should help hospitals reach their break-even point. Firstly, the aim is to establish a strategy point, which can be used for the generation of a planned throughput time. Secondly, operation planning and control should be facilitated and implemented accurately by the generation of time modules. More than 100,000 data records of the Hannover Medical School were analyzed. The data records contain information about the type of operation conducted, the duration of the individual process steps, and all other organization-specific data such as the operating room. Based on the aforementioned database, a generally valid model was developed to define a strategy point that takes the conflict between high capacity utilization and low overtime into account. Furthermore, time modules were generated in this work, which allow simplified and flexible operation planning and control for the operation manager. With the time modules, it is possible to reduce the high average idle time of the operating rooms. Furthermore, the potential is used to minimize the idle time spread. Keywords: capacity, operating room, surgery planning and control, utilization
Procedia PDF Downloads 252
4129 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environment conservations, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g. animal population densities, topography, behavior patterns of the criminals within the area, etc) interact with each other in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC) working in conjunction with the Department of Homeland Security to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies it to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month vice entire seasons, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules by entire seasons.Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
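The imputation-plus-ensemble workflow described in this abstract can be illustrated with a short, hedged sketch (not the authors' code): the data are synthetic, and the hyperparameters are illustrative assumptions, with subsample < 1.0 making the boosting "stochastic".

```python
# Illustrative sketch only: impute missing covariates, then compare a Random
# Forest and a Stochastic Gradient Boosting classifier on synthetic data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # stand-in for terrain/animal-density features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan           # roughly 20% of covariate cells missing

X_imp = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)   # regression-based imputation
X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.25, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "Stochastic Gradient Boosting": GradientBoostingClassifier(n_estimators=300, subsample=0.7, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```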
Procedia PDF Downloads 292
4128 Design and Fabrication of Pulse Detonation Engine Based on Numerical Simulation
Authors: Vishal Shetty, Pranjal Khasnis, Saptarshi Mandal
Abstract:
This work explores the design and fabrication of a fundamental pulse detonation engine (PDE) prototype on the basis of pressure and temperature pulse obtained from numerical simulation of the same. PDE is an advanced propulsion system that utilizes detonation waves for thrust generation. PDEs use a fuel-air mixture ignited to create a supersonic detonation wave, resulting in rapid energy release, high pressures, and high temperatures. The operational cycle includes fuel injection, ignition, detonation, exhaust of combustion products, and purging of the chamber for the next cycle. This work presents details of the core operating principles of a PDE, highlighting its potential advantages over traditional jet engines that rely on continuous combustion. The design focuses on a straightforward, valve-controlled system for fuel and oxidizer injection into a detonation tube. The detonation was initiated using an electronically controlled spark plug or similar high-energy ignition source. Following the detonation, a purge valve was employed to expel the combusted gases and prepare the tube for the next cycle. Key considerations for the design include material selection for the detonation tube to withstand the high temperatures and pressures generated during detonation. Fabrication techniques prioritized readily available machining methods to create a functional prototype. This work detailed the testing procedures for verifying the functionality of the PDE prototype. Emphasis was given to the measurement of thrust generation and capturing of pressure data within the detonation tube. The numerical analysis presents performance evaluation and potential areas for future design optimization.Keywords: pulse detonation engine, ignition, detonation, combustion
Procedia PDF Downloads 20
4127 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we are generating a lot of data, and we need a specific device to store all these data. Generally, we store data on pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of devices. To overcome all these issues, we implemented a cloud space for storing the data, and it provides more security to the data. We can access the data just by using the internet from anywhere in the world. We implemented all of this in Java using the NetBeans IDE. Once a user uploads the data, he does not have any rights to change it. Users' uploaded files are stored in the cloud with the file name set to the system time, and the directory is created with some random words. The cloud accepts the data only if the size of the file is less than 2MB. Keywords: cloud space, AES, FTP, NetBeans IDE
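As a rough illustration of the AES-based protection described above (the paper's implementation is in Java with the NetBeans IDE; this Python sketch with the cryptography package is only an assumed analogue, and the size check and time-based file name mirror the abstract):

```python
# Hedged sketch: AES-GCM encryption of file contents before upload, with a
# stored name derived from the system time. Not the authors' implementation.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MAX_SIZE = 2 * 1024 * 1024                       # the abstract's 2 MB upload limit

def encrypt_for_upload(data: bytes, key: bytes) -> tuple[str, bytes]:
    """Encrypt file contents and derive the stored name from the system time."""
    if len(data) > MAX_SIZE:
        raise ValueError("file exceeds the 2 MB upload limit")
    nonce = os.urandom(12)                       # 96-bit nonce, recommended for GCM
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    stored_name = str(int(time.time() * 1000))   # file name based on system time
    return stored_name, nonce + ciphertext       # keep the nonce for later decryption

key = AESGCM.generate_key(bit_length=256)        # in practice, a managed per-user key
name, blob = encrypt_for_upload(b"example document contents", key)
print(name, len(blob))
```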
Procedia PDF Downloads 206
4126 Burnout and Personality Characteristics of University Students
Authors: Tazvin Ijaz, Rabia Khan
Abstract:
The current study was conducted to identify the predictors of burnout among university students. The sample for the study was collected through simple random sampling. The tools used to measure burnout and personality characteristics were the Indigenous Burnout Scale and the Eysenck Personality Inventory, respectively. Results indicated that neurotic personality traits significantly predict burnout among university students, while extraversion does not lead to burnout. Results also indicated that female students experience more burnout than male students. It was also found that family size and birth order did not affect the level of burnout. Results of the study are discussed to explain the association between etiological factors and burnout within the Pakistani cultural context. Keywords: burnout, students, neuroticism, extraversion
Procedia PDF Downloads 295
4125 Design and Analysis of a Combined Cooling, Heating and Power Plant for Maximum Operational Flexibility
Authors: Salah Hosseini, Hadi Ramezani, Bagher Shahbazi, Hossein Rabiei, Jafar Hooshmand, Hiwa Khaldi
Abstract:
Diversity of energy portfolio and fluctuation of urban energy demand establish the need for more operational flexibility of combined Cooling, Heat, and Power Plants. Currently, the most common way to achieve these specifications is the use of heat storage devices or wet operation of gas turbines. The current work addresses using variable extraction steam turbine in conjugation with a gas turbine inlet cooling system as an alternative way for enhancement of a CCHP cycle operating range. A thermodynamic model is developed and typical apartments building in PARDIS Technology Park (located at Tehran Province) is chosen as a case study. Due to the variable Heat demand and using excess chiller capacity for turbine inlet cooling purpose, the mentioned steam turbine and TIAC system provided an opportunity for flexible operation of the cycle and boosted the independence of the power and heat generation in the CCHP plant. It was found that the ratio of power to the heat of CCHP cycle varies from 12.6 to 2.4 depending on the City heating and cooling demands and ambient condition, which means a good independence between power and heat generation. Furthermore, selection of the TIAC design temperature is done based on the amount of ratio of power gain to TIAC coil surface area, it was found that for current cycle arrangement the TIAC design temperature of 15 C is most economical. All analysis is done based on the real data, gathered from the local weather station of the PARDIS site.Keywords: CCHP plant, GTG, HRSG, STG, TIAC, operational flexibility, power to heat ratio
Procedia PDF Downloads 281
4124 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to McDonald and Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB's Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128s), (5) spatial smoothing with a Gaussian kernel of FWHM 8mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset composed of 37 rows (subjects) and 15 features (mean signal in the network) with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a ranking of the most predictive variables. Thus, we built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis. Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
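A minimal, hedged sketch of the classification-with-feature-selection step described above (not the study's R code; the feature matrix here is synthetic, and RFE is run on a linear SVC because recursive elimination needs coefficient access, so it only approximates the rfe-SVM step):

```python
# Illustrative sketch: rank features with RF Gini importance and SVM-RFE,
# then re-train on the single best feature, as in the abstract's workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))          # placeholder for the 15 mean network signals
y = np.array([0] * 19 + [1] * 18)      # 19 controls, 18 early-MS patients

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_best = int(np.argsort(rf.feature_importances_)[::-1][0])      # Gini-based top feature

rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_best = int(np.argmax(rfe.ranking_ == 1))                     # RFE-selected top feature

for name, idx in [("RF top feature", rf_best), ("SVM-RFE top feature", svm_best)]:
    clf = SVC(kernel="rbf").fit(X_tr[:, [idx]], y_tr)            # retrain on one feature
    print(name, accuracy_score(y_te, clf.predict(X_te[:, [idx]])))
```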
Procedia PDF Downloads 240
4123 All-Optical Gamma-Rays and Positrons Source by Ultra-Intense Laser Irradiating an Al Cone
Authors: T. P. Yu, J. J. Liu, X. L. Zhu, Y. Yin, W. Q. Wang, J. M. Ouyang, F. Q. Shao
Abstract:
A strong electromagnetic field with E > 10¹⁵ V/m can be supplied by an intense laser such as ELI and HiPER in the near future. Exposed to such a strong laser field, laser-matter interaction enters the near quantum electrodynamics (QED) regime, and highly non-linear physics may occur. Recently, the multi-photon Breit-Wheeler (BW) process has attracted increasing attention because it is capable of producing abundant positrons and significantly enhances the positron generation efficiency. Here, we propose an all-optical scheme for bright gamma-ray and dense positron generation by irradiating a 10²² W/cm² laser pulse onto an Al cone filled with near-critical-density plasmas. Two-dimensional (2D) QED particle-in-cell (PIC) simulations show that the radiation damping force becomes large enough to compensate for the Lorentz force in the cone, causing radiation-reaction trapping of a dense electron bunch in the laser field. The trapped electrons oscillate in the laser electric field and emit high-energy gamma photons in two ways: (1) nonlinear Compton scattering due to the oscillation of electrons in the laser fields, and (2) Compton backward scattering resulting from the bunch colliding with the laser reflected by the cone tip. The multi-photon Breit-Wheeler process is thus initiated, and abundant electron-positron pairs are generated with a positron density of ~10²⁷ m⁻³. The scheme is finally demonstrated by full 3D PIC simulations, which indicate a positron flux of up to 10⁹. This compact gamma-ray and positron source may have promising applications in the future. Keywords: BW process, electron-positron pairs, gamma rays emission, ultra-intense laser
Procedia PDF Downloads 260
4122 Feasibility Study of Tidal Current of the Bay of Bengal to Generate Electricity as a Renewable Energy
Authors: Myisha Ahmad, G. M. Jahid Hasan
Abstract:
Electricity is the pinnacle of human civilization. At present, the growing concerns over significant climate change have intensified the importance of using renewable energy technologies for electricity generation. The interest is primarily due to better energy security, smaller environmental impact, and the provision of a sustainable alternative compared to conventional energy sources. Solar power, wind, biomass, tidal power, and wave power are some of the most reliable sources of renewable energy. The ocean holds approximately 2×10³ TW of energy and is the largest renewable energy resource on the planet. Ocean energy takes many forms, encompassing tides, ocean circulation, surface waves, and salinity and thermal gradients. Ocean tides, in particular, involve both potential and kinetic energy. The study is focused on the latter concept, which deals with tidal current energy conversion technologies. Tidal streams or marine currents carry kinetic energy that can be extracted by marine current energy devices and converted into a transmittable form of energy. The principle of technology development is very comparable to that of wind turbines. Conversion of marine tidal resources into substantial electrical power offers immense opportunities to countries endowed with such resources, and this work is aimed at addressing such prospects for Bangladesh. The study analyzed current velocities extracted from numerical model runs at several locations in the Bay of Bengal. Based on current magnitudes, directions, and available technologies, the most suitable locations were adopted and the possible annual generation capacity was estimated. The paper also examines the future prospects of tidal current energy along the Bay of Bengal and establishes a constructive approach that could be adopted in future project developments. Keywords: bay of Bengal, energy potential, renewable energy, tidal current
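For readers unfamiliar with tidal-stream resource screening, the kinetic power available to a marine current turbine is commonly estimated as P = ½ρAv³Cp; the sketch below uses an assumed rotor size, power coefficient, and current speeds, not values from this study.

```python
# Back-of-the-envelope sketch (not from the paper): kinetic power available to a
# tidal current turbine, P = 0.5 * rho * A * v^3 * Cp. All numbers are illustrative.
import math

def tidal_power_kw(rotor_diameter_m: float, current_speed_ms: float,
                   cp: float = 0.35, rho: float = 1025.0) -> float:
    """Power (kW) extracted by a turbine of given rotor diameter at a given speed."""
    area = math.pi * (rotor_diameter_m / 2.0) ** 2       # swept rotor area
    return 0.5 * rho * area * current_speed_ms ** 3 * cp / 1000.0

for v in (1.0, 1.5, 2.0):                                # hypothetical current speeds in m/s
    print(f"{v:.1f} m/s -> {tidal_power_kw(10.0, v):.1f} kW")
```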
Procedia PDF Downloads 375
4121 Manufacture and Characterization of Poly (Tri Methylene Terephthalate) Nanofibers by Electrospinning
Authors: Omid Saligheh
Abstract:
Poly (tri methylene terephthalate) (PTT) nanofibers were prepared by electrospinning, being directly deposited in the form of a random fibers web. The effect of changing processing parameters such as solution concentration and electrospinning voltage on the morphology of the electrospun PTT nanofibers was investigated with scanning electron microscopy (SEM). The electrospun fibers diameter increased with rising concentration and decreased by increasing the electrospinning voltage, thermal and mechanical properties of electrospun fibers were characterized by DSC and tensile testing, respectively.Keywords: poly tri methylene terephthalate, electrospinning, morphology, thermal behavior, mechanical properties
Procedia PDF Downloads 86
4120 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(vi) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the acidity quantification in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role for the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the actual ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment and to allow a high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of the Taylor-Aris dispersion in order to create inside of a 200 μm x 5cm circular cylindrical micro-channel a linear concentration gradient in less than a second. The proposed analytical methodology relies on the actinide complexation using pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500nm- 600nm thanks to the addition of a pH sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro channel. This feature simplifies the fabrication and ease of use of the micro device, as it does not need a complex micro channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the liquid reagents input pressure, its generation can be fully achieved in faster intervals than one second, being a more timely-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform for the first time, a volumetric titration on a chip where the amount of reagents used is fixed to the total volume of the micro channel, avoiding an important waste generation like in other flow-based titration techniques. The associated analytical method is automated and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5M of actinide ion and nitric acid in a concentration range of 0.5M to 3M. In addition to automation, the developed analytical methodology and technique greatly improves the standard off-line oxalate complexation and alkalimetric titration method by reducing a thousand fold the required sample volume, forty times the nuclear waste per analysis as well as the analysis time by eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
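For context, the textbook Taylor-Aris relation for a circular capillary (a background formula, not an equation quoted from the paper) gives the effective axial dispersion coefficient that governs how quickly the linear concentration gradient forms; the radius value assumes the stated 200 μm is the channel diameter.

```latex
% Taylor-Aris effective axial dispersion of a solute carried at mean velocity U
% through a circular capillary of radius a, with molecular diffusivity D_m.
% Taking the stated 200 um as the channel diameter gives a = 100 um (an assumption).
\[
  D_{\mathrm{eff}} \;=\; D_m \;+\; \frac{a^{2}\,U^{2}}{48\,D_m},
  \qquad a \approx 100~\mu\mathrm{m}.
\]
```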
Procedia PDF Downloads 387
4119 High-Rise Building with PV Facade
Authors: Jiří Hirš, Jitka Mohelnikova
Abstract:
A photovoltaic system integrated into a high-rise building façade was studied. The high-rise building is located in the Central Europe region with temperate climate and dominant partly cloudy and overcast sky conditions. The PV façade has been monitored since 2013. The three-year monitoring of the façade energy generation shows that the façade has an important impact on the building energy efficiency and sustainable operation.Keywords: buildings, energy, PV façade, solar radiation
Procedia PDF Downloads 308
4118 Design and Development of Fleet Management System for Multi-Agent Autonomous Surface Vessel
Authors: Zulkifli Zainal Abidin, Ahmad Shahril Mohd Ghani
Abstract:
Agent-based systems technology has been addressed as a new paradigm for conceptualizing, designing, and implementing software systems. Agents are sophisticated systems that act autonomously across open and distributed environments to solve problems. Nevertheless, it is impractical to rely on a single agent to do all the computing processes needed to solve complex problems. An increasing number of applications lately require multiple agents to work together. A multi-agent system (MAS) is a loosely coupled network of agents that interact to solve problems that are beyond the individual capacities or knowledge of each problem solver. However, the network of a MAS still requires a main system to govern or oversee the operation of the agents in order to achieve a unified goal. We developed a fleet management system (FMS) in order to manage the fleet of agents, plan routes for the agents, perform real-time data processing and analysis, and issue sets of general and specific instructions to the agents. This FMS should be able to perform real-time data processing, communicate with the autonomous surface vehicle (ASV) agents, and generate a bathymetric map according to the data received from each ASV unit. The first algorithm is developed to communicate with the ASV via radio communication using standard National Marine Electronics Association (NMEA) protocol sentences. The second algorithm takes care of path planning, formation, and pattern generation and is tested using various sample data. Lastly, the bathymetry map generation algorithm makes use of the data collected by the agents to create a bathymetry map in real time. The outcome of this research is expected to be applicable to various other multi-agent systems. Keywords: autonomous surface vehicle, fleet management system, multi agent system, bathymetry
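A hedged sketch of the kind of NMEA 0183 handling implied by the first algorithm (illustrative only; the example GGA sentence and the field choices are not taken from the project):

```python
# Illustrative sketch: split a standard NMEA 0183 sentence into fields and
# verify its checksum, the basic step in radio communication with the ASVs.
def parse_nmea(sentence: str) -> tuple[list[str], bool]:
    """Return the comma-separated fields and whether the checksum is valid."""
    body, _, checksum = sentence.lstrip("$").partition("*")
    computed = 0
    for ch in body:
        computed ^= ord(ch)                          # NMEA checksum: XOR of all body bytes
    valid = checksum[:2].upper() == format(computed, "02X")
    return body.split(","), valid

fields, ok = parse_nmea("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(ok, fields[0], fields[2] + fields[3], fields[4] + fields[5])   # validity, type, lat, lon
```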
Procedia PDF Downloads 271
4117 Depollution of the Pinheiros River in the City of São Paulo: Mapping the Dynamics of Conflicts and Coalitions between Actors in Two Recent Depollution Projects
Authors: Adalberto Gregorio Back
Abstract:
Historically, the Pinheiros River, which crosses the urban area of the largest South American metropolis, the city of São Paulo, has been the subject of several interventions involving different interests and multiple demands, including the implementation of road axes and industrial occupation along its floodplains; the dilution of sewage; the generation of electricity, with the reversal of its waters to the Billings Dam; and urban drainage. These processes, together with exclusionary and peripheral urban sprawl with high population density in the peripheries, result in difficulties for the collection and treatment of household sewage, which flows into the tributaries and the Pinheiros River itself. In the last 20 years, two separate projects have been undertaken to clean up its waters: the first, between 2001 and 2011, was the flotation system, aimed at cleaning the river in its own channel with equipment installed near the Billings Dam; and, more recently, from 2019 to 2022, the proposal to connect about 74 thousand dwellings to the sewage collection and treatment system, as well as to install treatment plants in the tributaries of the Pinheiros where connection to the system is impracticable, given the irregular occupations. The purpose of this paper is to make a comparative analysis of the dynamics of conflicts, interests, and coalition opportunities among the actors involved in the two aforementioned depollution projects of the Pinheiros River. For this, we analyze documents produced by the state government, as well as documents related to the legal disputes that occurred in the first depollution attempt, involving the sanitation company; the Billings Dam management company, interested in power generation; the city hall; and regular and irregular dwellings not linked to the sanitation system. Keywords: depollution of the Pinheiros River, interests groups, São Paulo, water energy nexus
Procedia PDF Downloads 106
4116 p210 BCR-ABL1 CML with CMML Clones: A Rare Presentation
Authors: Mona Vijayaran, Gurleen Oberoi, Sanjay Mishra
Abstract:
Introduction: p190 BCR-ABL1 in CML is often associated with monocytosis. In the case described here, monocytosis is associated with coexisting p210 BCR-ABL and CMML clones. Mutation analysis using next-generation sequencing (NGS) in our case showed TET2 and SRSF2 mutations. Aims & Objectives: A 75-year-old male was evaluated for monocytosis and thrombocytopenia. CBC showed Hb 11.8 g/dl, TLC 12,060/cmm, monocytes 35%, platelets 39,000/cmm. Materials & Methods: Bone marrow examination showed a hypercellular marrow with the myeloid series showing sequential maturation up to neutrophils, with 30% monocytes. Immunophenotyping by flow cytometry of the bone marrow showed 3% blasts, making chronic myelomonocytic leukemia the likely diagnosis. NGS with a myeloid mutation panel showed TET2 (48.9%) and SRSF2 (32.5%) mutations. This report further supported the diagnosis of CMML. To fulfil the WHO diagnostic criteria for CMML, a BCR-ABL1 RQ-PCR was sent. The report came back positive for the p210 (B3A2, B2A2) major transcript (M-BCR) with an IS of 38.418%. Result: The patient was counselled regarding the unique presentation of two coexisting clones, p210 CML and CMML. After discussion with an international faculty with vast experience in CMML, it was decided to start this elderly gentleman on imatinib 200 mg and not on azacytidine, as ASXL1 was not present; hence, his chances of progressing to AML would be lower, while on the other hand, if the CML were left untreated, progression to blast phase would always remain a possibility. After 3 months on imatinib, his platelet count improved to 80,000-90,000/cmm, but his monocytosis persists. His 3rd-month BCR-ABL1 IS% is 0.004%. Conclusion: A search of the literature found no case reports of p210 CML coexisting with CMML; this might be the first such case report. p190 BCR-ABL1 is often associated with monocytosis. There are a few case reports of p210 BCR-ABL1 positivity in patients with monocytosis, but none with coexisting CMML. This case highlights the need to extensively evaluate patients with monocytosis using a next-generation sequencing myeloid mutation panel and BCR-ABL1 by RT-PCR in order to correctly diagnose and treat them. Keywords: CMML, NGS, p190 CML, Imatinib
Procedia PDF Downloads 77
4115 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through development of virtual-cell models for study of effects of mechanical forces on cells. However, there are challenges with these imaging experiments, which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitation on number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular components morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membrane of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through software pipeline for each image to have a mean pixel intensity value of 0.5. An open source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and corresponding nuclei label, results showed qualitatively good predictions on training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including clear lining and shape of the membrane, clearly showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need of using multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for generation of virtual-cell mechanical models.Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
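The normalization step mentioned above can be sketched as follows (an assumption-laden illustration, not the authors' pipeline: it rescales each slice multiplicatively so its mean pixel intensity is 0.5 and uses the tifffile package with a hypothetical file name):

```python
# Hedged sketch: normalize each slice of a confocal z-stack to a mean of 0.5
# before feeding it to the label-prediction model.
import numpy as np
import tifffile

def normalize_stack(path: str) -> np.ndarray:
    stack = tifffile.imread(path).astype(np.float64)   # expected shape: (20, H, W)
    out = np.empty_like(stack)
    for i, img in enumerate(stack):
        out[i] = img * (0.5 / img.mean())              # force mean intensity to 0.5
    return out

membrane = normalize_stack("membrane_zstack.tif")      # hypothetical input file
print(membrane.mean(axis=(1, 2)))                      # each slice now averages ~0.5
```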
Procedia PDF Downloads 205
4114 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density
Authors: Lalit Kumar, Rashid Al Shidi
Abstract:
Phoenix dactylifera (date palm) is a major crop in many middle-eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest that affects date palm crops. However not all plantations are infested. It is still uncertain why some plantations get infested while others are not. This research investigated whether tree density and the system of planting (random versus systematic) had any relationship with infestation and levels of infestation. Remote Sensing and Geographic Information Systems were used to determine the density of trees (number of trees per unit area) while infestation levels were determined by manual counting of insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. The infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2m spatial resolution, were used. The Local maxima method, which depends on locating of the pixel of highest brightness inside a certain exploration window, was used to identify the trees in the image and delineating individual trees. This information was then used to determine whether the plantation was random or systematic. The ordinary least square regression (OLS) was used to test the global correlation between tree density and infestation level and the Geographic Weight Regression (GWR) was used to find the local spatial relationship. The accuracy of detecting trees varied from 83–99% in agricultural lands with systematic planting patterns to 50–70% in natural forest areas. Results revealed that the density of the trees in most of the villages was higher than the recommended planting number (120–125 trees/hectare). For infestation correlations, the GWR model showed a good positive significant relationship between infestation and tree density in the spring season with R² = 0.60 and medium positive significant relationship in the autumn season, with R² = 0.30. In contrast, the OLS model results showed a weaker positive significant relationship in the spring season with R² = 0.02, p < 0.05 and insignificant relationship in the autumn season with R² = 0.01, p > 0.05. The results showed a positive correlation between infestation and tree density, which suggests the infestation severity increased as the density of date palm trees increased. The correlation result showed that the density alone was responsible for about 60% of the increase in the infestation. This information can be used by the relevant authorities to better control infestations as well as to manage their pesticide spraying programs.Keywords: dubas bug, date palm, tree density, infestation levels
Procedia PDF Downloads 193
4113 Survival Data with Incomplete Missing Categorical Covariates
Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar
Abstract:
Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modeled within the class of generalized linear models, and this method requires the estimation of the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have some missing values that clearly show the data were fully censored. Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull Distribution
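For readers unfamiliar with the EM algorithm by the method of weights, its E-step can be written in the following general form (a standard formulation, not an equation taken from the paper; for a Weibull survival outcome, f denotes the censoring-adjusted likelihood contribution).

```latex
% E-step weights over the possible values z_j of a missing categorical covariate
% for subject i with observed covariates x_i and outcome y_i, and the resulting
% weighted complete-data objective maximized in the M-step.
\[
  w_{ij}^{(t)} \;=\;
  \frac{f\!\left(y_i \mid x_i, z_j;\, \beta^{(t)}\right)\,
        p\!\left(z_j \mid x_i;\, \alpha^{(t)}\right)}
       {\sum_{k} f\!\left(y_i \mid x_i, z_k;\, \beta^{(t)}\right)\,
        p\!\left(z_k \mid x_i;\, \alpha^{(t)}\right)},
  \qquad
  Q\!\left(\theta \mid \theta^{(t)}\right) \;=\; \sum_i \sum_j w_{ij}^{(t)}
  \log\!\left[ f\!\left(y_i \mid x_i, z_j;\, \beta\right)\,
               p\!\left(z_j \mid x_i;\, \alpha\right) \right].
\]
```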
Procedia PDF Downloads 405
4112 Catalytic Activity Study of Fe, Ti Loaded TUD-1
Authors: Supakorn Tantisriyanurak, Hussaya Maneesuwan, Thanyalak Chaisuwan, Sujitra Wongkasemjit
Abstract:
TUD-1 is a siliceous mesoporous material with a three-dimensional amorphous structure of random, interconnecting pores, large pore size, high surface area (400-1000 m²/g), hydrothermal stability, and tunable porosity. However, a significant disadvantage of mesoporous silicates is their scarcity of catalytically active sites. In this work, a series of bimetallic Fe- and Ti-incorporated TUD-1 materials was successfully synthesized by the sol-gel method. The synthesized Fe,Ti-TUD-1 was characterized by various techniques. To study the catalytic activity of Fe,Ti-TUD-1, phenol hydroxylation was selected as a model reaction. The amounts of residual phenol and oxidation products were determined by high-performance liquid chromatography coupled with a UV detector (HPLC-UV). Keywords: iron, phenol hydroxylation, titanium, TUD-1
Procedia PDF Downloads 258
4111 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to abundant higher wind speeds far away from shore, the construction of offshore wind turbines began in the last decades. However, the installation of offshore foundation-based (monopiles) wind turbines in deep waters are often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines is expanded as the basis of the oil and gas industry. For such a floating system, stabilization in harsh conditions is a challenging task. For that, a robust heavy-weight gravity anchor is needed. Transportation of such anchor requires a heavy vessel that increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while towing and filled with water when lowering to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include, formation of vortices, turbulence generation, waves or currents flow breaking and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may allow the local seabed sediment to be carried off and results in Scour (erosion). These are a threat to the structure's stability. In recent decades, rapid developments of research work and the knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have been carried out, and very limited research work on scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate the anchor towing under waves and calm water conditions. Anchor lowering involves the investigation of anchor movements at certain water depths under wave/current. The motions of anchor drift, heave, and pitch is of special focus. The further study involves anchor scour, where the anchor is installed in the seabed; the flow of underwater current around the anchor induces vortices mainly at the front and corners that develop soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question since the flow not only passes around the anchor but also over the structure that forms different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will build in OpenFOAM and other similar software.Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
Procedia PDF Downloads 169
4110 Analysis of Decentralized on Demand Cross Layer in Cognitive Radio Ad Hoc Network
Authors: A. Sri Janani, K. Immanuel Arokia James
Abstract:
In cognitive radio ad hoc networks, different unlicensed users may acquire different available channel sets. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. Cognitive radio automatically detects available channels in the wireless spectrum, which is a form of dynamic spectrum management. Cross-layer optimization is proposed; using it, faraway secondary users can also take part in channel work, which can increase throughput and overcome collisions and time delay. Keywords: cognitive radio, cross layer optimization, CR mesh network, heterogeneous spectrum, mesh topology, random routing optimization technique
Procedia PDF Downloads 359
4109 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
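A hedged Google Earth Engine (Python API) sketch of the workflow summarized above; the asset IDs, band subset, cloud filter, and class property name are assumptions for illustration, not the study's exact settings.

```python
# Illustrative GEE sketch: Sentinel-2 median composite over the study period,
# Random Forest training on point samples, and a training-sample accuracy check.
import ee
ee.Initialize()

catchment = ee.FeatureCollection("users/example/beterou_boundary")   # hypothetical asset
samples = ee.FeatureCollection("users/example/training_points")      # 'landcover' property assumed

composite = (ee.ImageCollection("COPERNICUS/S2_SR")
             .filterBounds(catchment)
             .filterDate("2020-06-01", "2021-03-31")
             .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
             .median()
             .clip(catchment))

bands = ["B2", "B3", "B4", "B8", "B11", "B12"]
training = composite.select(bands).sampleRegions(
    collection=samples, properties=["landcover"], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=training, classProperty="landcover", inputProperties=bands)

classified = composite.select(bands).classify(classifier)

# Confusion matrix on the training sample (a held-out split is used in practice)
matrix = training.classify(classifier).errorMatrix("landcover", "classification")
print(matrix.accuracy().getInfo())
```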
Procedia PDF Downloads 63
4108 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis
Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov
Abstract:
Nowadays, most oil reserves in Russia and all over the world are hard to recover. That is the reason oil companies are searching for new sources for hydrocarbon production. One of the sources might be high-carbon formations with unconventional reservoirs. Bazhenov formation is a huge source rock formation located in West Siberia, which contains unconventional reservoirs on some of the areas. These reservoirs are formed by secondary processes with low predicting ratio. Only one of five wells is drilled through unconventional reservoirs, in others kerogen has low thermal maturity, and they are of low petroliferous. Therefore, there was a request for tertiary methods for in-situ cracking of kerogen and production of oil. Laboratory experiments of Bazhenov formation rock hydrous pyrolysis were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with a different mineral composition (silica concentration from 15 to 90 wt.%, clays – 5-50 wt.%, carbonates – 0-30 wt.%, kerogen – 1-25 wt.%) and thermal maturity (from immature to late oil window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g weight were placed in retort, covered with water and heated to the different temperature varied from 250 to 400°C with the durability of the experiments from several hours to one week. After the experiments, the retort was cooled to room temperature; generated hydrocarbons were extracted with hexane, then separated from the solvent and weighted. The molecular composition of this synthesized oil was then investigated via GC-MS chromatography Characteristics of rock samples after the heating was measured via the Rock-Eval method. It was found, that the amount of synthesized oil and its composition depending on the experimental conditions and composition of rocks. The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and was up to 12 wt.% of initial organic matter content in the rocks. At the higher temperatures and within longer heating time secondary cracking of generated hydrocarbons occurs, the mass of produced oil is lowering, and the composition contains more hydrocarbons that need to be recovered by catalytical processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals work as catalysts. Selection of heating conditions allows producing synthesized oil with specified composition. Kerogen investigations after heating have shown that thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil. This yield is the result of gaseous hydrocarbons formation due to secondary cracking and aromatization and coaling of kerogen. Future investigations will allow the increase in the yield of synthetic oil. The results are in a good agreement with theoretical data on kerogen maturation during oil production. Evaluated trends could be tooled up for in-situ oil generation by shale rocks thermal action.Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil
Procedia PDF Downloads 114
4107 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy, and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there remains a need to develop method(s) that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded to use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of six randomly chosen amino acids in order to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequence can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that can identify multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of bacterial isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build an easily accessible bank of primers that can be used to identify bacteria. Keywords: bacteria chromosome, bacterial identification, sequence, primer generation
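The primer-generation-and-scan idea can be sketched briefly (the original implementation used Visual Basic and SQL Server; this Python version is illustrative only, with a truncated codon table and placeholder sequence data):

```python
# Hedged sketch: reverse-translate a random 6-amino-acid peptide into 18-base
# candidate primers, then count exact hits against chromosome sequences.
import itertools, random

CODONS = {  # one-to-many reverse-translation table (subset shown for brevity)
    "M": ["ATG"], "W": ["TGG"],
    "F": ["TTT", "TTC"], "K": ["AAA", "AAG"], "D": ["GAT", "GAC"],
    "E": ["GAA", "GAG"], "N": ["AAT", "AAC"], "Q": ["CAA", "CAG"],
}

def reverse_translate(peptide: str) -> list[str]:
    """All 18-base primers encoding the given 6-residue peptide."""
    choices = [CODONS[aa] for aa in peptide]
    return ["".join(codons) for codons in itertools.product(*choices)]

def classify_primer(primer: str, chromosomes: dict[str, str]) -> str:
    hits = sum(chrom.count(primer) for chrom in chromosomes.values())
    return "unique" if hits == 1 else f"{hits} hits"

random.seed(0)
peptide = "".join(random.choices(list(CODONS), k=6))     # a random hexapeptide
genomes = {"chrom_A": "ATG" * 2000}                      # placeholder sequence data
for p in reverse_translate(peptide)[:5]:
    print(p, classify_primer(p, genomes))
```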
Procedia PDF Downloads 193
4106 Reclaiming the Lost Jewish Identity of a Second Generation Holocaust Survivor Raised as a Christian: The Role of Art and Art Therapy
Authors: Bambi Ward
Abstract:
Children of Holocaust survivors have been described as inheriting their parents’ trauma as a result of ‘vicarious memory’. The term refers to a process whereby second generation Holocaust survivors subconsciously remember aspects of Holocaust trauma, despite not having directly experienced it. This can occur even when there has been a conspiracy of silence in which survivors chose not to discuss the Holocaust with their children. There are still people born in various parts of the world such as Poland, Hungary, other parts of Europe, USA, Canada and Australia, who have only learnt of their Jewish roots as adults. This discovery may occur during a parent’s deathbed confession, or when an adult child is sorting through the personal belongings of a deceased family member. Some Holocaust survivors chose to deny their Jewish heritage and raise their children as Christians. Reasons for this decision include the trauma experienced during the Holocaust for simply being Jewish, the existence of anti-Semitism, and the desire to protect one’s self and one’s family. Although there has been considerable literature written about the transgenerational impact of trauma on children of Holocaust survivors, there has been little scholarly investigation into the effects of a hidden Jewish identity on these children. This paper presents a case study of an adult child of Hungarian Holocaust survivors who was raised as a Christian. At the age of eight she was told about her family’s Jewish background, but her parents insisted that she keep this a secret, even if asked directly. She honoured their request until she turned forty. By that time she had started the challenging process of reclaiming her Jewish identity. The paper outlines the tension between family loyalty and individual freedom, and discusses the role that art and art therapy played in assisting the subject of the case study to reclaim her Jewish identity and commence writing a memoir about her spiritual journey. The main methodology used in this case study is creative practice-led research. Particular attention is paid to the utilisation of an autoethnographic approach. The autoethnographic tools used include reflective journals of the subject of the case study. These journals reflect on the subject’s collection of autobiographical data relating to her family history, and include memories, drawings, products of art therapy, diaries, letters, photographs, home movies, objects, and oral history interviews with her mother. The case study illustrates how art and art therapy benefitted a second generation Holocaust survivor who was brought up having to suppress her Jewish identity. The process allowed her to express subconscious thoughts and feelings about her identity and free herself from the burden of the long term secret she had been carrying. The process described may also be of assistance to other traumatised people who have been trying to break the silence and who are seeking to express themselves in a positive and healing way.Keywords: art, hidden identity, holocaust, silence
Procedia PDF Downloads 239
4105 Simultaneous Adsorption and Characterization of NOx and SOx Emissions from Power Generation Plant on Sliced Porous Activated Carbon Prepared by Physical Activation
Authors: Muhammad Shoaib, Hassan M. Al-Swaidan
Abstract:
Air pollution has been a major challenge for scientists today due to the release of toxic emissions from various industries such as power plants, desalination plants, industrial processes, and transportation vehicles. Harmful emissions into the air represent an environmental pressure that reflects negatively on human health and productivity, thus leading to a real loss in the national economy. A variety of air pollutants in the form of carbon oxides, hydrocarbons, nitrogen oxides, sulfur oxides, suspended particulate matter, etc., are present in the air due to the combustion of different types of fuels such as crude oil, diesel oil, and natural gas. Among the various pollutants, NOx and SOx emissions are considered highly toxic due to their carcinogenicity and their relation to various health disorders. In the Kingdom of Saudi Arabia, electricity is generated by burning crude oil, diesel, or natural gas in the turbines of electricity stations. Of these three, crude oil is used extensively for electricity generation. The burning of crude oil releases heavy contents of gaseous pollutants such as sulfur oxides (SOx) and nitrogen oxides (NOx), which are ultimately discharged into the environment and are a serious environmental threat. The breakthrough point in the lab studies, using 1 g of sliced activated carbon adsorbent, comes after 20 and 30 minutes for NOx and SOx, respectively, whereas in the case of the PP8 plant the breakthrough point comes in seconds. The saturation point in the lab studies comes after 100 and 120 minutes, and for the actual PP8 plant it comes after 60 and 90 minutes for NOx and SOx adsorption, respectively. Surface characterization of NOx and SOx adsorption on SAC confirms the presence of peaks in the FT-IR spectrum. The CHNS study verifies that the SAC is suitable for NOx and SOx along with some other C- and H-containing compounds coming out of the stack emission stream from the turbines of a power plant. Keywords: activated carbon, flue gases, NOx and SOx adsorption, physical activation, power plants
Procedia PDF Downloads 347
4104 Geochemical Studies of Mud Volcanoes Fluids According to Petroleum Potential of the Lower Kura Depression (Azerbaijan)
Authors: Ayten Bakhtiyar Khasayeva
Abstract:
The Lower Kura depression is part of the South Caspian Basin (SCB), located between the folded regions of the Greater and Lesser Caucasus. The region is characterized by a thick sedimentary cover of 22 km (up to 30 km in the SCB), a high sedimentation rate, and a low geothermal gradient (an average value of about 2 °C/100 m). Quaternary, Pliocene, Miocene, and Oligocene deposits take part in the geological structure. The Miocene and Oligocene deposits are opened by prospecting and exploratory wells in the Kalamaddin and Garabagli areas. There are 25 mud volcanoes within the territory of the Lower Kura depression, which are a unique source of information about the hydrocarbon content at great depths. Based on well data, on the solid erupted products and fluids of the mud volcanoes, and on the geological and thermal characteristics of the region, it was determined that the main phase of hydrocarbon generation (MK1-AK2) corresponds to a wide depth range from 10 to 14 km, which corresponds to the Pliocene-Miocene sediments and to the "oil and gas window" with vitrinite reflectance values of R0 ≈ 0.65-0.85%. The fluids of the mud volcanoes comprise gas and water phases. The gas phase consists mainly of methane (99%), together with heavier hydrocarbons (C2+), CO2, N2, and the inert components He and Ar. The content of C2+ hydrocarbons is increased in the gases of mud volcanoes associated with oil deposits. The carbon isotopic composition of methane for the Lower Kura depression varies from -40 ‰ to -60 ‰. The waters of the mud volcanoes are represented by all four genetic types; however, the most typical waters are of the HCN type. According to the Mg-Li geothermometer, the formation of the mud waters corresponds to a temperature range from 20 °C to 140 °C (PC2). In the solid products ejected by the mud volcanoes, 90 minerals and 30 trace elements were identified. As a result of the geochemical investigations, the thermobaric and geological conditions, and the oil and gas generation zone, the petroleum prospects of the Lower Kura depression are projected at depths greater than 10 km.
Keywords: geology, geochemistry, mud volcanoes, petroleum potential
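The Mg-Li geothermometer mentioned above converts the dissolved Mg/Li ratio of the waters into an equilibration temperature. As an illustration only, the sketch below uses one widely cited calibration (attributed to Kharaka and Mariner, 1989); the numerical constants are quoted from memory and should be checked against the original source, and the water compositions are hypothetical values chosen to span the reported 20-140 °C range.

```python
import math

def mg_li_temperature(mg_mgkg: float, li_mgkg: float) -> float:
    """Estimate formation temperature (in °C) from the Mg-Li geothermometer.

    Calibration assumed here (Kharaka & Mariner, 1989):
        T = 2200 / (log10(sqrt(Mg) / Li) + 5.47) - 273.15,
    with Mg and Li concentrations in mg/kg. Verify constants before use.
    """
    return 2200.0 / (math.log10(math.sqrt(mg_mgkg) / li_mgkg) + 5.47) - 273.15

# Hypothetical mud-volcano water compositions (mg/kg), illustrative only.
for mg, li in [(800.0, 0.3), (100.0, 1.0), (20.0, 5.0)]:
    print(f"Mg = {mg:6.1f}, Li = {li:4.1f}  ->  T ≈ {mg_li_temperature(mg, li):5.0f} °C")
```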
Procedia PDF Downloads 366
4103 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on deep reinforcement learning (RL) algorithms that tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method built on Markov decision processes, which enables random variables to be taken into account for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used for long-term storage. The proposed approach focuses on transferring agent learning so that the near-optimal operating cost can be approximated with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and, in particular, to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an off-line RL algorithm known to be data efficient and able to learn better policies than on-line RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer; the latter is then used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
Keywords: batch-constrained reinforcement learning, control, design, optimal
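The abstract does not give implementation details for BCQ, so the following is a toy, discretised sketch of the batch-constrained idea only: Q-values are updated purely from an offline buffer, and the bootstrapped action is restricted to actions well supported by the empirical behaviour policy. The state and action sizes, the random placeholder transitions, and the threshold tau are all hypothetical; a real microgrid EMS would use continuous states and neural function approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 20, 4        # toy discretised microgrid state/action spaces
gamma, alpha, tau = 0.95, 0.1, 0.3

# Hypothetical offline buffer of (state, action, reward, next_state) transitions,
# standing in for trajectories collected by EMS agents trained on similar microgrids.
buffer = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
           float(rng.normal()), int(rng.integers(n_states))) for _ in range(5000)]

# Empirical behaviour policy: how often each action was taken in each state.
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in buffer:
    counts[s, a] += 1
behaviour = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

Q = np.zeros((n_states, n_actions))

def allowed(s):
    """Batch constraint: keep only actions well supported by the behaviour policy."""
    p = behaviour[s]
    mask = p >= tau * p.max() if p.max() > 0 else np.ones(n_actions, dtype=bool)
    return np.flatnonzero(mask)

for _ in range(20):                # purely offline training, no environment interaction
    for s, a, r, s_next in buffer:
        a_star = max(allowed(s_next), key=lambda a2: Q[s_next, a2])
        Q[s, a] += alpha * (r + gamma * Q[s_next, a_star] - Q[s, a])

policy = [int(max(allowed(s), key=lambda a2: Q[s, a2])) for s in range(n_states)]
print("greedy batch-constrained policy:", policy)
```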
Procedia PDF Downloads 123
4102 Harnessing the Generation of Ferromagnetic and Silver Nanostructures from Tropical Aquatic Microbial Nanofactories
Authors: Patricia Jayshree Jacob, Mas Jaffri Masarudin, Mohd Zobir Hussein, Raha Abdul Rahim
Abstract:
Iron-based ferromagnetic nanoparticles (IONPs) and silver nanostructures (AgNPs) have found wide use in antimicrobial therapy, cell targeting, and environmental applications. As such, the design of well-defined, monodisperse IONPs and AgNPs has become an essential tool in nanotechnology. Fabrication of these nanostructures by conventional methods is not environmentally benign and weighs heavily on energy and outlays. Selected microorganisms possess the innate ability to reduce metallic ions in colloidal aqueous solution to generate nanoparticles. Hence, harnessing this potential is a way forward in constructing microbial nano-factories capable of producing high yields of well-defined IONPs and AgNPs with physicochemical characteristics on par with the best synthetically produced nanostructures. In this paper, we report the isolation and characterization of bacterial strains from the tropical marine and freshwater ecosystems of Malaysia that demonstrated facile and rapid generation of ferromagnetic nanoparticles and silver nanostructures when precursors such as FeCl₃·6H₂O and AgNO₃ were added to the cell-free bacterial lysate in colloidal solution. Characterization of these nanoparticles was carried out using FESEM, UV-Vis spectrophotometry, XRD, DLS, and FTIR. This aerobic bioprocess was carried out at ambient temperature and humidity and has the potential to be developed for environmentally friendly, cost-effective, large-scale production of IONPs. A preliminary bioprocess study of harvesting time, incubation temperature, and pH was also carried out to determine the pertinent abiotic parameters contributing to optimal production of these nanostructures.
Keywords: iron oxide nanoparticles, silver nanoparticles, biosynthesis, aquatic bacteria
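As a side note on the XRD characterisation step, the mean crystallite size of such nanoparticles is commonly estimated from diffraction peak broadening with the Scherrer equation. The abstract reports no diffraction data, so the peak position and width below are purely illustrative placeholders.

```python
import math

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Estimate mean crystallite size (nm) from XRD peak broadening.

    Scherrer equation: D = K * lambda / (beta * cos(theta)),
    where beta is the peak FWHM in radians and theta is the Bragg angle.
    """
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical peak for a magnetite-like IONP sample with Cu K-alpha radiation.
print(f"D ≈ {scherrer_size(two_theta_deg=35.5, fwhm_deg=0.8):.1f} nm")
```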
Procedia PDF Downloads 285
4101 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electrical components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e. networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour of the propagating electric signal at the vertices. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions. The results achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
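To illustrate the RMT comparison described above, the sketch below uses a GOE random matrix as a stand-in for the measured resonances, computes the nearest-neighbour level spacings after a crude unfolding, and compares the empirical histogram with the Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4). The matrix size, the portion of the spectrum retained, and the binning are illustrative choices, not the procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# GOE random matrix as a stand-in for the cable-network resonance problem.
n = 2000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)                 # symmetric, GOE-like normalisation
eigs = np.sort(np.linalg.eigvalsh(H))

# Keep the bulk of the spectrum and rescale so the mean spacing is 1 (crude unfolding).
bulk = eigs[n // 4: 3 * n // 4]
s = np.diff(bulk)
s /= s.mean()

# Empirical spacing histogram versus the GOE Wigner surmise.
hist, edges = np.histogram(s, bins=30, range=(0.0, 3.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
wigner = (np.pi / 2.0) * centres * np.exp(-np.pi * centres**2 / 4.0)

for c, h, w in zip(centres[::5], hist[::5], wigner[::5]):
    print(f"s = {c:.2f}   empirical = {h:.2f}   Wigner = {w:.2f}")
```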
Procedia PDF Downloads 173