Search results for: blind signal separation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3069


309 PYURF and ZED9 Have a Prominent Role in Association with Molecular Pathways in Bortezomib in Myeloma Cells in Acute Myeloid Leukemia

Authors: Atena Sadat Hosseini, Mohammadhossein Habibi

Abstract:

Acute myeloid leukemia (AML) is the most commonly diagnosed leukemia. In older adults, AML carries a dismal outcome. AML originates with a dominant mutation and then accumulates collaborative, transformative mutations leading to myeloid transformation and clinical/biological heterogeneity. Several chemotherapeutic drugs are used for this cancer. These drugs are inherently associated with several side effects, and a more accurate picture of their molecular mechanisms can have a significant impact on selecting better candidate drugs for treatment. In this study, we evaluated bortezomib in myeloma cells using bioinformatics analysis and evaluation of RNA-Seq data, and then investigated the molecular pathways and protein-protein interactions associated with this chemotherapy drug. A total of 658 upregulated genes and 548 downregulated genes were sorted. AUF1 (hnRNP D0) binding and destabilization of mRNA, degradation of GLI2 by the proteasome, the role of GTSE1 in G2/M progression after the G2 checkpoint, and TCF-dependent signaling in response to WNT were demonstrated among the upregulated genes. Besides insulin resistance, AKT phosphorylation of targets in the nucleus, cytosine methylation, the longevity regulating pathway, and signal transduction of the S1P receptor were related to the low-expression genes. With respect to these results, HIST2H2AA3, RP11-96O20.4, ZED9, PRDX1, and DOK2 were candidate elements from the upregulated genes according to node degree and betweenness; on the other side, PYURF, NRSN1, FGF23, UPK3BL, and STAG3 played a prominent role among the downregulated genes. In summary, using in silico analysis, we conducted a precise study of the molecular mechanisms of bortezomib in myeloma cells, so that further evaluation can be undertaken toward molecular cancer therapy. Naturally, additional experimental and clinical procedures are needed for this survey.

Keywords: myeloma cells, acute myeloid leukemia, bioinformatics analysis, bortezomib

Procedia PDF Downloads 71
308 The Voluntary Audit of Semi-Annual Consolidated Financial Statements Decision and Accounting Conservatism

Authors: Shuofen Hsu, Ya-Yi Chao, Chao-Wei Li

Abstract:

This paper investigates the relationship between the voluntary audit (hereafter, VA) of semi-annual consolidated financial statements and accounting conservatism. In general, there are four kinds of auditors' assurance services, namely audit, review, agreed-upon procedure and compliance engagements, based on the degree of assurance. VA work by auditors may not only provide higher audit quality but also serve as an important signal of more reliable information than review work. In Taiwan, listed companies were required to prepare semi-annual consolidated financial statements reviewed by auditors before 2012, but some listed companies voluntarily changed the assurance work from review to audit. Owing to the adoption of International Financial Reporting Standards (IFRS), listed companies have been required to prepare second-quarter consolidated financial statements reviewed by auditors since 2013. This rule changes some of the assurance work from audit to review, and information asymmetry may increase. To control for selection bias, we use a two-stage model to test the relationship between the VA decision and accounting conservatism. Our empirical results indicate that the VA decision and accounting conservatism have a significant positive relationship in family-controlled firms. That is, family-controlled firms are more likely to opt for VA and to prepare more conservative consolidated financial statements to reduce information asymmetry, meaning that there is a complementary effect between VA and accounting conservatism for firms with more information asymmetry. On the contrary, we find that the VA decision and accounting conservatism have a significant negative relationship in firms controlled by professional managers, meaning that there is a substitution effect between VA and accounting conservatism for firms with less information asymmetry. Finally, the accounting conservatism of consolidated financial statements decreased after the adoption of IFRS in Taiwan, meaning that the disclosure and transparency of consolidated financial statements have improved.

Keywords: voluntary audit, accounting conservatism, audit quality, information asymmetry

Procedia PDF Downloads 204
307 Habitat-Specific Divergences in the Gene Repertoire among the Reference Prevotella Genomes of the Human Microbiome

Authors: Vinod Kumar Gupta, Narendrakumar M. Chaudhari, Suchismitha Iskepalli, Chitra Dutta

Abstract:

Background: The community composition of the human microbiome is known to vary at distinct anatomical niches, but little is known about the nature of variations, if any, at the genome/sub-genome level of a specific microbial community across different niches. The present report aims to explore, as a case study, the variations in the gene repertoire of 28 Prevotella reference draft genomes derived from different body sites of humans, as reported earlier by the Human Microbiome Consortium. Results: The analysis reveals the exclusive presence of 11798, 3673, 3348 and 934 gene families and the exclusive absence of 17, 221, 115 and 645 gene families in Prevotella genomes derived from the human oral cavity, gastro-intestinal tract (GIT), urogenital tract (UGT) and skin, respectively. The pan-genome for Prevotella remains “open”. The distribution of various functional COG categories differs appreciably among the habitat-specific genes, within the Prevotella pan-genome and between the GIT-derived Bacteroides and Prevotella. The skin and GIT isolates of Prevotella are enriched in singletons involved in signal transduction mechanisms, while the UGT and oral isolates show higher representation of the defense mechanisms category. No niche-specific variations could be observed in the distribution of KEGG pathways. Conclusion: Prevotella may have developed distinct genetic strategies for adaptation to different anatomical habitats through selective, niche-specific acquisition and elimination of suitable gene families. In addition, individual microorganisms tend to develop their own distinctive adaptive stratagems through large repertoires of singletons. Such in situ, habitat-driven refurbishment of the genetic makeup can impart substantial intra-lineage genome diversity within the microbes without perturbing their general taxonomic heritage.

Keywords: body niche adaptation, human microbiome, pangenome, Prevotella

Procedia PDF Downloads 232
306 Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array

Authors: Jiwon Lee, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Drones were first developed for military use and were employed in World War I, but recently they have been used in a variety of fields. Several companies actively utilize drone technology to strengthen their services; in agriculture, drones are used for crop monitoring and sowing, and other people use drones for hobby activities such as photography. However, as the range of drone use expands rapidly, problems caused by drones, such as improper flying, privacy violations and terrorism, are also increasing. As the need for monitoring and tracking of drones increases, research is progressing accordingly. A drone detection system estimates the position of the drone using the physical phenomena that occur when the drone flies. Drone detection systems being developed utilize many approaches, such as radar, infrared cameras, and acoustic detection systems. Among the various drone detection systems, the acoustic detection system is advantageous in that the microphone array system is smaller, less expensive, and easier to operate than other systems. In this paper, the acoustic signal is acquired using a minimal number of microphones while the drone is flying, and the direction of the drone is estimated. When estimating the direction of arrival (DOA), there is a method of calculating the DOA based on the time difference of arrival (TDOA) and a method of calculating the DOA based on beamforming. The TDOA technique requires fewer microphones than the beamforming technique but is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones than the TDOA technique; however, it is robust to noisy environments and can simultaneously estimate the DOAs of several drones. When estimating the DOA using acoustic signals emitted from the drone, it is impossible to measure the position of the drone, and only the direction can be estimated. To overcome this problem, in this work we show how to estimate the position of drones by arranging multiple microphone arrays. The microphone array used in the experiments consisted of four microphones in a tetrahedral arrangement. We simulated the performance of each DOA algorithm and demonstrated the simulation results through experiments.
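As a minimal illustration of the TDOA approach described above, the Python sketch below estimates a single source's DOA from two microphones by locating the cross-correlation peak; the sampling rate, microphone spacing and test signal are hypothetical, and the far-field two-microphone geometry is a simplifying assumption rather than the tetrahedral-array processing used in the experiments.

```python
import numpy as np

def estimate_doa_tdoa(sig_a, sig_b, fs, mic_distance, c=343.0):
    """Estimate direction of arrival (degrees) from two microphone signals.

    The time difference of arrival is taken from the peak of the
    cross-correlation; the angle follows from far-field geometry.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)        # lag in samples
    tdoa = lag / fs                                 # lag in seconds
    # Far-field assumption: sin(theta) = c * tdoa / d, clipped to [-1, 1]
    sin_theta = np.clip(c * tdoa / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Hypothetical test: a 2 kHz tone arriving at the second microphone 3 samples later
fs, d = 48000, 0.1                                  # 48 kHz sampling, 10 cm spacing
t = np.arange(0, 0.01, 1 / fs)
source = np.sin(2 * np.pi * 2000 * t)
delayed = np.roll(source, 3)
print(estimate_doa_tdoa(source, delayed, fs, d))
```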

Keywords: acoustic sensing, direction of arrival, drone detection, microphone array

Procedia PDF Downloads 132
305 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.
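As a small illustration of the t-SNE feature-extraction step mentioned above (not the S-DCGAN radio map construction itself), the sketch below projects a hypothetical WLAN/LTE RSSI fingerprint matrix into a low-dimensional embedding with scikit-learn; the data dimensions and parameters are placeholders.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical fingerprint matrix: 200 reference points x 30 RSSI readings
# (e.g. WLAN access points and LTE cells), values in dBm.
rng = np.random.default_rng(0)
rssi = rng.uniform(-100, -40, size=(200, 30))

# Project the high-dimensional fingerprints to a low-dimensional embedding,
# keeping the dominant structure while suppressing noisy dimensions.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(rssi)
print(embedding.shape)   # (200, 2)
```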

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 45
304 Reduction of Specific Energy Consumption in Microfiltration of Bacillus velezensis Broth by Air Sparging and Turbulence Promoter

Authors: Jovana Grahovac, Ivana Pajcin, Natasa Lukic, Jelena Dodic, Aleksandar Jokic

Abstract:

To obtain purified biomass to be used in plant pathogen biocontrol or as a soil biofertilizer, it is necessary to eliminate residual broth components at the end of the fermentation process. The main drawback of membrane separation techniques is the permeate flux decline due to membrane fouling. Fouling mitigation measures increase the pressure drop along the membrane channel due to the increased resistance to the flow of the feed suspension, thus increasing the hydraulic power drop. At the same time, these measures lead to an increase in the permeate flux due to the reduced resistance of the filtration cake on the membrane surface. Because of these opposing effects, the energy efficiency of fouling mitigation measures is limited, and justification for their application is provided by information on the reduction of specific energy consumption compared to a case without any measures employed. In this study, the influence of a static mixer (Kenics) and air sparging (two-phase flow) on the reduction of specific energy consumption (ER) was investigated. Cultivation of Bacillus velezensis was carried out in a 3-L bioreactor (Biostat® Aplus) containing 2 L working volume, with two parallel Rushton turbines and without internal baffles. Cultivation was carried out at 28 °C and 150 rpm with an aeration rate of 0.75 vvm for 96 h. The experiments were carried out in a conventional cross-flow microfiltration unit. During the experiments, permeate and retentate were recycled back to the broth vessel to simulate a continuous process. The single-channel ceramic membrane (TAMI Deutschland) used had a nominal pore size of 200 nm, a length of 250 mm and an inner/external diameter of 6/10 mm. The useful membrane channel surface was 4.33×10⁻³ m². Air sparging was provided by pressurized air connected to the feed tube through a three-way valve and a simple T-connector without a diffuser. The different approaches to flux improvement are compared in terms of energy consumption. The reduction of specific energy consumption compared to microfiltration without fouling mitigation is around 49% and 63% for the use of two-phase flow and a static mixer, respectively. In the case of a combination of these two fouling mitigation methods, ER is 60%, i.e., slightly lower compared to the use of the turbulence promoter alone. The reason for this result can be found in the fact that the flux increase is more affected by the presence of the Kenics static mixer, while sparging increases the energy used during microfiltration. When the combined method is compared with the turbulence promoter flux enhancement method, ER is negative (-7%), which can be explained by the increased power consumption for air flow with only a moderate contribution to the flux increase. Another confirmation of this fact can be found by comparing the energy consumption of the combined method with the energy consumption in the case of two-phase flow alone. In this instance, the energy reduction (ER) is 22%, which demonstrates that the turbulence promoter is more efficient than two-phase flow. The antimicrobial activity of Bacillus velezensis biomass against the phytopathogenic isolate Xanthomonas campestris was preserved under the different fouling reduction methods.
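For clarity, the reduction of specific energy consumption (ER) quoted above is simply the relative saving against a reference case; the short sketch below reproduces the comparison with hypothetical specific energy values chosen only to be consistent with the reported percentages, not with measured data.

```python
def energy_reduction(e_reference, e_case):
    """Percentage reduction in specific energy consumption relative to a reference case."""
    return 100 * (e_reference - e_case) / e_reference

# Hypothetical specific energy values (e.g. kWh per m3 of permeate)
e_none, e_two_phase, e_static, e_combined = 1.00, 0.51, 0.37, 0.40

print(energy_reduction(e_none, e_two_phase))      # ~49% (two-phase flow vs. no mitigation)
print(energy_reduction(e_none, e_static))         # ~63% (static mixer vs. no mitigation)
print(energy_reduction(e_none, e_combined))       # ~60% (combined vs. no mitigation)
print(energy_reduction(e_two_phase, e_combined))  # ~22% (combined vs. two-phase flow alone)
```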

Keywords: Bacillus velezensis, microfiltration, static mixer, two-phase flow

Procedia PDF Downloads 98
303 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks

Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee

Abstract:

Anomalies such as leakages and bursts in water, hydraulic or petrochemical pipeline networks have significant implications for economic conditions and the environment. In order to ensure pipeline systems are reliable, they must be efficiently controlled. Wireless sensor networks (WSNs) have become a powerful tool in critical infrastructure monitoring systems for water, oil and gas pipelines. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, the use of acoustic sensors, pressure measurements and abrupt statistical analysis. The issue of leak quantification is to estimate, given some observations about a network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not so reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. The pressure data were collected with acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to determine the leakage point locally introduced at a predetermined position between two consecutive nodes, which caused a substantial pressure difference in the pipeline network. After de-noising the signal from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed.
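A minimal sketch of the cross-correlation idea underlying the correlation-difference approach is given below; the sensor spacing, wave speed and synthetic signals are hypothetical, and the actual method de-noises the node signals and works on a pressure-driven network model rather than this single two-sensor pipe segment.

```python
import numpy as np

def locate_leak(sig_up, sig_down, fs, sensor_spacing, wave_speed):
    """Estimate the leak position between two sensors from the cross-correlation lag.

    Returns the distance from the upstream sensor, assuming the leak noise
    propagates at a constant speed in both directions along the pipe.
    """
    corr = np.correlate(sig_down, sig_up, mode="full")
    lag = np.argmax(corr) - (len(sig_up) - 1)   # samples by which sig_down lags sig_up
    dt = lag / fs
    return 0.5 * (sensor_spacing - wave_speed * dt)

# Hypothetical example: sensors 100 m apart, leak noise travelling at 1200 m/s,
# reaching the downstream sensor 0.02 s later than the upstream one.
fs = 2000
noise = np.random.default_rng(1).normal(size=fs)
upstream = noise
downstream = np.roll(noise, int(0.02 * fs))
print(locate_leak(upstream, downstream, fs, 100.0, 1200.0))   # ~38 m from the upstream sensor
```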

Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)

Procedia PDF Downloads 68
302 Quantification and Evaluation of Tumors Heterogeneity Utilizing Multimodality Imaging

Authors: Ramin Ghasemi Shayan, Morteza Janebifam

Abstract:

Tumors are typically inhomogeneous: regional variations in necrosis, metabolic activity, proliferation and tissue composition are observed. There is expanding evidence that solid tumors may contain subpopulations of cells with different genotypes and phenotypes. These distinct populations of cancer cells can interact in complex ways and may differ in their sensitivity to drugs. Most tumors show biological heterogeneity [1-3], including heterogeneity in genomic subtypes, variations in the expression of growth factors and pro- and anti-angiogenic factors [4-9], and variations within the tumoural microenvironment. These can present as differences between tumors in several individuals. For instance, O6-methylguanine-DNA methyltransferase, a DNA repair enzyme, is silenced by methylation of the gene promoter in half of glioblastomas (GBM), contributing to chemosensitivity and improved survival. From the outset, there has been particular interest in the use of diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI). DWI sensitizes MRI to water diffusion within the extravascular extracellular space (EES) and is influenced by the size and configuration of the cell population. Additionally, DCE-MRI uses dynamic acquisition of images during and after the injection of an intravenous contrast agent; signal changes are then converted to absolute concentrations of contrast, allowing analysis using pharmacokinetic models. The PET modality provides unique biological specificity, permitting dynamic or static imaging of biological molecules labelled with positron-emitting isotopes (for example, 15O, 18F, 11C). The technique is, however, associated with a considerable radiation dose, which limits repeated measurements, particularly when used together with computed tomography (CT). Finally, it is of great interest to measure the regional haemoglobin state, which could be combined with DCE-CT vascular physiology measurements to generate significant insights for understanding tumor hypoxia.

Keywords: heterogeneity, computerized tomography scan, magnetic resonance imaging, PET

Procedia PDF Downloads 121
301 A Technology of Hot Stamping and Welding of Carbon Reinforced Plastic Sheets Using High Electric Resistance

Authors: Tomofumi Kubota, Mitsuhiro Okayasu

Abstract:

In recent years, environmental problems and energy problems, typified by global warming, have been intensifying, and transportation devices are required to reduce the weight of their structural materials from the viewpoint of strengthening fuel efficiency regulations and energy saving. Carbon fiber reinforced plastic (CFRP), used in this research, is attracting attention as a structural material to replace metallic materials. Among CFRPs, thermoplastic CFRP is expected to expand its application range in terms of recyclability and cost. High formability and weldability of unidirectional CFRP sheets were achieved by the proposed hot stamping process, in which the carbon fiber reinforced plastic sheets are heated by a purpose-designed technique. In this case, the CFRP sheets are heated by a high electric voltage applied through the carbon fibers. In addition, the electric voltage was controlled by the area ratio of exposed carbon fiber on the sample surfaces. A lower area of exposed carbon fiber on the sample surface gives a higher electric resistance, leading to a higher sample temperature; in this way, the CFRP sheets can be heated to more than 150 °C. With the sample heated, the stamping and welding operations can be carried out. By changing the sample temperature, the suitable stamping condition can be identified. Moreover, a proper welding connection of the CFRP sheets was proposed. In this study, we propose a fusion bonding technique that uses thermoplasticity, high current flow, and heating caused by electrical resistance. This technology uses the principle of resistance spot welding. In particular, the relationship between the carbon fiber exposure rate and the electrical resistance value, which affects the bonding strength, is investigated. This approach also includes a mechanical connection using rivets to compare the severity of the welds. The change in connecting strength is reflected by the fracture mechanism: low and high connecting strengths correspond to separation of the two CFRP sheets and to fracture inside the CFRP sheet, respectively. In addition to these two fracture modes, micro-cracks in the CFRP are also detected. In this research, from the relationship between the surface carbon fiber ratio and the electrical resistance value, it was found that different carbon fiber ratios had similar electrical resistance values. Therefore, we investigated which of carbon fiber and resin is more influential on bonding strength. As a result, the lower the carbon fiber ratio, the higher the bonding strength, and this is 50% better than the conventional average strength. This can be evaluated by observing whether the fracture mode is interface fracture or internal fracture.

Keywords: CFRP, hot stamping, welding, deformation, mechanical property

Procedia PDF Downloads 103
300 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease is not specific to any geography, gender or socioeconomic group, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain and non-linear parameters. This paper presents HRV analysis of an online dataset for normal sinus rhythm (taken as the healthy subject) and sudden cardiac death (SCD subject) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD) and the mean of R-to-R intervals (mean RR) in the time domain; very low-frequency (VLF), low-frequency (LF), high-frequency (HF) and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is therefore verified, with an accuracy of 95%, that the proposed algorithm can identify the mortality risk of a patient one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
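As a small illustration of the time-domain HRV parameters listed above (mean RR, SDNN and RMSSD), the sketch below computes them from a short, hypothetical RR-interval series; the frequency-domain measures, Poincaré plot and k-NN classification used in the study are not reproduced here.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Compute basic time-domain HRV parameters from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_RR": rr.mean(),                  # mean of R-to-R intervals
        "SDNN": rr.std(ddof=1),                # standard deviation of NN intervals
        "RMSSD": np.sqrt(np.mean(diff ** 2)),  # root mean square of successive differences
    }

# Hypothetical 10-beat RR series (ms)
print(hrv_time_domain([812, 798, 825, 840, 810, 795, 830, 818, 805, 822]))
```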

Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death

Procedia PDF Downloads 321
299 Molecular Characterization of Grain Storage Proteins in Some Hordeum Species

Authors: Manar Makhoul, Buthainah Alsalamah, Salam Lawand, Hassan Azzam

Abstract:

The major storage proteins in the endosperm of 33 cultivated and wild barley genotypes (H. vulgare, H. spontaneum, H. bulbosum, H. murinum, H. marinum) were analyzed to demonstrate the variation in the hordein polypeptides encoded by multigene families in grains. SDS-PAGE revealed 13 and 17 alleles at the Hor1 and Hor2 loci, with frequencies from 0.83 to 14% and 0.56 to 13.41%, respectively, while seven alleles at the Hor3 locus with frequencies from 3.63 to 30.91% were recognized. The phylogenetic analysis indicated the relevance of the polymorphism in hordein patterns as a successful tool for identifying individual genotypes and discriminating the species according to genome type. We also report in this research the complete nucleotide sequences of the B-hordein genes of seven wild and cultivated barley genotypes. A 152 bp upstream sequence of the B-hordein promoter contained a TATA box, CATC box, AAAG motif, N-motif and E-motif. In silico analysis of the B-hordein sequences demonstrated that the coding regions were not interrupted by any intron and included the complete ORF, which varied between 882 and 906 bp and encoded mature proteins of 293-301 residues characterized by high contents of glutamine (29%) and proline (18%). Comparison of the predicted polypeptide sequences with published ones suggested that all S-rich prolamin genes are descended from a common ancestor. The sequence starts at the N-terminus with a signal peptide, followed directly by two domains: a repetitive domain based on repetition of the repeat unit PQQPFPQQ, and a C-terminal domain. It was also found that the positions of the eight cysteine residues were highly conserved in all the B-hordein sequences, but Hordeum bulbosum had an additional unpaired one. The phylogenetic tree of the B-hordein polypeptides separated the genotypes into seven distinct subgroups. In general, the high homology between B-hordeins and LMW glutenin subunits suggests similar bread-making influences for these B-hordeins.

Keywords: hordeum, phylogenetic tree, sequencing, storage protein

Procedia PDF Downloads 236
298 Carbon Footprint Assessment and Application in Urban Planning and Geography

Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim

Abstract:

Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services, but cannot exist without food, energy, and water supply from their surrounding environments. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas increases the energy demand for transporting food and goods between regions. As energy resources are drawn from all over the world, the environmental impact crossing the boundaries of cities is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, there is still a gap between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on the development of technology for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, the carbon footprint has been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible by using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at a regional level in the agricultural field. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adopted to measure household and individual carbon footprints. For example, the first case study collected carbon footprint data from a survey measuring the home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in the capital city of Seoul, Korea. In this study, the individual carbon footprint of residents was calculated by converting the carbon intensities of the respondents' home and travel fossil fuel use to metric tons of carbon dioxide (tCO₂), multiplying by conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, the carbon footprint may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, further concerns about the consumption of food, goods, and services can be included in carbon footprint calculations in the areas of urban planning and geography.
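As an illustration of the conversion step described above, the sketch below multiplies hypothetical household energy-use figures by placeholder emission factors to obtain a footprint in tCO₂; both the consumption figures and the factors are illustrative, not those used in the cited case studies.

```python
# Hypothetical annual household consumption and emission factors (tCO2 per unit).
annual_use = {"electricity_kWh": 3600, "natural_gas_m3": 450, "gasoline_L": 900}
factor_tco2 = {"electricity_kWh": 0.000475, "natural_gas_m3": 0.00203, "gasoline_L": 0.00231}

# Carbon footprint = sum over energy sources of consumption * conversion factor.
footprint = sum(annual_use[k] * factor_tco2[k] for k in annual_use)
print(f"Household carbon footprint: {footprint:.2f} tCO2/year")
```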

Keywords: carbon footprint, case study, geography, urban planning

Procedia PDF Downloads 269
297 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 41
296 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data enable researchers to gain insights into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and would benefit from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow the benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard, but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 262
295 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using contrast-limited adaptive histogram equalization (CLAHE), which limits the contrast enhancement of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian mixture model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to assess the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
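A simplified, single-pass sketch of such a pipeline is shown below using OpenCV: CLAHE pre-processing, a Gaussian-mixture background model and morphological clean-up. The parameter values and input file name are placeholders, and the proposed method's separate enhancement of background and foreground images is not reproduced in this sketch.

```python
import cv2

# Illustrative pipeline: CLAHE, Gaussian-mixture background modelling, morphology.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    equalized = clahe.apply(gray)                             # mitigate illumination changes
    mask = bg_model.apply(equalized)                          # binary foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # erosion then dilation
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
cap.release()
```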

Keywords: video surveillance, background subtraction, contrast limited adaptive histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 235
294 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. Currently, the power tracking performance of the system could be considered unsatisfactory, and there is still significant room for improvement. Hence, a new core power control design is very important to improve the current performance in tracking and regulating reactor power by controlling the movement of control rods to suit the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC was formulated on a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards the real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulating performance between the conventional controller and TFMPC were made using MATLAB and analysed. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 217
293 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis

Authors: Alexander A. Tokmakov

Abstract:

Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems that are currently used. The factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, which makes possible comprehensive bioinformatics analysis and the identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identifying multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and highlight major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of the expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity, and the presence of signal sequences, are mostly related to protein solubility, whereas others, such as protein length, number of disulfide bonds, and content of secondary structure, affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modification. The correlations revealed in this study provide a plethora of important insights into protein folding and the rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
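As a small illustration of correlating an individual sequence property with a categorical expression outcome and checking its statistical significance, the sketch below applies a point-biserial correlation to synthetic data; the feature, values and sample size are hypothetical and do not reproduce the study's datasets.

```python
import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical data: a computed sequence property (e.g. predicted pI) and a
# binary cell-free expression outcome (1 = soluble product obtained).
rng = np.random.default_rng(3)
pi_values = rng.normal(7.0, 1.5, size=300)
expressed = (pi_values + rng.normal(0, 2, size=300) < 7.5).astype(int)

# Point-biserial correlation between the binary outcome and the continuous feature.
r, p = pointbiserialr(expressed, pi_values)
print(f"point-biserial r = {r:.2f}, p = {p:.3g}")
```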

Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins

Procedia PDF Downloads 389
292 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations

Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li

Abstract:

The research follows a systematic approach to optimize the parameters for parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since the surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as a test specimen for the research. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling and a HAAS ST-20 CNC machine is used for turning in this research. Taguchi analysis is used to optimize the surface roughness of the machined parts. The L9 orthogonal array is designed for four controllable factors with three different levels each, resulting in 18 experimental runs. The signal-to-noise (S/N) ratio is calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow and finish cut, and for the milling process are feed rate, spindle speed, step over and coolant flow. The uncontrollable factors are tool geometry for the turning process and tool material for the milling process. Hypothesis testing is conducted to study the significance of the different uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis, and the process capability Cp and the process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10, respectively, for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70, respectively, for the milling process. The surface roughness was improved from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process. The purpose of this study is to efficiently utilize the Taguchi design analysis to improve surface roughness.
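For reference, the nominal-the-best S/N ratio and the process capability indices mentioned above can be computed as in the sketch below; the roughness readings are hypothetical, and the specification limits simply follow the stated 75 ± 15 µin target.

```python
import numpy as np

def sn_nominal_best(y):
    """Nominal-the-best signal-to-noise ratio: 10*log10(mean^2 / variance)."""
    y = np.asarray(y, dtype=float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def process_capability(y, lsl, usl):
    """Cp and Cpk for a sample against lower/upper specification limits."""
    y = np.asarray(y, dtype=float)
    mu, sigma = y.mean(), y.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical roughness readings (µin) for one run; spec is 75 ± 15 µin
readings = [72.4, 76.1, 74.8, 77.3, 73.9]
print(sn_nominal_best(readings))
print(process_capability(readings, lsl=60, usl=90))
```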

Keywords: surface roughness, Taguchi parameter design, CNC turning, CNC milling

Procedia PDF Downloads 131
291 Acoustic Emission for Tool-Chip Interface Monitoring during Orthogonal Cutting

Authors: D. O. Ramadan, R. S. Dwyer-Joyce

Abstract:

The measurement of the interface conditions in a cutting tool contact provides essential information for performance monitoring and control. This interface provides the path for the heat flux into the cutting tool, and the resulting rise in cutting tool temperature drives the mechanisms of tool wear, thus affecting the life of the cutting tool and productivity. This zone is represented by the tool-chip interface. Therefore, understanding and monitoring this interface is considered an important issue in machining. In this paper, an acoustic emission (AE) technique was used to find the correlation between AE parameters and the tool-chip interface. For this reason, a response surface design (RSD) has been used to analyse and optimize the machining parameters. The experimental design was based on the face-centered, central composite design (CCD) in the Minitab environment. According to this design, a series of orthogonal cutting experiments for different cutting conditions were conducted on a Triumph 2500 lathe to study the sensitivity of the acoustic emission (AE) signal to changes in tool-chip contact length. The cutting parameters investigated were cutting speed, depth of cut, and feed, and the experiments were performed on 6082-T6 aluminium tube. All the orthogonal cutting experiments were conducted unlubricated. The tool-chip contact area was investigated using a scanning electron microscope (SEM). The results obtained in this paper indicate that there is a strong dependence of the root mean square (RMS) on the cutting speed, where the RMS increases with increasing cutting speed. A dependence on the tool-chip contact length has also been observed. However, no effect of changing the cutting depth and feed on the RMS was observed. These dependencies have been clarified in terms of the strain and temperature in the primary and secondary shear zones, the tool-chip sticking and sliding phenomenon, and the effect of these mechanical variables on dislocation activity at high strain rates. In conclusion, the acoustic emission technique has the potential to monitor the tool-chip interface in situ during turning and consequently could indicate the approaching end of life of a cutting tool.
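A minimal sketch of extracting the RMS feature from an AE record is given below; the sampling rate, window length and synthetic burst signal are assumptions chosen purely for illustration.

```python
import numpy as np

def ae_rms(record, fs, window_s=0.001):
    """Root-mean-square of an acoustic emission record over consecutive time windows."""
    record = np.asarray(record, dtype=float)
    n = int(window_s * fs)                                  # samples per window
    trimmed = record[: len(record) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

# Hypothetical 1 MHz AE record of 10 ms: noise plus a 150 kHz component
fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)
burst = np.random.default_rng(2).normal(scale=0.2, size=t.size) + 0.5 * np.sin(2 * np.pi * 150e3 * t)
print(ae_rms(burst, fs)[:5])
```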

Keywords: acoustic emission, tool-chip interface, orthogonal cutting, monitoring

Procedia PDF Downloads 464
290 A Microsurgery-Specific End-Effector Equipped with a Bipolar Surgical Tool and Haptic Feedback

Authors: Hamidreza Hoshyarmanesh, Sanju Lama, Garnette R. Sutherland

Abstract:

In tele-operative robotic surgery, an ideal haptic device should be equipped with an intuitive and smooth end-effector that covers the surgeon's hand/wrist degrees of freedom (DOF) and translates the hand joint motions to the end-effector of the remote manipulator with low effort and a high level of comfort. This research introduces the design and development of a microsurgery-specific end-effector, a gimbal mechanism possessing 4 passive and 1 active DOFs, equipped with bipolar forceps and haptic feedback. The robust gimbal structure is composed of three lightweight links/joints, pitch, yaw, and roll, each consisting of a low-friction support and a 2-channel accurate optical position sensor. The third link, which provides the tool roll, was specifically designed to grip the tool prongs and accommodate a low-mass geared actuator together with a miniaturized capstan-rope mechanism. The actuator is able to generate delicate torques, using a threaded cylindrical capstan, to emulate the sense of pinch/coagulation during conventional microsurgery. While the left prong of the tool is fixed to the rolling link, the right prong bears a miniaturized drum sector with a large diameter to expand the force scale and resolution. The drum transmits the actuator output torque to the right prong and generates haptic force feedback at the tool level. The tool is also equipped with a Hall-effect sensor and magnet bar installed vis-à-vis on the inner sides of the two prongs to measure the tooltip distance and provide an analogue signal to the control system. We believe that such a haptic end-effector could significantly increase the accuracy of telerobotic surgery and help avoid high forces that are known to cause bleeding/injury.

Keywords: end-effector, force generation, haptic interface, robotic surgery, surgical tool, tele-operation

Procedia PDF Downloads 98
289 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance for Staphylococcus aureus of Isolated From Raw Red Meat at Butchery and Abattoir House in Mekelle, Northern Ethiopia

Authors: Haftay Abraha Tadesse

Abstract:

Background: Staphylococcus is a genus of worldwide-distributed bacteria associated with infections of different sites in humans and animals. They are among the most important causes of infection associated with the consumption of contaminated food. Objective: The objective of this study was to determine the isolates, antimicrobial susceptibility patterns and public health significance of Staphylococcus aureus in raw meat from butchery and abattoir houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from April to October 2019. Sociodemographic data and public health significance were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility tests were determined by the disc diffusion method. The data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A p-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. The positivity rates of Staphylococcus aureus among the raw meat specimens were 37.6% (n=47) and 32.8% (n=41) for butchery and abattoir houses, respectively. Among the associated risk factors, not using gloves (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were found to be statistically significant and associated with Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates from butchery houses that were checked displayed 100% sensitivity to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%), and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (100%, n=10) showed sensitivity to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multidrug resistance patterns for Staphylococcus aureus were 90% and 100% for butchery and abattoir houses, respectively. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done in developing hand-washing behavior and in the availability of safe water in the butchery houses to reduce the burden of bacterial contamination. The results of the present findings highlight the need to implement protective measures against food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of drug resistance developing towards commonly used antibiotics.

Keywords: abattoir houses, antimicrobial resistance, butchery houses, Ethiopia, Staphylococcus aureus, MDR

Procedia PDF Downloads 39
288 Analyzing Safety Incidents using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise

Authors: Michael Scott Evans, Andrew Smith

Abstract:

The feeling of fatigue at work could potentially have devastating consequences. The aim of this study was to investigate whether the well-established objective indicator of fatigue, the Fatigue Risk Index (FRI) calculator used by the rail industry, is an effective indicator of the number of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University's Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents were recorded from a single British rail franchise between 1st June 2010 and 31st December 2016 in the Safety Management Information System (SMIS). The safety incident types identified in which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For the 901 recorded safety incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) score and Risk Index (RI) score of all train drivers on the day of the safety incident. Only the working rosters of 64.2% (N = 578) of drivers (550 men and 28 women), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. Analysis of all 578 train drivers who were involved in safety incidents revealed that 99.8% (N = 577) of Fatigue Index (FI) scores fell within or below the identified guideline threshold of 45, and 97.9% (N = 566) of Risk Index (RI) scores fell below the 1.6 threshold. These scores represent good practice within the rail industry. The findings seem to indicate that the current objective indicator, i.e. the FRI calculator used in this study by the British rail franchise, was not an effective predictor of these safety incidents, as only 0.2% of FI scores and 2.1% of RI scores exceeded the guideline thresholds in incidents where fatigue could have been a contributing factor. Further research is needed to determine whether there are other contributing factors that could better explain why such a large proportion of train drivers involved in safety incidents, in which fatigue could have been a contributing factor, have such low FI and RI scores.

Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident

Procedia PDF Downloads 162
287 Microfabrication of Three-Dimensional SU-8 Structures Using Positive SPR Photoresist as a Sacrificial Layer for Integration of Microfluidic Components on Biosensors

Authors: Su Yin Chiam, Qing Xin Zhang, Jaehoon Chung

Abstract:

Complementary metal-oxide-semiconductor (CMOS) integrated circuits (ICs) have received increased attention in the biosensor community because CMOS technology provides cost-effective and high-performance signal processing at a mass-production level. In order to supply biological samples and reagents effectively to the sensing elements, there are increasing demands for the seamless integration of microfluidic components on fabricated CMOS wafers by post-processing. Although PDMS microfluidic channels replicated from a separately prepared silicon mold can typically be aligned and bonded onto the CMOS wafers, this remains challenging owing to the inherently limited alignment accuracy (> ± 10 μm) between the two layers. Here we present a new post-processing method to create three-dimensional microfluidic components using two different polarities of photoresist: an epoxy-based negative SU-8 photoresist and a positive SPR220-7 photoresist. The positive photoresist serves as a sacrificial layer, and the negative photoresist is utilized as a structural material to generate three-dimensional structures. Because both photoresists are patterned using standard photolithography, the dimensions of the structures can be effectively controlled; moreover, the alignment accuracy is dramatically improved (< ± 2 μm), so the method can appropriately be adopted as an alternative post-processing approach. To validate the proposed processing method, we applied this technique to build cell-trapping structures. The SU-8 photoresist was mainly used to generate the structures, and the SPR photoresist was used as a sacrificial layer to generate a sub-channel in the SU-8, allowing fluid to pass through. The sub-channel generated by etching the sacrificial layer works as a cell-capturing site. The well-controlled dimensions enabled single-cell capture on each site, and the high-accuracy alignment ensured cells were trapped exactly on the sensing units of the CMOS biosensors.

Keywords: SU-8, microfluidic, MEMS, microfabrication

Procedia PDF Downloads 492
286 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a serious threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a large number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Compared with UAVs, flying birds show similar velocity, similar radar cross-section and, in general, similar characteristics. Given the absence of a single feature able to distinguish UAVs from birds, this paper adopts a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds from UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression…) both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
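
The following is a minimal sketch of genetic-algorithm feature selection wrapped around a binary classifier, of the kind described above; it is not the authors' implementation, and the feature matrix, labels, GA settings and choice of logistic regression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X holds track features (velocity, trajectory and signal-strength statistics),
# y holds labels (0 = bird, 1 = UAV). Both are placeholders here.
rng = np.random.default_rng(0)
n_tracks, n_features = 500, 20
X = rng.normal(size=(n_tracks, n_features))   # placeholder feature matrix
y = rng.integers(0, 2, size=n_tracks)         # placeholder labels

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Simple genetic algorithm: binary chromosomes encode feature subsets.
pop_size, n_generations, mutation_rate = 20, 15, 0.05
population = rng.integers(0, 2, size=(pop_size, n_features)).astype(bool)

for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    # Truncation selection: keep the better half as parents.
    parents = population[np.argsort(scores)[-pop_size // 2:]]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_features) < mutation_rate   # bit-flip mutation
        children.append(child)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected features:", np.flatnonzero(best))
```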

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 188
285 Vortex Generation to Model the Airflow Downstream of a Piezoelectric Fan Array

Authors: Alastair Hales, Xi Jiang, Siming Zhang

Abstract:

Numerical methods are used to generate vortices in a domain. Through considered design, two counter-rotating vortices may interact and effectively drive one another downstream. This phenomenon is comparable to the vortex interaction that occurs in a region immediately downstream from two counter-oscillating piezoelectric (PE) fan blades. PE fans are small blades clamped at one end and driven to oscillate at their first natural frequency by an extremely low-powered actuator. In operation, the high oscillation amplitude and frequency generate sufficient blade tip speed through the surrounding air to create downstream air flow. PE fans are considered an ideal solution for low-power hot-spot cooling in a range of small electronic devices, but a single blade does not typically induce enough air flow to be considered a direct alternative to conventional air movers, such as axial fans. The development of face-to-face PE fan arrays containing multiple blades oscillating in counter-phase to one another is essential for expanding the range of potential PE fan applications regarding the cooling of power electronics. Even in an unoptimised state, these arrays are capable of moving air volumes comparable to axial fans with less than 50% of the power demand. Replicating the airflow generated by face-to-face PE fan arrays without including the actual blades in the model reduces the process’s computational demands and enhances the rate of innovation and development in the field. Vortices are generated at a defined inlet using a time-dependent velocity profile function, which pulsates the inlet air velocity magnitude. This induces vortex generation in the considered domain, and these vortices are shown to separate and propagate downstream in a regular manner. The generation and propagation of a single vortex are compared to an equivalent vortex generated from a PE fan blade in a previous experimental investigation. Vortex separation is found to be accurately replicated in the present numerical model. Additionally, the downstream trajectory of the vortices’ centres varies by just 10.5%, and the size and strength of the vortices differ by a maximum of 10.6%. Through non-dimensionalisation, the numerical method is shown to be valid for PE fan blades with parameters differing from the specific case investigated. The thorough validation methods presented verify that the numerical model may be used to replicate vortex formation from an oscillating PE fan blade. An investigation is carried out to evaluate the effects of varying the distance between two PE fan blades, referred to as the pitch. At small pitch, the vorticity in the domain is maximised, along with turbulence in the near vicinity of the inlet zones. It is proposed that face-to-face PE fan arrays, oscillating in counter-phase, should have a minimal pitch to optimally cool nearby heat sources. On the other hand, downstream airflow is maximised at a larger pitch, where the vortices can fully form and effectively drive one another downstream. As such, this should be implemented when bulk airflow generation is the desired result.
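
As an illustration of a time-dependent inlet velocity profile of the kind mentioned above, the sketch below pulsates an inlet velocity magnitude sinusoidally; the functional form and the values of mean velocity, amplitude and frequency are assumptions, not the profile used in the paper.

```python
import numpy as np

def inlet_velocity(t, u_mean=1.0, amplitude=0.8, freq=60.0):
    """Hypothetical pulsating inlet velocity magnitude [m/s].

    u_mean, amplitude and freq (Hz, of the order of a PE fan's first
    natural frequency) are illustrative values, not those of the paper.
    """
    return u_mean + amplitude * np.sin(2.0 * np.pi * freq * t)

# Sample one pulsation period, e.g. to tabulate a CFD inlet boundary condition.
t = np.linspace(0.0, 1.0 / 60.0, 50)
profile = inlet_velocity(t)
print(f"inlet velocity range: {profile.min():.2f} to {profile.max():.2f} m/s")
```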

Keywords: piezoelectric fans, low energy cooling, vortex formation, computational fluid dynamics

Procedia PDF Downloads 151
284 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth’s ionosphere is located at altitudes from about 70 km to several hundred kilometres above the ground and is composed of ions and electrons, i.e. plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land survey, has been conducted in several countries. However, these stations are equipped with multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation site was Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting of pseudorange data obtained at a known location under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements obtained by the Japanese GNSS observation network GEONET. The performance of the single-frequency GPS receiver measurements was compared with the results of dual-frequency measurements.
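
A minimal sketch of the polynomial least-squares TEC fit under a thin-layer assumption is given below; the polynomial basis, shell height, mapping function and placeholder observations are illustrative assumptions, not the study's actual configuration or data.

```python
import numpy as np

def basis(lat, lon):
    """Bilinear polynomial basis evaluated at an ionospheric pierce point [deg]."""
    return [1.0, lat, lon, lat * lon]

def mapping_function(elev_rad, shell_height_km=350.0, earth_radius_km=6371.0):
    """Slant-to-vertical mapping factor for a thin-layer (single-shell) ionosphere."""
    sin_z = earth_radius_km / (earth_radius_km + shell_height_km) * np.cos(elev_rad)
    return 1.0 / np.sqrt(1.0 - sin_z ** 2)

# Placeholder observations: pierce-point latitude/longitude [deg], satellite
# elevation [rad] and slant TEC [TECU] derived from pseudorange data.
lats = np.array([21.5, 21.8, 22.1, 21.3, 22.0, 21.7])
lons = np.array([95.8, 96.2, 95.9, 96.4, 96.0, 95.6])
elev = np.radians([35.0, 55.0, 70.0, 45.0, 60.0, 30.0])
stec = np.array([32.0, 24.0, 21.0, 27.0, 23.0, 34.0])

vtec_obs = stec / mapping_function(elev)            # project slant TEC to vertical
A = np.array([basis(la, lo) for la, lo in zip(lats, lons)])
coeffs, *_ = np.linalg.lstsq(A, vtec_obs, rcond=None)   # least-squares fit

# Evaluate the fitted vertical TEC map at an arbitrary point near Mandalay.
print("VTEC estimate:", np.dot(basis(21.9, 96.1), coeffs), "TECU")
```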

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 112
283 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control

Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin

Abstract:

Preliminary tests of a space-time control circuit, using a two-level system circuit with a 4-5 cm diameter microstrip, have been designed and performed to demonstrate realistic teleportation. The work begins by calculating the parameters that allow a circuit to use alternating current (AC) at a specified frequency as the input signal. A method is applied that causes electrons to move along the circuit perimeter starting at the speed of light, which is found to be satisfied on the basis of wave-particle duality. It is able to establish a superluminal (faster than light) speed for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from the time signals at the beginning, where the speed of the electrons travels close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to large-scale circuits such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation-system environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, the frequency is increased so that the black holes and white holes cancel each other out into a balanced environment. Therefore, life can safely teleport to the destination. There must therefore be the same system at the origin and at the destination, which could form a network. Moreover, the approach can also be applied to space travel. The designed system will be tested on a small scale using a microstrip circuit system that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.

Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics

Procedia PDF Downloads 51
282 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators

Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín

Abstract:

Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment, for example as a product of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing and combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid mass spectrometry workflows. Air, soil and biota (vegetable matter) samples were collected monthly over one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted by Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ES) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the sensitivity required for detecting all targeted analytes. In total, 10 analytes, including 2,3,7,8-tetrachlorodibenzodioxin (TCDD), were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions. MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen because of its unique potential for high sensitivity and selectivity. The mass spectrometer used was a Sciex QTrap 3200 operating in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1-pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.
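
To illustrate the [M-Cl+O]- fragmentation noted above, the sketch below estimates the corresponding m/z for 2,3,7,8-TCDD from standard monoisotopic masses; it is only an illustration and does not reproduce the MRM transitions actually used in the study.

```python
# Monoisotopic masses (Da) of the relevant isotopes and the electron.
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "Cl": 34.968853}
ELECTRON = 0.000549

def monoisotopic(formula):
    """Sum monoisotopic masses for a composition given as {element: count}."""
    return sum(MASS[el] * n for el, n in formula.items())

# 2,3,7,8-TCDD has the elemental composition C12H4Cl4O2.
tcdd = {"C": 12, "H": 4, "Cl": 4, "O": 2}
m = monoisotopic(tcdd)

# Loss of one Cl, addition of one O, plus the charge-carrying electron.
mz_fragment = m - MASS["Cl"] + MASS["O"] + ELECTRON

print(f"TCDD monoisotopic mass: {m:.4f} Da")        # ~319.90 Da
print(f"[M-Cl+O]- m/z:          {mz_fragment:.4f}") # ~300.92
```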

Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator

Procedia PDF Downloads 183
281 A Gold-Based Nanoformulation for Delivery of the CRISPR/Cas9 Ribonucleoprotein for Genome Editing

Authors: Soultana Konstantinidou, Tiziana Schmidt, Elena Landi, Alessandro De Carli, Giovanni Maltinti, Darius Witt, Alicja Dziadosz, Agnieszka Lindstaedt, Michele Lai, Mauro Pistello, Valentina Cappello, Luciana Dente, Chiara Gabellini, Piotr Barski, Vittoria Raffa

Abstract:

CRISPR/Cas9 technology has gained the interest of researchers in the field of biotechnology for genome editing. Since its discovery as a microbial adaptive immune defense, this system has been widely adopted and is acknowledged to have a variety of applications. However, critical barriers related to safety and delivery persist. Here, we propose a new concept of genome engineering based on a nano-formulation of Cas9. The Cas9 enzyme was conjugated to a gold nanoparticle (AuNP-Cas9). The AuNP-Cas9 maintained its cleavage efficiency in vitro to the same extent as the ribonucleoprotein containing the non-conjugated Cas9 enzyme, and showed high gene-editing efficiency in vivo in zebrafish embryos. Since CRISPR/Cas9 technology is extensively used in cancer research, melanoma was selected as a validation target. Cell studies were performed in A375 human melanoma cells. The particles per se had no impact on cell metabolism and proliferation. Intriguingly, the AuNP-Cas9 internalized spontaneously into cells and localized as single particles in the cytoplasm and organelles. More importantly, the AuNP-Cas9 showed a high nuclear localization signal. The AuNP-Cas9, by overcoming the delivery difficulties of Cas9, could be used in cell biology and localization studies. Taking advantage of the plasmonic properties of gold nanoparticles, this technology could potentially serve as a bio-tool for combining gene editing and photothermal therapy in cancer cells. Further work will focus on the intracellular interactions of the nano-formulation and the characterization of its optical properties.

Keywords: CRISPR/Cas9, gene editing, gold nanoparticles, nanotechnology

Procedia PDF Downloads 77
280 The Effect of Common Daily Schedule on the Human Circadian Rhythms during the Polar Day on Svalbard: Field Study

Authors: Kamila Weissova, Jitka Skrabalova, Katerina Skalova, Jana Koprivova, Zdenka Bendova

Abstract:

Any Arctic visitor has to deal with extreme conditions, including constant light during the summer season or constant darkness during the winter. The light/dark cycle is the most powerful synchronizing signal for the biological clock, and the absence of a daily dark period during the polar day can significantly alter the functional state of the internal clock. However, the inner clock can also be synchronized by other zeitgebers such as physical activity, food intake or social interactions. Here, we investigated the effect of the polar day on the circadian clocks of 10 researchers attending a polar base station in the Svalbard region during July. The data obtained on Svalbard were compared with data obtained before the researchers left for the expedition (in the Czech Republic). To determine the state of the circadian clock we used wrist actigraphy accompanied by sleep diaries, together with saliva and buccal mucosa samples, both collected every 4 hours over a 24-hour interval to measure melatonin by radioimmunoassay and clock gene (PER1, BMAL1, NR1D1, DBP) mRNA levels by RT-qPCR. The clock gene expression was analyzed using cosinor analysis. From our results, it is apparent that the constant sunlight delayed melatonin onset and postponed physical activity correspondingly. Nevertheless, the clock gene expression displayed a higher amplitude on Svalbard than that detected in the Czech Republic. These results suggest that the common daily schedule of the Svalbard expedition can strengthen circadian rhythms in an environment lacking a light/dark cycle. In conclusion, constant sunlight delays melatonin onset, but melatonin still maintains its rhythmic secretion. The effect of constant sunlight on the circadian clock can be minimized by a common, scheduled daily activity.
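
For reference, a minimal sketch of single-component cosinor analysis, as commonly applied to clock-gene expression rhythms, is given below; the sampling times, expression values and 24-hour period are placeholders, not the study's data or software.

```python
import numpy as np

# Fit y(t) = MESOR + A*cos(2*pi*t/24 + phi) by rewriting it as a linear
# model in cosine and sine terms and solving by least squares.
t = np.array([0, 4, 8, 12, 16, 20], dtype=float)    # hours after first sample
expr = np.array([1.0, 1.6, 2.1, 1.8, 1.1, 0.7])     # relative mRNA level (placeholder)

period = 24.0
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / period),
                     np.sin(2 * np.pi * t / period)])
(mesor, beta, gamma), *_ = np.linalg.lstsq(X, expr, rcond=None)

amplitude = np.hypot(beta, gamma)        # rhythm amplitude
acrophase = np.arctan2(-gamma, beta)     # phase of the fitted cosine, in radians

print(f"MESOR {mesor:.2f}, amplitude {amplitude:.2f}, "
      f"acrophase {np.degrees(acrophase):.1f} deg")
```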

Keywords: actigraph, clock genes, human, melatonin, polar day

Procedia PDF Downloads 145