Search results for: data sensitivity
25310 Importance of Ethics in Cloud Security
Authors: Pallavi Malhotra
Abstract:
This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data. Much of the data stored in the cloud by users such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions requires a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, which has contributed immensely to the growth of businesses and improved the sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns about the privacy of clients' information and the possible manipulation of the data by third parties. This paper suggests the approaches various stakeholders should take to address the ethical issues involved in cloud-computing services. Ethical education and training are key for all stakeholders involved in handling data stored or processed in the cloud.
Keywords: IT ethics, cloud computing technology, cloud privacy and security, ethical education
Procedia PDF Downloads 325
25309 The Feminism of Data Privacy and Protection in Africa
Authors: Olayinka Adeniyi, Melissa Omino
Abstract:
The field of data privacy and data protection in Africa is still evolving, with many African countries yet to enact legislation on the subject. As African governments bring their legislation up to speed in this field, the way patriarchy pervades every sector of African thought and manifests in society needs to be considered. Moreover, the laws enacted ought to be inclusive, especially towards women. This, in a nutshell, is the essence of data feminism. Data feminism is a new way of thinking about data science and data ethics informed by the ideas of intersectional feminism. Feminising data privacy and protection involves centring women in the issues of data privacy and protection, particularly in legislation, as this paper does. This line of thought is not uncommon: international and regional human rights instruments specific to women came long after the general human rights instruments, even though such provisions arguably should have been included in the original general instruments in the first instance. Since legislation on data privacy is arriving only in this century, with the rights and shortcomings of earlier instruments in view, the cue should be taken to ensure inclusive, holistic legislation for data privacy and protection from the outset. Data feminism is an area that has been only scantily researched, albeit a needful one. With violence against women spiralling in the cyber world, compounded by COVID-19 and the necessary responses of governments, and with the effect of these on women and their rights, research on the feminism of data privacy and protection in Africa becomes inevitable.
This paper seeks to answer the following questions: what is data feminism in the African context, and why is it important for data privacy and protection legislation; what laws, if any, exist on data privacy and protection in Africa, and are they women-inclusive, and if not, why; and what measures are in place for the privacy and protection of women in Africa, and how can these be made possible? The paper investigates the issue of data privacy and protection in Africa, the legal framework, and the protection or provision it affords women, if any. It further researches the importance and necessity of feminising data privacy and protection, the effect of its absence, the challenges and bottlenecks in attaining this feat, and the possibilities of accessing data privacy and protection for African women. The paper also examines the emerging practices for the data privacy and protection of women in other jurisdictions. Its methodology is a review of papers and an analysis of laws and reports. It seeks to contribute to the existing literature in the field and is explorative in its suggestions, proposing draft clauses to make any data privacy and protection legislation women-inclusive. It would be useful for policymaking, academia, and public enlightenment.
Keywords: feminism, women, law, data, Africa
Procedia PDF Downloads 205
25308 Functionalization of Nanomaterials for Bio-Sensing Applications: Current Progress and Future Prospective
Authors: Temesgen Geremew Tefery
Abstract:
Nanomaterials, due to their unique properties, have revolutionized the field of biosensing. Their functionalization, or modification with specific molecules, is crucial for enhancing their biocompatibility, selectivity, and sensitivity. This review explores recent advancements in nanomaterial functionalization for biosensing applications. We discuss various strategies, including covalent and non-covalent modifications, and their impact on biosensor performance. The use of biomolecules like antibodies, enzymes, and nucleic acids for targeted detection is highlighted. Furthermore, the integration of nanomaterials with different sensing modalities, such as electrochemical, optical, and mechanical, is examined. The future outlook for nanomaterial-based biosensing is promising, with potential applications in healthcare, environmental monitoring, and food safety. However, challenges related to biocompatibility, scalability, and cost-effectiveness need to be addressed. Continued research and development in this area will likely lead to even more sophisticated and versatile biosensing technologies.
Keywords: biosensing, nanomaterials, biotechnology, nanotechnology
Procedia PDF Downloads 27
25307 Evaluation of Practicality of On-Demand Bus Using Actual Taxi-Use Data through Exhaustive Simulations
Authors: Jun-ichi Ochiai, Itsuki Noda, Ryo Kanamori, Keiji Hirata, Hitoshi Matsubara, Hideyuki Nakashima
Abstract:
We conducted exhaustive simulations for data assimilation and evaluation of service quality under various settings in a new shared transportation system called SAVS. Computational social simulation is a key technology for designing recent social services such as SAVS as a new transportation service. One open issue in SAVS was determining the service scale through social simulation. Using our exhaustive simulation framework, OACIS, we performed data assimilation and evaluated the effects of SAVS based on actual taxi-use data from Tajimi city, Japan. Finally, we obtained the conditions required to realize the new service at a reasonable service quality.
Keywords: on-demand bus system, social simulation, data assimilation, exhaustive simulation
Procedia PDF Downloads 321
25306 A Numerical Model Simulation for an Updraft Gasifier Using High-Temperature Steam
Authors: T. M. Ismail, M. A. El-Salam
Abstract:
A mathematical model study was carried out to investigate the gasification of biomass fuels using high-temperature air (up to 1000°C) and steam as the gasifying agent. In this study, a 2D computational fluid dynamics model was developed to study the gasification process in an updraft gasifier, considering drying, pyrolysis, combustion, and gasification reactions. The gas and solid phases were resolved using an Euler-Euler multiphase approach, with exchange terms for momentum, mass, and energy. The standard k-ε turbulence model was used in the gas phase, and the particle phase was modeled using the kinetic theory of granular flow. The results show that the present model offers a promising capability for capturing the sensitivity of the gasification process to its influencing parameters.
Keywords: computational fluid dynamics, gasification, biomass fuel, fixed bed gasifier
Procedia PDF Downloads 406
25305 Optimal Rotor Design of a 150 kW-Class IPMSM through the 3D Voltage-Inductance Map Analysis Method
Authors: Eung-Seok Park, Tae-Chul Jeong, Hyun-Jong Park, Hyun-Woo Jun, Dong-Woo Kang, Ju Lee
Abstract:
This paper presents a methodology for determining the detailed design directions of a 150 kW-class IPMSM (interior permanent magnet synchronous motor) and its detailed design. The basic design of the stator and rotor was conducted first. After dividing the designed models into best cases and worst cases based on rotor shape parameters, sensitivity analysis and 3D Voltage-Inductance Map (3D EL-Map) parameters were analyzed, and the design direction for the final model was predicted. Based on this prediction, the final model was extracted with trend analysis. Lastly, the final model was validated with experiments.
Keywords: PMSM, optimal design, rotor design, voltage-inductance map
Procedia PDF Downloads 673
25304 Turbine Engine Performance Experimental Tests of Subscale UAV
Authors: Haluk Altay, Bilal Yücel, Berkcan Ulcay, Yücel Aydın
Abstract:
In this study, the design, integration, and testing of the measurement systems required for performance tests of jet engines used in small-scale unmanned aerial vehicles are described. The performance tests cover thrust and fuel consumption. For the thrust tests, measurements are made using a load cell; amplifier and filter designs were made so that the load cell measures accurately enough to meet the desired sensitivity, and it was calibrated through multiple measurements at different thrust levels. As a result of these processes, the thrust curve over the engine cycle was obtained. For the fuel consumption tests, measurements are carried out using a flow meter, and performance graphs were obtained by finding the fuel consumption at different engine RPM levels.
Keywords: jet engine, UAV, experimental test, load cell, thrust, fuel consumption
Procedia PDF Downloads 80
25303 Nanocrystalline Na0.1V2O5·nH2O Xerogel Thin Film for Gas Sensing
Authors: M. S. Al-Assiri, M. M. El-Desoky, A. A. Bahgat
Abstract:
A nanocrystalline thin film of Na0.1V2O5·nH2O xerogel obtained by sol-gel synthesis was used as a gas sensor. Its sensing properties toward different gases, such as hydrogen, petroleum vapor, and humidity, were investigated. XRD and TEM show the nanocrystal size to be 7.5 nm. SEM shows a highly porous structure with submicrometer-sized voids present throughout the sample. FTIR measurements show the different chemical groups identifying the obtained series of gels. The sample is an n-type semiconductor according to thermoelectric power and electrical conductivity measurements. The sensor response curves from 130°C to 150°C show a rapid increase in sensitivity for all types of gas injection, with low response values during the heating period and rapid, high response values during the cooling period. This result suggests that the material can act as a gas sensor during both the heating and the cooling process.
Keywords: sol-gel, thermoelectric power, XRD, TEM, gas sensing
Procedia PDF Downloads 303
25302 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health-care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition, with electrode gaps ranging from 3 to 30 μm. Metal nanoparticles were generated from a magnetron plasma gas aggregation cluster source and deposited on the interdigital electrodes, and closely spaced nanoparticle arrays with different coverage could be obtained through real-time monitoring of the conductance. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on mechanical deformations of the fabricated devices being translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range: a gauge factor as large as 100 or more was demonstrated, at least one order of magnitude higher than that of conventional metal foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%.
They therefore have the potential to be a new generation of strain sensors with performance superior to that of currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle-array sensing element. Moreover, the mechanical sensor elements can easily be scaled down, which makes them feasible for MEMS and NEMS applications.
Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
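The gauge factor cited for these nanoparticle-array sensors is defined as the relative resistance change divided by the applied strain. A minimal sketch of that relationship in Python, with illustrative values that are not data from the paper:

```python
def gauge_factor(delta_r_over_r, strain):
    """Gauge factor GF = (ΔR/R) / ε."""
    return delta_r_over_r / strain

# Hypothetical comparison: a nanoparticle-array sensor reaching GF ≈ 100
# versus a conventional metal-foil gauge near GF ≈ 2.
nano_gf = gauge_factor(0.10, 0.001)   # 10% resistance change at 0.1% strain
foil_gf = gauge_factor(0.002, 0.001)  # 0.2% resistance change at 0.1% strain
print(nano_gf, foil_gf)
```

The two orders of magnitude between the values illustrate the sensitivity gap the abstract describes.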
Procedia PDF Downloads 274
25301 Performance Evaluation of Contemporary Classifiers for Automatic Detection of Epileptic EEG
Authors: K. E. Ch. Vidyasagar, M. Moghavvemi, T. S. S. T. Prabhat
Abstract:
Epilepsy is a global problem, and with seizures eluding even the smartest of diagnoses, automatic detection using the electroencephalogram (EEG) would have a huge impact on diagnosis of the disorder. Among the multitude of methods for automatic epilepsy detection, the best method should be identified on the basis of classification accuracy. This paper reasons out, and rationalizes, the best methods for classification. Since accuracy depends on the classifier, this paper compares classifiers including quadratic discriminant analysis (QDA), classification and regression trees (CART), support vector machines (SVM), the naive Bayes classifier (NBC), linear discriminant analysis (LDA), K-nearest neighbors (KNN), and artificial neural networks (ANN). Results show that ANN is the most accurate of the classifiers, with 97.7% accuracy, 97.25% specificity, and 98.28% sensitivity; it is followed closely by SVM, with a 1% variation in results. These results should help researchers choose the best classifier for the detection of epilepsy.
Keywords: classification, seizure, KNN, SVM, LDA, ANN, epilepsy
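Of the classifiers compared above, K-nearest neighbors is the simplest to sketch. A minimal pure-Python version on hypothetical 2-D EEG feature vectors (the feature choices and values are illustrative, not from the study):

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g. normalized spectral power, line length)
train = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # non-seizure epochs
         (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]   # seizure epochs
labels = ['normal'] * 3 + ['seizure'] * 3

print(knn_predict(train, labels, (0.12, 0.18)))  # near the normal cluster
print(knn_predict(train, labels, (0.88, 0.82)))  # near the seizure cluster
```

A real evaluation would then tally sensitivity and specificity over a held-out test set, as the paper does.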
Procedia PDF Downloads 520
25300 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records. Usually, however, the proportion of complete records is rather small, which leads to most of the information being neglected, and the complete records may themselves be strongly distorted. In addition, the reason that data is missing might itself contain information, which is ignored under that approach. An interesting question is therefore whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared to using the usually small share of complete records (the baseline), and how different algorithms affect the result. The imputation of the missing data is done using unsupervised learning. Among the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all records in the model, the distortion of the first training set (the complete records) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms under several parameter combinations, and comparing the estimates to the actual data.
After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
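The survival functions mentioned above can be estimated empirically: for a given price p, S(p) is the share of observed willingness-to-pay values exceeding p. A minimal sketch with hypothetical asking prices (not data from the study):

```python
def survival_function(prices):
    """Empirical survival S(p) = share of observed willingness-to-pay values above p."""
    n = len(prices)
    return lambda p: sum(1 for x in prices if x > p) / n

# Hypothetical monthly budgets (CHF) from search subscriptions in one segment
prices = [1500, 1600, 1800, 2000, 2000, 2200, 2500, 3000]
S = survival_function(prices)
print(S(1900))  # share of searchers willing to pay more than 1900
```

Comparing such curves across data sets (baseline vs. imputed) is the comparison the abstract describes.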
Procedia PDF Downloads 285
25299 Dissimilar Cu/Al Friction Stir Welding: Sensitivity of the Tool Offset
Authors: Tran Hung Tra, Hao Dinh Duong, Masakazu Okazaki
Abstract:
Copper 1100 and aluminum 1050 plates with a thickness of 5.0 mm were butt-joined using friction stir welding, with the tool offset varied linearly along the welding path. Two welding regimes, using the same linear tool offset but in opposite directions, were applied to fabricate two Cu/Al plates. The material flow is dominated by both the tool offset and the offset history, and the intermetallic compound layer and interface morphology in each welded plate form in a different manner. As a result, the bonding strength and fracture behavior of the two welded plates differ significantly. The role of interface morphology in the fracture behavior is analyzed by the finite element method.
Keywords: Cu/Al dissimilar welding, offset history, interface morphology, intermetallic compounds, strength and fracture
Procedia PDF Downloads 76
25298 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials
Authors: Rajesh Kumar G
Abstract:
A challenging aspect of any clinical trial is carefully planning the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there are the added challenges of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, little historical data is available for pediatrics, yet such data is required to validate assumptions when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study and the available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps achieve the study objective more smoothly even in the presence of many constraints. This paper explains how a hybrid study design can be planned together with an integrated technique, SEV (Simulation, Estimation, Validation): the planned study data is simulated, the desired estimates are obtained using borrowed adult data and Bayesian methods, and these estimates are used to validate the assumptions. This method of validation can improve the accuracy of the data analysis, ensuring that results are as valid and reliable as possible, which allows informed decisions to be made well ahead of study initiation. Based on the collected data, this technique provides insight into best practices when using data from a historical study and simulated data alike.
Keywords: adaptive design, simulation, borrowing data, Bayesian model
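One common way to borrow adult data in a Bayesian estimation step, broadly in the spirit of the approach above, is a power prior that down-weights the adult counts before combining them with the pediatric data. The sketch below is an illustration only; the weighting scheme, the beta-binomial model, and all counts are assumptions, not the paper's method:

```python
def posterior_mean(adult_success, adult_n, ped_success, ped_n, w=0.5):
    """Beta-binomial posterior mean for a pediatric response rate, borrowing
    adult data via a power prior down-weighted by w (0 = ignore adult data,
    1 = pool fully). Starts from a flat Beta(1, 1) prior."""
    a = 1 + w * adult_success + ped_success
    b = 1 + w * (adult_n - adult_success) + (ped_n - ped_success)
    return a / (a + b)

# Hypothetical counts: 60/100 adult responders, 6/12 pediatric responders
print(round(posterior_mean(60, 100, 6, 12, w=0.5), 3))
```

Varying w shows how strongly the borrowed adult information pulls the pediatric estimate, which is the kind of assumption a simulation step can stress-test before the trial starts.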
Procedia PDF Downloads 76
25297 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia
Authors: Carol Anne Hargreaves
Abstract:
A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to rise in price (portfolio 1). Next, the principal component analysis technique was used to select stocks rated high on components one and two (portfolio 2). Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity, and overall accuracy; all accuracy measures were above 70%, and all portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index returns. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio, and 8.88% for the K-means cluster portfolio, while the stock market returned 0.38%.
This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system
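The logistic regression step described above (portfolio 3) can be sketched in pure Python: fit weights by gradient descent on the logistic loss, then keep the stocks whose predicted probability of a price rise exceeds 0.5. The features and labels below are hypothetical, not the study's data:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of the logistic loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def prob_up(w, b, x):
    """Predicted probability that the stock's price goes up."""
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Hypothetical technical features per stock: (momentum, earnings growth)
X = [(0.8, 0.9), (0.7, 0.8), (0.2, 0.1), (0.1, 0.3), (0.9, 0.7), (0.3, 0.2)]
y = [1, 1, 0, 0, 1, 0]  # 1 = price went up in the next period
w, b = train_logistic(X, y)
portfolio = [i for i, x in enumerate(X) if prob_up(w, b, x) > 0.5]
print(portfolio)
```

Ranking stocks by `prob_up` and taking the top three mirrors the selection step the abstract describes.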
Procedia PDF Downloads 157
25296 Analyzing Test Data Generation Techniques Using Evolutionary Algorithms
Authors: Arslan Ellahi, Syed Amjad Hussain
Abstract:
Software testing is a vital process in the software development life cycle; the quality of software is attained by passing it through the software testing phase. Automatic test data generation is a key research area of software testing, aimed at achieving test automation that can eventually decrease testing time. In this paper, we review some of the approaches presented in the literature that use evolutionary search-based algorithms, such as the genetic algorithm and particle swarm optimization (PSO), to drive the test data generation process. We also look into the quality of the generated test data, which increases or decreases the efficiency of testing. We propose test data generation techniques for model-based testing and have worked on tuning the parameters and fitness function of the PSO algorithm.
Keywords: search based, evolutionary algorithm, particle swarm optimization, genetic algorithm, test data generation
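As a minimal illustration of search-based test data generation with PSO, the sketch below hunts for an input that satisfies a hard-to-hit branch condition, using the branch distance as the fitness to minimize. The target predicate and all PSO parameters are illustrative assumptions, not from the paper:

```python
import random

def branch_distance(x):
    """Fitness for covering the branch `if x == 100:`: distance to the target."""
    return abs(x - 100)

def pso(fitness, n_particles=20, iters=200, lo=-1000, hi=1000):
    """Minimal one-dimensional particle swarm optimizer (inertia + two
    attraction terms toward personal and global bests)."""
    random.seed(0)  # deterministic for the sketch
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
            if fitness(pos[i]) < fitness(gbest):
                gbest = pos[i]
    return gbest

test_input = pso(branch_distance)
print(round(test_input))  # an input that exercises the target branch
```

In a real tool, the fitness would come from instrumented branch predicates of the program under test rather than a hand-written distance.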
Procedia PDF Downloads 190
25295 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential
Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag
Abstract:
Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites, and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among these products is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), with a spatial resolution of 0.5° x 0.5°. To overcome the lack of observations, this study evaluates the performance of solar radiation estimation from alternative databases, such as reanalysis and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the CFSR reanalysis data performed well against the observed data, with a coefficient of determination around 0.90. It is therefore concluded that these data have the potential to be used as an alternative source at locations without stations or long series of solar radiation observations, which is important for the evaluation of solar energy potential.
Keywords: climate, reanalysis, renewable energy, solar radiation
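The coefficient of determination used above to compare reanalysis estimates with station observations can be computed directly. A sketch with hypothetical daily solar radiation values (not the study's data):

```python
def r_squared(observed, estimated):
    """Coefficient of determination between observed and estimated series."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - e) ** 2 for o, e in zip(observed, estimated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical daily solar radiation (MJ/m²): station vs. reanalysis estimate
obs = [12.1, 15.4, 18.2, 20.5, 22.3, 19.8, 16.0]
est = [12.9, 14.8, 18.9, 19.7, 22.9, 19.0, 16.8]
print(round(r_squared(obs, est), 3))
```

Values near 1 indicate that the reanalysis series explains most of the observed variability, as reported for CFSR.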
Procedia PDF Downloads 209
25294 A Mathematical Optimization Model for Locating and Fortifying Capacitated Warehouses under Risk of Failure
Authors: Tareq Oshan
Abstract:
Facility location and size decisions are important to any company because they affect profitability and success. However, warehouses are exposed to various risks of failure that affect their activity. This paper presents a mixed-integer non-linear mathematical model that can be used to determine optimal warehouse locations and sizes, which warehouses to fortify, and which branches should be assigned to specific warehouses when there is a risk of warehouse failure. Every branch is assigned either to a fortified primary warehouse, or to a non-fortified primary warehouse together with a fortified backup warehouse. Both the standard method and an introduced method based on average probabilities were used to linearize the mathematical model. A Canadian case study demonstrates the developed model, followed by some sensitivity analysis.
Keywords: supply chain network design, fortified warehouse, mixed-integer mathematical model, warehouse failure risk
Procedia PDF Downloads 243
25293 Study on Co-Relation of Prostate Specific Antigen with Metastatic Bone Disease in Prostate Cancer on Skeletal Scintigraphy
Authors: Muhammad Waleed Asfandyar, Akhtar Ahmed, Syed Adib-ul-Hasan Rizvi
Abstract:
Objective: To evaluate the ability of the serum concentration of prostate specific antigen (PSA), at two cutoff points, to predict skeletal metastasis on bone scintigraphy in men with prostate cancer. Settings: This study was carried out in the department of Nuclear Medicine at the Sindh Institute of Urology and Transplantation (SIUT), Karachi, Pakistan. Materials and Method: From August 2013 to November 2013, forty-two (42) consecutive patients with prostate cancer who underwent technetium-99m methylene diphosphonate (Tc-99m MDP) whole body bone scintigraphy were prospectively analyzed. The information was collected from the scintigraphic database of the Nuclear Medicine department. Patients who did not have a serum PSA concentration available within 1 month before or after the Tc-99m MDP whole body bone scintigraphy were excluded from this study. A whole body bone scintigraphy scan (from the toes to the top of the head) was performed using a whole-body moving gamma camera technique (anterior and posterior) 2-4 hours after intravenous injection of 20 mCi of Tc-99m MDP. In addition, all patients necessarily had a pathological report available. Bony metastases were determined from the bone scan studies, and no further correlation with histopathology or other imaging modalities was performed. To preserve patient confidentiality, direct patient identifiers were not collected. In all patients, PSA values and skeletal scintigraphy were evaluated. Results: The mean age, mean PSA, and incidence of bone metastasis on bone scintigraphy were 68.35 years, 370.51 ng/mL, and 19/42 (45.23%), respectively. According to PSA level, patients presenting a negative bone scan were divided into six groups: < 10 ng/mL (10/42), 10-20 ng/mL (5/42), 20-50 ng/mL (2/42), 50-100 ng/mL (3/42), 100-500 ng/mL (3/42), and more than 500 ng/mL (0/42).
The incidence of a positive bone scan for bone metastasis in each group was 1 patient (5.26%), 0%, 3 patients (15.78%), 1 patient (5.26%), 4 patients (21.05%), and 10 patients (52.63%), respectively. Of the 42 patients, 19 (45.23%) presented a positive scintigraphic examination for bone metastasis. One patient with bone metastasis on bone scintigraphy had a PSA level of less than 10 ng/mL, and in only 1 patient (5.26%) with bone metastasis was the PSA concentration less than 20 ng/mL. Therefore, when the cutoff adopted for the PSA serum concentration was 10 ng/mL, the negative predictive value for bone metastasis was 95%, with a sensitivity of 94.74%, while the positive predictive value and specificity were 56.53% and 43.48%, respectively. When the cutoff was 20 ng/mL, the positive predictive value and specificity were 78.27% and 65.22%, respectively, whereas the negative predictive value and sensitivity stood at 100% and 95%, respectively. Conclusion: The results of our study allow us to conclude that a serum PSA concentration higher than 20 ng/mL was a more accurate cutoff than 10 ng/mL for predicting metastasis on radionuclide bone scintigraphy. In this way, unnecessary cost can be avoided, since a considerable share of prostate adenocarcinomas present serum PSA levels below 20 ng/mL, and for these cases radionuclide bone scintigraphy could be unnecessary.
Keywords: bone scan, cut off value, prostate specific antigen value, scintigraphy
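The predictive values reported above follow from the standard 2x2 contingency table at a given PSA cutoff. A sketch with hypothetical counts (chosen for illustration, not reconstructed from the study):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV and NPV from 2x2 contingency counts."""
    return {
        'sensitivity': tp / (tp + fn),  # scan-positive caught by the cutoff
        'specificity': tn / (tn + fp),  # scan-negative correctly below it
        'ppv': tp / (tp + fp),          # positive predictive value
        'npv': tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts at some PSA cutoff: tp = metastasis with PSA above the
# cutoff, fn = metastasis below it, tn = no metastasis below it, fp = no
# metastasis above it.
m = diagnostic_metrics(tp=18, fn=1, tn=15, fp=8)
print({k: round(v, 3) for k, v in m.items()})
```

Sweeping the cutoff and recomputing these four metrics is exactly how the 10 ng/mL and 20 ng/mL thresholds are compared in the study.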
Procedia PDF Downloads 319
25292 Interferometric Demodulation Scheme Using a Mode-Locked Fiber Laser
Authors: Liang Zhang, Yuanfu Lu, Yuming Dong, Guohua Jiao, Wei Chen, Jiancheng Lv
Abstract:
We demonstrate an interferometric demodulation scheme using a mode-locked fiber laser. The mode-locked fiber laser is launched into a two-beam interferometer. When the ratio between the fiber path imbalance of the interferometer and the laser cavity length is close to an integer, an interferometric fringe emerges as a result of the vernier effect, and the phase shift of the interferometer can then be demodulated. The mode-locked fiber laser provides a large bandwidth and reduces the cost of wavelength division multiplexing (WDM). The proposed interferometric demodulation scheme can be further applied in multi-point sensing systems, such as fiber-optic hydrophone arrays and seismic wave detection networks, with high sensitivity and low cost. Keywords: fiber sensing, interferometric demodulation, mode-locked fiber laser, vernier effect
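The fringe condition stated above, that the ratio of path imbalance to cavity length must be close to an integer, can be sketched numerically. All numbers below (cavity length, path imbalance, effective refractive index) are hypothetical, chosen only to illustrate the check:

```python
C_LIGHT = 299_792_458.0   # speed of light in vacuum, m/s
N_EFF = 1.468             # assumed effective refractive index of the fibre

def vernier_mismatch(path_imbalance_m, cavity_length_m):
    """Nearest integer m and fractional mismatch between the interferometer
    path imbalance and the laser cavity length; a fringe emerges when the
    mismatch approaches zero (vernier effect)."""
    ratio = path_imbalance_m / cavity_length_m
    m = round(ratio)
    return m, ratio - m

# Hypothetical: 2.0 m cavity, 6.001 m path imbalance (ratio ~ 3.0005).
m, eps = vernier_mismatch(6.001, 2.0)
residual_delay_s = abs(eps) * 2.0 * N_EFF / C_LIGHT  # leftover optical delay
print(m, f"{eps:.4f}", f"{residual_delay_s:.3e}")
```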
Procedia PDF Downloads 329
25291 Spatial Data Mining: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies. This information is often presented through geographic information systems (GIS) and stored in spatial databases (SDB). Classical data mining shows a weakness in extracting knowledge from such enormous amounts of data, due to the particularity of spatial entities, which are characterized by interdependence between them (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geo-spatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which applies algorithms designed for the direct treatment of spatial data, and the approach based on spatial data pre-processing, which applies classic clustering algorithms to pre-processed data (with spatial relationships integrated).
Since the pre-processing-based approach is quite complex in several cases, the search for approximate solutions involves the use of approximation algorithms, including the algorithms we are interested in: dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes a significant contribution to this problem, using different algorithms to automatically detect geo-spatial neighborhoods in order to implement geo-clustering by pre-processing, and applying the bees algorithm to this problem for the first time in the geo-spatial field. Keywords: mining, GIS, geo-clustering, neighborhood
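As a minimal illustration of the pre-processing route (a classic clustering algorithm applied to spatially prepared coordinates), the sketch below runs a plain k-means on 2-D points. It stands in for, and is far simpler than, the partitioning, density-based, and bees algorithms the study actually investigates:

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means on 2-D coordinates: the 'pre-processing' route applies
    a classic algorithm like this to spatially prepared data."""
    centers = points[:k]                      # deterministic initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated spatial groups (hypothetical coordinates).
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```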
Procedia PDF Downloads 375
25290 Laser Ultrasonic Diagnostics and Acoustic Emission Technique for Examination of Rock Specimens under Uniaxial Compression
Authors: Elena B. Cherepetskaya, Vladimir A. Makarov, Dmitry V. Morozov, Ivan E. Sas
Abstract:
Laboratory studies of the stress-strain behavior of rock specimens were conducted using acoustic emission and laser-ultrasonic diagnostics. The sensitivity of the techniques allowed changes in the internal structure of the specimens under uniaxial compressive load to be examined at micro- and macro scales. It was shown that microcracks appear in geologic materials when the stress level reaches about 50% of the breaking strength. The characteristic stress of main crack formation was also registered in the process of single-stage compression of rocks. On the basis of laser-ultrasonic echoscopy, 2D visualization of the internal structure of rocky soil specimens was realized, and the microcracks arising during uniaxial compression were registered. Keywords: acoustic emission, geomaterial, laser ultrasound, uniaxial compression
Procedia PDF Downloads 375
25289 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and at the same time proves its universality.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach thus offers the opportunity to predict and design LDPE processing behavior based simply on process conditions such as feed streams and inlet temperatures and pressures. Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
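The Monte Carlo microstructure step can be illustrated with a toy kinetic simulation: grow chains monomer by monomer, with small per-step probabilities of long-chain branching and termination. The probabilities below are arbitrary illustrative values, not LDPE kinetics, and the sketch ignores everything else in the paper's reaction network:

```python
import random

def grow_chain(rng, p_branch=0.002, p_term=0.001):
    """Toy kinetic Monte Carlo: add monomers one at a time; each addition may
    create a long-chain branch (p_branch) or terminate the chain (p_term)."""
    length, branches = 0, 0
    while True:
        length += 1
        r = rng.random()
        if r < p_term:
            return length, branches
        if r < p_term + p_branch:
            branches += 1

rng = random.Random(42)
chains = [grow_chain(rng) for _ in range(2000)]
mean_len = sum(l for l, _ in chains) / len(chains)
mean_lcb = sum(b for _, b in chains) / len(chains)
# With these rates, the mean length is ~1/p_term = 1000 monomers and the
# mean number of long-chain branches per chain is ~p_branch/p_term = 2.
print(f"mean length {mean_len:.0f}, mean LCB per chain {mean_lcb:.2f}")
```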
Procedia PDF Downloads 124
25288 Analysis and Prediction of Netflix Viewing History Using Netflixlatte as an Enriched Real Data Pool
Authors: Amir Mabhout, Toktam Ghafarian, Amirhossein Farzin, Zahra Makki, Sajjad Alizadeh, Amirhossein Ghavi
Abstract:
The high number of Netflix subscribers makes it attractive for data scientists to extract valuable knowledge from analyses of viewers' behaviour. This paper presents a set of statistical insights into viewers' viewing history. A deep learning model is then used to predict the users' future watching behaviour based on their previous watching history within the Netflixlatte data pool. Netflixlatte is an aggregated and anonymized data pool of 320 Netflix viewers with roughly 250,000 data points recorded between 2008 and 2022. We observe insightful correlations between the distribution of viewing time and the outbreak of the COVID-19 pandemic. The presented deep learning model predicts future movie and TV series viewing habits with an average loss of 0.175. Keywords: data analysis, deep learning, LSTM neural network, netflix
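The paper's predictor is an LSTM, which is not reproduced here. As a hedged, minimal stand-in, the sketch below makes a one-step forecast from a closed-form AR(1) fit on a hypothetical weekly viewing-time series; it only illustrates the shape of the prediction task:

```python
def ar1_forecast(series):
    """One-step-ahead forecast from a closed-form AR(1) fit: x[t+1] is
    predicted as mean + phi * (x[t] - mean), with phi the lag-1 coefficient."""
    mean = sum(series) / len(series)
    num = sum((series[t] - mean) * (series[t - 1] - mean)
              for t in range(1, len(series)))
    den = sum((x - mean) ** 2 for x in series[:-1])
    phi = num / den
    return mean + phi * (series[-1] - mean)

# Hypothetical weekly viewing minutes for one viewer.
weekly = [120, 135, 150, 160, 155, 170, 180, 175]
forecast = ar1_forecast(weekly)
print(round(forecast, 1))
```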
Procedia PDF Downloads 250
25287 Analysis of User Data Usage Trends on Cellular and Wi-Fi Networks
Authors: Jayesh M. Patel, Bharat P. Modi
Abstract:
The availability of applications on mobile devices that can invoke data services has demonstrated that the total data demand from users is far higher than previously articulated by measurements based solely on a cellular-centric view of smartphone usage. The ratio of Wi-Fi to cellular traffic varies significantly between countries. This paper presents a comparison between users' cellular data usage and Wi-Fi data usage. This insight helps operators understand the growing importance and application of yield management strategies designed to squeeze maximum returns from their investments in the networks and devices that enable the mobile data ecosystem. The transition from unlimited data plans towards tiered pricing and, in the future, towards more value-centric pricing offers significant revenue upside potential for mobile operators; but without complete insight into all aspects of smartphone customer behavior, operators are unlikely to capture the maximum return from this billion-dollar market opportunity. Keywords: cellular, Wi-Fi, mobile, smartphone
Procedia PDF Downloads 365
25286 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin, was developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or of internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter in X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10.
Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made with the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity of a single MOSkin in mV/cGy was 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy respectively, which were statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures. Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
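The translate-a-small-detector approach described above amounts to integrating a measured point-dose profile along the scan axis and normalising to a reference length. A sketch with the trapezoidal rule follows; the Gaussian-shaped profile and its amplitude are invented for illustration and are not the paper's measurements:

```python
import math

def accumulated_dose(z_mm, dose_cGy, ref_len_mm=100.0):
    """Trapezoidal integral of a point-dose profile along the scan axis,
    normalised to a reference length (the 100 mm of a pencil chamber)."""
    integral = 0.0
    for i in range(1, len(z_mm)):
        integral += 0.5 * (dose_cGy[i] + dose_cGy[i - 1]) * (z_mm[i] - z_mm[i - 1])
    return integral / ref_len_mm

# Invented Gaussian-shaped profile, peak 4 cGy, sampled every 5 mm over +/-100 mm.
z = [i * 5.0 for i in range(-20, 21)]
d = [4.0 * math.exp(-((zi / 30.0) ** 2)) for zi in z]
print(f"{accumulated_dose(z, d):.2f} cGy")  # analytic value ~ 4*30*sqrt(pi)/100
```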
Procedia PDF Downloads 311
25285 Data-Driven Infrastructure Planning for Offshore Wind Farms
Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree
Abstract:
The calculations done at the beginning of the life of a wind farm are rarely reliable, which makes it important to study the failure and repair rates of wind turbines under various conditions. This miscalculation happens because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means that the reliability function is exponential in nature. This research aims to create a more accurate model using sensor data and a data-driven approach. Data cleaning and data processing are done by comparing the power curve data of the wind turbines with SCADA data. These are then converted to time-to-repair and time-to-failure time-series data. Several different mathematical functions are fitted to the time-to-failure and time-to-repair data of the wind turbine components using maximum likelihood estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease the downtime of the turbine. Keywords: reliability, Bayesian parameter inference, maximum likelihood estimation, Weibull function, SCADA data
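The maximum likelihood fitting step can be sketched in pure Python: for a two-parameter Weibull, the shape k solves a one-dimensional profile equation (solved here by bisection), and the scale then follows in closed form. The synthetic times-to-failure below stand in for the processed SCADA data; a near-unity shape illustrates why the Weibull and exponential fits can look almost identical:

```python
import math, random

def weibull_mle(data, lo=0.05, hi=20.0, tol=1e-9):
    """Two-parameter Weibull MLE: the shape k solves the profile equation
    sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0 (monotone in k)."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def g(k):
        xk = [x ** k for x in data]
        return sum(a * b for a, b in zip(xk, logs)) / sum(xk) - 1.0 / k - mean_log

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, scale

# Synthetic times-to-failure (hours), shape 1.05: almost exponential.
rng = random.Random(7)
ttf = [rng.weibullvariate(500.0, 1.05) for _ in range(400)]
k, lam = weibull_mle(ttf)
print(f"shape ~ {k:.2f}, scale ~ {lam:.0f}")
```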
Procedia PDF Downloads 86
25284 Multimodal Ophthalmologic Evaluation Can Detect Retinal Injuries in Asymptomatic Patients With Primary Antiphospholipid Syndrome
Authors: Taurino S. R. Neto, Epitácio D. S. Neto, Flávio Signorelli, Gustavo G. M. Balbi, Alex H. Higashi, Mário Luiz R. Monteiro, Eloisa Bonfá, Danieli C. O. Andrade, Leandro C. Zacharias
Abstract:
Purpose: To perform a multimodal evaluation, including the use of optical coherence tomography angiography (OCTA), in patients with primary antiphospholipid syndrome (PAPS) without ocular complaints and to compare them with healthy individuals. Methods: A complete structural and functional ophthalmological evaluation using OCTA and microperimetry (MP) was performed in patients with PAPS followed at a tertiary rheumatology outpatient clinic. All ophthalmologic manifestations were recorded, and statistical analysis was then performed for comparative purposes; p < 0.05 was considered statistically significant. Results: 104 eyes of 52 subjects (26 patients with PAPS without ocular complaints and 26 healthy individuals) were included. Among PAPS patients, 21 were female (80.8%) and 21 (80.8%) were Caucasian. Thrombotic PAPS was the main clinical criteria manifestation (100%); 65.4% had venous and 34.6% had arterial thrombosis. Obstetric criteria were present in 34.6% of all thrombotic PAPS patients. Lupus anticoagulant was present in all patients. 19.2% of PAPS patients presented ophthalmologic findings, versus none of the healthy individuals. The most common retinal change was paracentral acute middle maculopathy (PAMM) (3 patients, 5 eyes), followed by drusen-like deposits (1 patient, 2 eyes) and pachychoroid pigment epitheliopathy (1 patient, 1 eye). Systemic hypertension and hyperlipidemia were present in 100% of the PAPS patients with PAMM, while only six patients (26.1%) with PAPS without PAMM presented these two risk factors together. In the quantitative OCTA evaluation, we found significant differences between PAPS patients and controls in both the superficial vascular complex (SVC) and the deep vascular complex (DVC) in the high-speed protocol, as well as in the SVC in the high-resolution protocol.
In the analysis of the foveal avascular zone (FAZ) parameters, the PAPS group had a larger FAZ area in the DVC using the high-speed method compared with the control group (p=0.047). In the quantitative analysis of MP, the PAPS group had lower central (p=0.041) and global (p<0.001) retinal sensitivity compared with the control group, as well as in the sector analysis, with the exception of the inferior sector. In the quantitative evaluation of fixation stability, there was a trend towards worse stability in the PAPS subgroup with PAMM in both studied methods. Conclusions: PAMM was observed in 11.5% of PAPS patients with no previous ocular complaints. Systemic hypertension concomitant with hyperlipidemia was the risk factor most commonly associated with PAMM in patients with PAPS. PAPS patients present lower vascular density and retinal sensitivity compared with the control group, even in patients without PAMM. Keywords: antiphospholipid syndrome, optical coherence tomography angiography, optical coherence tomography, retina
Procedia PDF Downloads 80
25283 Empirical Acceleration Functions and Fuzzy Information
Authors: Muhammad Shafiq
Abstract:
In accelerated life testing approaches, lifetime data are obtained under conditions considered more severe than usual conditions. Classical techniques are based on precise measurements and are used to model variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations lead to misleading pseudo-results. This study aimed to examine the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed increased fuzziness in the transformed lifetimes compared with the input data. Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data
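The reported growth of fuzziness under transformation can be illustrated with triangular fuzzy numbers: scaling an accelerated fuzzy lifetime back to use conditions by an acceleration factor scales its spread by the same factor. The linear acceleration model and all numbers below are illustrative assumptions, not the study's empirical acceleration functions:

```python
def transform_fuzzy_lifetime(tfn, af):
    """Scale a triangular fuzzy lifetime (left, mode, right) back to use
    conditions with acceleration factor af (assumed linear acceleration)."""
    l, m, r = tfn
    return (af * l, af * m, af * r)

def spread(tfn):
    """Width of the support of a triangular fuzzy number."""
    return tfn[2] - tfn[0]

obs = (95.0, 100.0, 108.0)   # hypothetical fuzzy lifetime at stress, hours
use = transform_fuzzy_lifetime(obs, af=6.0)
print(spread(obs), spread(use))  # -> 13.0 78.0
```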
Procedia PDF Downloads 298
25282 New Dynamic Constitutive Model for OFHC Copper Film
Authors: Jin Sung Kim, Hoon Huh
Abstract:
The material properties of OFHC copper film were investigated with the High-Speed Material Micro Testing Machine (HSMMTM) at high strain rates. The rate-dependent stress-strain curves from the experiment and the Johnson-Cook curve fitting showed large discrepancies as the plastic strain increased, since the constitutive model implies no rate-dependent strain hardening effect. A new constitutive model was therefore proposed that takes the rate-dependent strain hardening effect into consideration. The strain rate hardening term in the new constitutive model consists of strain rate sensitivity coefficients for the yield strength and the strain hardening. Keywords: rate-dependent material properties, dynamic constitutive model, OFHC copper film, strain rate
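For reference, the Johnson-Cook model mentioned above multiplies a strain-hardening term, a strain-rate term and a thermal-softening term. The sketch below uses the widely cited literature parameters for bulk OFHC copper (A = 90 MPa, B = 292 MPa, n = 0.31, C = 0.025, m = 1.09), which are not the film-specific values of this study, and it does not include the paper's new rate-dependent hardening term:

```python
import math

def johnson_cook(strain, strain_rate, T, A=90e6, B=292e6, n=0.31,
                 C=0.025, m=1.09, eps0=1.0, Troom=294.0, Tmelt=1356.0):
    """Johnson-Cook flow stress in Pa:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/eps0)) * (1 - T*^m).
    Defaults are literature values for bulk OFHC copper (assumption)."""
    Tstar = (T - Troom) / (Tmelt - Troom)
    rate_term = 1.0 + C * math.log(strain_rate / eps0)
    return (A + B * strain ** n) * rate_term * (1.0 - Tstar ** m)

# Quasi-static yield at room temperature reduces to A = 90 MPa.
print(johnson_cook(strain=0.0, strain_rate=1.0, T=294.0) / 1e6)  # -> 90.0
```

Note that this baseline form has no coupling between strain rate and strain hardening, which is exactly the shortcoming the abstract's new model addresses.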
Procedia PDF Downloads 486
25281 Evaluating Alternative Structures for Prefix Trees
Authors: Feras Hanandeh, Izzat Alsmadi, Muhammad M. Kwafha
Abstract:
Prefix trees, or tries, are data structures used to store data or indexes of data. The goal is to be able to store and retrieve data by executing queries in a quick and reliable manner. In principle, the structure of the trie depends on having letters in nodes at the different levels that point to the actual words in the leaves. However, the exact structure of the trie may vary based on several aspects. In this paper, we evaluated different structures for building tries. Using datasets of words of different sizes, we evaluated the different forms of trie structures. The results showed that some characteristics may significantly impact, positively or negatively, the size and the performance of the trie. We investigated different forms and structures for the trie. The results showed that using an array of pointers in each level to represent the different alphabet letters is the best choice. Keywords: data structures, indexing, tree structure, trie, information retrieval
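The winning design, an array of child pointers per node indexed by alphabet letter, can be sketched as follows (a lowercase a-z alphabet is assumed; the paper's datasets and benchmark details are not reproduced):

```python
class TrieNode:
    __slots__ = ("children", "is_word")

    def __init__(self):
        self.children = [None] * 26   # one fixed slot per lowercase letter
        self.is_word = False

class Trie:
    """Prefix tree using a fixed array of child pointers per node: lookups
    index directly by letter instead of searching a child list or dict."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            i = ord(ch) - ord("a")
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.is_word = True

    def search(self, word):
        node = self.root
        for ch in word:
            node = node.children[ord(ch) - ord("a")]
            if node is None:
                return False
        return node.is_word

t = Trie()
for w in ("tea", "ten", "tenant"):
    t.insert(w)
print(t.search("ten"), t.search("te"))  # -> True False
```

The array buys O(1) child access per letter at the cost of 26 slots per node, which matches the size-versus-performance trade-off the abstract describes.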
Procedia PDF Downloads 452