Search results for: Hybrid Fish-Bee Algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4978

1288 The Influence of Beta Shape Parameters in Project Planning

Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou

Abstract:

Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, along with the activities that comprise the critical path. Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but it nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggests that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and measure the influence of skewness, an element inherent in activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
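A minimal sketch of the simulation idea described above, using a small hypothetical activity network: each duration is drawn from a Beta distribution scaled to the activity's own range, the completion time is the longest path through the network, and the criticality index of an activity is the fraction of runs in which its total float is zero.

```python
# Monte Carlo PERT sketch with Beta-distributed activity durations (toy network).
import numpy as np

rng = np.random.default_rng(0)

# activity: (predecessors, (alpha, beta, low, high))  -- Beta shapes and duration range
activities = {
    "A": ([],         (2.0, 5.0, 3.0, 9.0)),   # right-skewed
    "B": (["A"],      (4.0, 2.0, 2.0, 8.0)),   # left-skewed
    "C": (["A"],      (3.0, 3.0, 4.0, 10.0)),  # symmetric
    "D": (["B", "C"], (2.0, 4.0, 1.0, 6.0)),
}
order = ["A", "B", "C", "D"]          # a topological order of the network

n_runs = 20_000
completion = np.empty(n_runs)
critical_hits = {a: 0 for a in order}

for i in range(n_runs):
    dur, early_finish = {}, {}
    for a in order:                                   # forward pass: earliest finish
        preds, (al, be, lo, hi) = activities[a]
        dur[a] = lo + (hi - lo) * rng.beta(al, be)    # scaled Beta duration
        early_finish[a] = dur[a] + max((early_finish[p] for p in preds), default=0.0)
    T = max(early_finish.values())
    completion[i] = T
    late_finish = {a: T for a in order}               # backward pass: latest finish
    for a in reversed(order):
        succs = [s for s in order if a in activities[s][0]]
        if succs:
            late_finish[a] = min(late_finish[s] - dur[s] for s in succs)
        if abs(late_finish[a] - early_finish[a]) < 1e-9:   # total float == 0
            critical_hits[a] += 1

print(f"mean completion time: {completion.mean():.2f}")
print(f"95th percentile     : {np.percentile(completion, 95):.2f}")
for a in order:
    print(f"criticality index of {a}: {critical_hits[a] / n_runs:.3f}")
```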

Keywords: beta distribution, PERT, Monte Carlo simulation, skewness, project completion time distribution

Procedia PDF Downloads 127
1287 Robotic Mini Gastric Bypass Surgery

Authors: Arun Prasad, Abhishek Tiwari, Rekha Jaiswal, Vivek Chaudhary

Abstract:

Background: Robotic Roux-en-Y gastric bypass has been performed for some time but is technically difficult, requiring operating in both the subdiaphragmatic and infracolic compartments of the abdomen. This can mean a dual docking of the robot or a hybrid partial laparoscopic and partial robotic surgery. The mini/one-anastomosis/omega-loop gastric bypass (MGB) has the advantage of having all dissection and anastomosis in the supracolic compartment and is therefore technically suitable for robotic surgery. Methods: We have performed 208 robotic mini gastric bypass surgeries. The robot is docked above the head of the patient in the midline. The camera port is placed supraumbilically. Two ports are placed on the left side of the patient and one port on the right side. An assistant port is placed between the camera port and the right-sided robotic port for use of the stapler. The distal stomach is stapled from the lesser curve, followed by a vertical sleeve upwards, leading to a long sleeve pouch. The jejunum is taken at 200 cm from the duodenojejunal junction and brought up to perform a side-to-side gastrojejunostomy. Results: All patients had a successful robotic procedure. The mean time taken was 85 minutes. There were no major intraoperative or postoperative complications. No patient needed conversion or re-exploration. Mean excess weight loss over a period of 2 years was about 75%. There was no mortality. The patient satisfaction score was high and was attributed to the good weight loss and the minimal dietary modifications needed after the procedure. Long-term side effects were anemia and bile reflux in a small number of patients. Conclusions: MGB/OAGB is gaining worldwide interest as a short, simple procedure that has been shown to be a very effective and safe bariatric operation. The purpose of this study was to report on the safety and efficacy of robotic surgery for this procedure. This is the first report of a totally robotic mini gastric bypass.

Keywords: MGB, mini gastric bypass, OAGB, robotic bariatric surgery

Procedia PDF Downloads 272
1286 'Low Electronic Noise' Detector Technology in Computed Tomography

Authors: A. Ikhlef

Abstract:

Image noise in computed tomography is mainly caused by statistical noise, system noise, and reconstruction algorithm filters. In recent years, low-dose x-ray imaging has become more and more desirable and is seen as a technically differentiating capability among CT manufacturers. In order to achieve this goal, several technologies and techniques are being investigated, including both hardware (integrated electronics and photon counting) and software (artificial intelligence and machine learning) based solutions. From a hardware point of view, electronic noise could indeed be a potential driver for low and ultra-low dose imaging. We demonstrated that the reduction or elimination of this term could lead to a reduction of dose without affecting image quality. In this study, we also show that this goal can be achieved using conventional electronics (a low-cost and affordable technology), designed carefully and optimized for maximum detective quantum efficiency. We conducted the tests using large imaging objects such as 30 cm water and 43 cm polyethylene phantoms. We compared the image quality with conventional imaging protocols at radiation as low as 10 mAs (<< 1 mGy). Clinical validation of these results has been performed as well.

Keywords: computed tomography, electronic noise, scintillation detector, x-ray detector

Procedia PDF Downloads 102
1285 Inverse Problem Method for Microwave Intrabody Medical Imaging

Authors: J. Chamorro-Servent, S. Tassani, M. A. Gonzalez-Ballester, L. J. Roca, J. Romeu, O. Camara

Abstract:

Electromagnetic and microwave imaging (MWI) have been used in medical imaging in recent years, the most common applications being breast cancer and stroke detection or monitoring. In those applications, the subject or zone to observe is surrounded by a number of antennas, and the Nyquist criterion can be satisfied. Additionally, the space between the antennas (transmitting and receiving the electromagnetic fields) and the zone to study can be prepared as a homogeneous scenario. However, this may differ in other cases such as intracardiac catheters, stomach monitoring devices, pelvic organ systems, liver ablation monitoring devices, or uterine fibroid ablation systems. In this work, we analyzed different MWI algorithms to find the most suitable method for dealing with an intrabody scenario. Due to the space limitations usually confronted in those applications, the device would have a cylindrical configuration with a maximum of eight transmitter and eight receiver antennas. This, together with the positioning of the supposed device inside a body tract, imposes additional constraints when choosing a reconstruction method; for instance, it precludes the use of well-known algorithms such as filtered backpropagation for diffraction tomography (due to the unusual configuration with probes enclosed by the imaging region). Finally, the difficulty of simulating a realistic non-homogeneous background inside the body (due to the incomplete knowledge of the dielectric properties of the tissues between the antennas' position and the zone to observe) also prevents the use of Born and Rytov algorithms, given their limitations with a heterogeneous background. Instead, we decided to use a time-reversal algorithm (mostly used in geophysics) because of its ability to ignore heterogeneities in the background medium and to focus its generated field onto the scatterers. Therefore, a 2D time-reversed finite difference time domain solver was developed based on the time-reversal approach for microwave breast cancer detection. Simultaneously, an in-silico testbed was developed to compare ground-truth dielectric properties with the corresponding microwave imaging reconstructions. Forward and inverse problems were computed varying: the frequency used, related to a small zone to observe (7, 7.5, and 8 GHz); a small polyp diameter (5, 7, and 10 mm); two polyp positions with respect to the closest antenna (aligned or misaligned); and the (transmitter-to-receiver) antenna combination used for the reconstruction (1-1, 8-1, 8-8, or 8-3). Results indicate that applying the existing time-reversal method for breast cancer detection here, for the different combinations of transmitters and receivers, produces false positives due to the high number of degrees of freedom and the unusual configuration (and the possible violation of the Nyquist criterion). The false positives found in the 8-1 and 8-8 combinations were highly reduced with the 1-1 and 8-3 combinations, the 8-3 configuration (three neighboring receivers at each time) being the most suitable. The 8-3 configuration creates a region-of-interest reduced problem, decreasing the ill-posedness of the inverse problem. To conclude, the proposed algorithm solves the main limitations of the described intrabody application, successfully detecting the angular position of targets inside the body tract.

Keywords: FDTD, time-reversed, medical imaging, microwave imaging

Procedia PDF Downloads 102
1284 Machine Learning Approach for Yield Prediction in Semiconductor Production

Authors: Heramb Somthankar, Anujoy Chakraborty

Abstract:

This paper presents a classification study on yield prediction in semiconductor production using machine learning approaches. A complicated semiconductor production process is generally monitored continuously by signals acquired from sensors and measurement sites. A monitoring system contains a variety of signals, all of which contain useful information, irrelevant information, and noise. When each signal is considered a feature, feature selection is used to find the most relevant signals. The open-source UCI SECOM dataset provides 1567 such samples, out of which 104 fail quality assurance. Feature extraction and selection are performed on the dataset, and the useful signals are considered for further study. Afterward, common machine learning algorithms are employed to predict whether a sample passes or fails. The most relevant algorithm is selected for prediction based on the accuracy and loss of the ML model.
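A minimal sketch of this kind of pipeline on the UCI SECOM data, assuming the files secom.data and secom_labels.data have been downloaded locally (the paths, feature counts, and classifier choices here are illustrative, not the authors' exact setup): missing sensor readings are imputed, near-constant signals dropped, the most relevant features selected, and two common classifiers compared.

```python
# Yield classification sketch on the UCI SECOM dataset (paths assumed local).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, VarianceThreshold, f_classif
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = pd.read_csv("secom.data", sep=r"\s+", header=None)            # 1567 x 590 signals
y = pd.read_csv("secom_labels.data", sep=r"\s+", header=None)[0]  # -1 = pass, 1 = fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                           stratify=y, random_state=42)

classifiers = [
    ("logistic regression", LogisticRegression(max_iter=1000, class_weight="balanced")),
    ("random forest", RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                             random_state=42)),
]
for name, clf in classifiers:
    model = make_pipeline(
        SimpleImputer(strategy="median"),      # fill missing sensor readings
        VarianceThreshold(1e-6),               # drop near-constant signals
        StandardScaler(),
        SelectKBest(f_classif, k=50),          # keep the 50 most relevant signals
        clf,
    )
    model.fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: balanced accuracy = {score:.3f}")
```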

Keywords: deep learning, feature extraction, feature selection, machine learning classification algorithms, semiconductor production monitoring, signal processing, time-series analysis

Procedia PDF Downloads 89
1283 Local Texture and Global Color Descriptors for Content Based Image Retrieval

Authors: Tajinder Kaur, Anu Bala

Abstract:

An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. A new algorithm for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines color and texture features, which capture the global and local information of the image. The local texture feature is extracted using local binary patterns (LBP), which are computed from the local differences between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used, calculated for the RGB (red, green, and blue) channels separately. In this paper, the combination of color and texture features is proposed for content-based image retrieval. The performance of the proposed method is tested on the Corel 1000 database of natural images. The results show a significant improvement in the evaluation measures as compared to LBP and CH used individually.
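A minimal sketch of such a combined descriptor, assuming NumPy and scikit-image are available (the bin counts and the L1 ranking below are illustrative choices, not the paper's exact parameters): a uniform-LBP histogram captures texture, per-channel RGB histograms capture global color, and the concatenated vector is used to rank database images.

```python
# Combined LBP + color-histogram descriptor sketch for CBIR.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    """Texture descriptor: histogram of uniform LBP codes."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def color_histogram(rgb, bins=16):
    """Global color descriptor: one normalized histogram per R, G, B channel."""
    hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def describe(rgb):
    gray = rgb.mean(axis=2).astype(np.uint8)      # simple luminance approximation
    return np.concatenate([lbp_histogram(gray), color_histogram(rgb)])

def retrieve(query_feat, db_feats, top_k=10):
    """Rank database images by L1 distance to the query descriptor."""
    dists = np.abs(db_feats - query_feat).sum(axis=1)
    return np.argsort(dists)[:top_k]

# usage: feats = np.stack([describe(img) for img in corel_images])
#        best = retrieve(describe(query_img), feats)
```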

Keywords: color, texture, feature extraction, local binary patterns, image retrieval

Procedia PDF Downloads 336
1282 Preparation of Novel Silicone/Graphene-based Nanostructured Surfaces as Fouling Release Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Zhifeng Hao, Ping Jing Mo

Abstract:

As marine fouling-release (FR) surfaces, two new superhydrophobic nanocomposite series of polydimethylsiloxane (PDMS) loaded with reduced graphene oxide (RGO) and graphene oxide/boehmite nanorod (GO-γ-AlOOH) nanofillers were created. The self-cleaning and antifouling capabilities were tuned by controlling the nanofillers' shapes and distribution in the silicone matrix. With an average diameter of 10-20 nm and a length of 200 nm, the γ-AlOOH nanorods were single-crystalline. RGO was made using a hydrothermal process, whereas the GO-γ-AlOOH nanocomposites were made using a chemical deposition method, for use as fouling-release coating materials. These nanofillers were dispersed in the silicone matrix using the solution casting method to explore the synergetic effects of graphene-based materials on the surface, mechanical, and FR characteristics. Water contact angle (WCA) measurements, scanning electron microscopy (SEM), and atomic force microscopy (AFM) were used to investigate the surface hydrophobicity and antifouling capabilities. The homogeneity of the nanocomposite dispersion improved the roughness, superhydrophobicity, and surface mechanical characteristics of the coatings. To examine the antifouling effects of the coating systems, laboratory tests were conducted for 30 days using specified bacteria. The PDMS/GO-γ-AlOOH nanorod composite demonstrated superior antibacterial efficacy against several bacterial strains compared to the PDMS/RGO nanocomposite. This is attributed to the high surface area and stabilizing effects of the GO-γ-AlOOH hybrid nanofillers. The biodegradability percentage of the PDMS/GO-γ-AlOOH nanorod composite (3 wt.%) was the lowest (1.6%), while the microbial endurability percentages against gram-positive bacteria, gram-negative bacteria, and fungi were 86.42%, 97.94%, and 85.97%, respectively. The homogeneous GO-γ-AlOOH (3 wt.%) dispersion, with a WCA of 151° and a rough surface, gave the most effective superhydrophobic antifouling nanostructured coating.

Keywords: superhydrophobic nanocomposite, fouling release, nanofillers, surface coating

Procedia PDF Downloads 214
1281 Blind Super-Resolution Reconstruction Based on PSF Estimation

Authors: Osama A. Omer, Amal Hamed

Abstract:

Successful blind image super-resolution algorithms require the exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often have the disadvantages of slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The estimation of the PSF is performed by the knife-edge method and is implemented by measuring the spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method outperforms previous work robustly and efficiently.
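A minimal sketch of the knife-edge idea, assuming a 1D intensity profile sampled perpendicular to a sharp edge (the synthetic test below uses a Gaussian blur model purely for illustration, not the paper's PSF model): the edge spread function is differentiated to obtain the line spread function, and the PSF width is estimated from its spread.

```python
# Knife-edge PSF width estimation sketch: ESF -> LSF -> spread of the LSF.
import numpy as np
from scipy.special import erf

def estimate_psf_sigma(edge_profile):
    """edge_profile: 1D intensities sampled perpendicular to an edge (in pixels)."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.abs(np.gradient(esf))                 # LSF = derivative of the ESF
    lsf /= lsf.sum() + 1e-12                       # normalize to a probability mass
    x = np.arange(len(lsf))
    mean = (x * lsf).sum()
    var = ((x - mean) ** 2 * lsf).sum()
    return np.sqrt(var)                            # standard deviation ~ PSF width

# quick check on a synthetic step edge blurred by a Gaussian PSF
x = np.linspace(-10, 10, 201)
true_sigma = 1.5
esf = 0.5 * (1 + erf(x / (true_sigma * np.sqrt(2))))
print(f"true sigma (in samples): {true_sigma / (x[1] - x[0]):.2f}")
print(f"estimated sigma        : {estimate_psf_sigma(esf):.2f}")
```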

Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm

Procedia PDF Downloads 344
1280 A Flexible Real-Time Eco-Drive Strategy for Electric Minibus

Authors: Felice De Luca, Vincenzo Galdi, Piera Stella, Vito Calderaro, Adriano Campagna, Antonio Piccolo

Abstract:

Sustainable mobility has become one of the major issues of recent years. The challenge of reducing polluting emissions as much as possible has led to the production and diffusion of vehicles with less polluting internal combustion engines and to the adoption of green energy vectors, such as vehicles powered by natural gas or LPG and, more recently, hybrid and electric ones. While the spread of electric vehicles for private use is becoming a reality, albeit rather slowly, the same is not happening for vehicles used for public transport, especially those operating in the congested areas of cities. Even though the first electric buses are increasingly being offered on the market, the problem of autonomy remains central for battery-fed vehicles with long daily routes and little time available for recharging. In fact, at present, solid-state batteries are still too large, too heavy, and unable to guarantee the required autonomy. Therefore, in order to maximize energy management on the vehicle, the optimization of driving profiles offers a faster and cheaper contribution to improving vehicle autonomy. In this paper, following the authors' previous works on electric vehicles in public transport and energy management strategies in the electric mobility area, an eco-driving strategy for an electric bus is presented and validated. In particular, the characteristics of the prototype bus are described, and a general-purpose eco-drive methodology is briefly presented. The model is first simulated in MATLAB™ and then implemented on a mobile device installed on board a prototype bus developed by the authors in a previous research project. The implemented solution provides the bus driver with suggestions on the driving style to adopt. The results of a test in a real case are shown to highlight the effectiveness of the proposed solution in terms of energy saving.

Keywords: eco-drive, electric bus, energy management, prototype

Procedia PDF Downloads 112
1279 Spatial-Temporal Awareness Approach for Extensive Re-Identification

Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush

Abstract:

Recent developments in AI and edge computing play a critical role in capturing meaningful events, such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately following the detection of a meaningful event, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues arising from the complexity of a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.

Keywords: long-short-term memory, re-identification, security critical application, spatial-temporal awareness

Procedia PDF Downloads 93
1278 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes

Authors: V. Makis, L. Jafari

Abstract:

In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
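A minimal sketch of the control-limit policy described above, under assumed (hypothetical) Poisson defect rates, shift probability, and control limit: the posterior probability that the process is out of control is updated after each inspection, and sampling stops once it exceeds the limit.

```python
# Bayesian control-limit stopping sketch for defect counts (all parameters hypothetical).
from math import exp, factorial

lam_in, lam_out = 2.0, 5.0     # expected defects per inspection unit, in/out of control
p_shift = 0.05                 # probability the process shifts between inspections
control_limit = 0.90           # stop and search for assignable causes above this

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def run_chart(defect_counts):
    post = 0.0                                     # P(out of control) before sampling
    for t, k in enumerate(defect_counts, start=1):
        prior = post + (1 - post) * p_shift        # process may shift before inspection
        like_out = poisson_pmf(k, lam_out)
        like_in = poisson_pmf(k, lam_in)
        post = prior * like_out / (prior * like_out + (1 - prior) * like_in)
        print(f"inspection {t}: defects={k}, P(out of control)={post:.3f}")
        if post > control_limit:
            print("-> stop: search for assignable causes")
            return t
    return None

run_chart([2, 1, 3, 2, 6, 7, 5])
```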

Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control

Procedia PDF Downloads 546
1277 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection

Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari

Abstract:

In this paper, an Artificial Neural Network (ANN) was developed to predict Asphaltene Precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. In this study, experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network with the Trainlm training algorithm was found to be the best network to estimate the AP. To check the validity of the proposed model, the model was used to predict the AP for the thirty percent of the data that was not used in training. The Mean Square Error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions. The ANN was found to provide more accurate estimates than the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters on the AP during gas injection. It was found that the AP is most sensitive to the reservoir temperature. Furthermore, increasing the carbon dioxide concentration in the liquid phase increases the AP.

Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs

Procedia PDF Downloads 351
1276 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm constrains each policy update to stay close to the previous policy. This allows for more extensive updates of the policy parameters while limiting the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO has the advantage of effectively utilizing historical experience data for training, enhancing sample utilization. This means that even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.
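A minimal sketch (not the paper's system) of the clipped surrogate objective that gives PPO its bounded-update property, written in PyTorch; the log-probabilities here would come from the policy scoring candidate steps of a knowledge-graph walk, and the toy tensors are purely illustrative.

```python
# PPO clipped surrogate objective sketch.
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """new_logp/old_logp: log-probabilities of the taken actions; advantages: estimates."""
    ratio = torch.exp(new_logp - old_logp)                     # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()               # maximize the surrogate

# toy check with hypothetical values
new_logp = torch.tensor([-0.9, -1.2, -0.4], requires_grad=True)
old_logp = torch.tensor([-1.0, -1.0, -0.5])
adv = torch.tensor([1.0, -0.5, 2.0])
loss = ppo_clip_loss(new_logp, old_logp, adv)
loss.backward()
print(loss.item(), new_logp.grad)
```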

Keywords: reinforcement learning, PPO, knowledge inference

Procedia PDF Downloads 212
1275 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum

Authors: Abdulrahman Sumayli, Saad M. AlShahrani

Abstract:

For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: Support Vector Regression (SVR), Gaussian Process Regression (GPR), and Bayesian Ridge Regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the Whale Optimization Algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks, and we compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
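A minimal sketch of the SVR branch of such a workflow; the whale optimization algorithm is not part of scikit-learn, so a randomized hyper-parameter search stands in for the tuning step, and the synthetic temperature/pressure/feedstock data below are entirely hypothetical.

```python
# SVR with hyper-parameter search sketch (randomized search in place of WOA; toy data).
import numpy as np
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
T = rng.uniform(150, 350, n)            # temperature, deg C
P = rng.uniform(1.2, 10.8, n)           # pressure, MPa
feed = rng.integers(0, 4, n)            # feedstock type index
X = np.column_stack([T, P, feed])
y = 0.01 * P + 0.0005 * T + 0.002 * feed + rng.normal(0, 0.002, n)  # synthetic target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = RandomizedSearchCV(
    model,
    param_distributions={
        "svr__C": loguniform(1e-1, 1e3),
        "svr__gamma": loguniform(1e-3, 1e1),
        "svr__epsilon": loguniform(1e-4, 1e-1),
    },
    n_iter=50, cv=5, scoring="neg_root_mean_squared_error", random_state=0,
)
search.fit(X, y)
print("best params:", search.best_params_)
print("cv RMSE    :", -search.best_score_)
```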

Keywords: temperature, pressure variations, machine learning, oil treatment

Procedia PDF Downloads 49
1274 Solving Dimensionality Problem and Finding Statistical Constructs on Latent Regression Models: A Novel Methodology with Real Data Application

Authors: Sergio Paez Moncaleano, Alvaro Mauricio Montenegro

Abstract:

This paper presents a novel statistical methodology for measuring and finding constructs in latent regression analysis. The approach uses the qualities of factor analysis on binary data with interpretations in Item Response Theory (IRT). In addition, based on the fundamentals of submodel theory and a convergence of many ideas from IRT, we propose an algorithm not just to solve the dimensionality problem (nowadays an open discussion) but to open a new research field that promises fairer and more realistic qualifications for examinees and a revolution in IRT and educational research. In the end, the methodology is applied to a real data set, presenting impressive results in terms of coherence, speed, and precision. Acknowledgments: This research was financed by Colciencias through the project 'Multidimensional Item Response Theory Models for Practical Application in Large Test Designed to Measure Multiple Constructs', and both authors belong to the SICS Research Group at Universidad Nacional de Colombia.

Keywords: item response theory, dimensionality, submodel theory, factorial analysis

Procedia PDF Downloads 345
1273 Aerosol-Cloud Interaction with Summer Precipitation over Major Cities in Eritrea

Authors: Samuel Abraham Berhane, Lingbing Bu

Abstract:

This paper presents the spatiotemporal variability of aerosols, clouds, and precipitation within the major cities in Eritrea and investigates the relationship between aerosols, clouds, and precipitation with respect to the presence of aerosols over the study region. In Eritrea, inadequate water supplies will have both direct and indirect adverse impacts on sustainable development in areas such as health, agriculture, energy, communication, and transport. In addition, there is a gap in the knowledge of suitable and potential areas for cloud seeding. Further, the inadequate understanding of aerosol-cloud-precipitation (ACP) interactions limits the success of weather modification aimed at improving freshwater sources, storage, and recycling. The spatiotemporal variability analysis of aerosols, clouds, and precipitation involves spatial and time series analysis based on trend and anomaly analysis. To find the relationship between aerosols and clouds, a correlation coefficient is used. The spatiotemporal analysis showed larger variations of aerosols within the last two decades, especially in Assab, indicating that aerosol optical depth (AOD) has increased over the surrounding Red Sea region. Rainfall was significantly low, but AOD was significantly high, during the 2011 monsoon season. Precipitation was high during 2007 over most parts of Eritrea. The correlation coefficient between AOD and rainfall was negative over Asmara and Nakfa. Cloud effective radius (CER) and cloud optical thickness (COT) exhibited a negative correlation with AOD over Nakfa within the June–July–August (JJA) season. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, used to find the path and origin of the air masses over the study region, showed that the majority of aerosols made their way to the study region via the westerly and southwesterly winds.

Keywords: aerosol-cloud-precipitation, aerosol optical depth, cloud effective radius, cloud optical thickness, HYSPLIT

Procedia PDF Downloads 112
1272 Representativity Based Wasserstein Active Regression

Authors: Benjamin Bobbia, Matthias Picard

Abstract:

In recent years, active learning methodologies based on the representativity of the data have seemed more promising for limiting overfitting. The presented query methodology for regression uses the Wasserstein distance to measure the representativity of the labelled dataset compared to the global distribution. In this work, crucial use is made of GroupSort neural networks, which brings a double advantage: the Wasserstein distance can be exactly expressed in terms of such neural networks, and one can provide explicit bounds for their size and depth together with rates of convergence. Furthermore, the heterogeneity of the dataset is also considered by weighting the Wasserstein distance with the approximation error at the previous step of active learning. Such an approach leads to a reduction of overfitting and high prediction performance after a few query steps. After detailing the methodology and algorithm, an empirical study is presented in order to investigate the range of our hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.

Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression

Procedia PDF Downloads 65
1271 Double Encrypted Data Communication Using Cryptography and Steganography

Authors: Adine Barett, Jermel Watson, Anteneh Girma, Kacem Thabet

Abstract:

In information security, secure communication of data across networks has always been a problem at the forefront. Transfer of information across networks is susceptible to being exploited by attackers engaging in malicious activity. In this paper, we leverage steganography and cryptography to create a layered security solution to protect the information being transmitted. The first layer of security leverages cryptographic techniques to scramble the information so that it cannot be deciphered even if the steganography-based layer is compromised. The second layer of security relies on steganography to disguise the encrypted information so that it cannot be seen. We consider three cipher methods in the cryptography layer, namely, the Playfair cipher, the Blowfish cipher, and the Hill cipher. Then, the encrypted message is embedded using the least significant bit (LSB) steganography algorithm for further concealment. Both approaches are combined efficiently to help secure information in transit over a network. This multi-layered encryption is a solution that will benefit cloud platforms, social media platforms, and networks that regularly transfer private information, such as banks and insurance companies.
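A minimal sketch of the second (steganography) layer, assuming the cipher layer has already produced an encrypted byte string; the 32-bit length header and the NumPy-based embedding below are illustrative choices, not the paper's exact scheme.

```python
# LSB steganography sketch: hide an encrypted byte string in an image's LSBs.
import numpy as np

def embed_lsb(cover, payload: bytes):
    """Write payload bits (with a 32-bit length header) into the LSBs of the cover."""
    header = len(payload).to_bytes(4, "big")
    bits = np.unpackbits(np.frombuffer(header + payload, dtype=np.uint8))
    flat = cover.flatten()                           # copy of the pixel data
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # replace least significant bits
    return flat.reshape(cover.shape)

def extract_lsb(stego):
    """Recover the payload by reading LSBs; the first 32 bits give the length."""
    bits = stego.flatten() & 1
    length = int.from_bytes(np.packbits(bits[:32]).tobytes(), "big")
    return np.packbits(bits[32:32 + 8 * length]).tobytes()

cover = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
ciphertext = b"already-encrypted message bytes"      # output of the cipher layer (assumed)
stego = embed_lsb(cover, ciphertext)
assert extract_lsb(stego) == ciphertext
max_change = int(np.abs(stego.astype(int) - cover.astype(int)).max())
print("payload recovered intact; max pixel change:", max_change)
```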

Keywords: cryptography, steganography, layered security, Cipher, encryption

Procedia PDF Downloads 60
1270 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays, researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, i.e., sampling density and data incoherence, on the reconstructed image obtained by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that, in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.

Keywords: computed tomography, computed laminography, compressive sensing, low-dose

Procedia PDF Downloads 446
1269 Environmental Assessment of Roll-to-Roll Printed Smart Label

Authors: M. Torres, A. Moulay, M. Zhuldybina, M. Rozel, N. D. Trinh, C. Bois

Abstract:

Printed electronics are a fast-growing market, as their applications cover a large range of industrial needs, their production cost is low, and the additive printing techniques consume less material than the subtractive manufacturing methods used in traditional electronics. With the growing demand for printed electronics, there are concerns about their harmful and irreversible contribution to the environment. Indeed, it is estimated that 80% of the environmental load of a product is determined by the choices made at the conception stage. Therefore, examination through a life cycle approach at the development stage of a novel product is the best way to identify potential environmental issues and make proactive decisions. Life cycle analysis (LCA) is a comprehensive scientific method to assess the environmental impacts of a product in the different stages of its life: extraction of raw materials, manufacture and distribution, use, and end-of-life. Impacts and major hotspots are identified and evaluated through a broad range of environmental impact categories of the ReCiPe (H) midpoint method. At the conception stage, LCA is a tool that provides an environmental point of view on the choice of materials and processes and weighs in on the balance between high-performance materials and eco-friendly materials. Using the life cycle approach, the current work aims to provide a cradle-to-grave life cycle assessment of a roll-to-roll hybrid printed smart label designed for the food cold chain. Furthermore, this presentation will cover the environmental impact of metallic conductive inks, a comparison with promising conductive polymers, an evaluation of energy vs. performance of industrial printing processes, a full assessment of the impact of the smart label applied on a cellulosic-based substrate during the recycling process, and the possible recovery of precious metals and rare earth elements.

Keywords: Eco-design, label, life cycle assessment, printed electronics

Procedia PDF Downloads 140
1268 Component Based Testing Using Clustering and Support Vector Machine

Authors: Iqbaldeep Kaur, Amarjeet Kaur

Abstract:

Software reusability is an important part of software development. Component-based software development, in the case of software testing, has gained a lot of practical importance in the field of software engineering, both from the academic research and the software development industry perspectives. Finding test cases for efficient reuse is one of the important problems targeted by researchers. Clustering reduces the search space and enables the reuse of test cases by grouping similar entities according to requirements, ensuring reduced time complexity as it reduces the search time for retrieving test cases. In this research paper, we propose an approach for the reusability of test cases based on unsupervised learning, using k-means clustering and a Support Vector Machine. We designed an algorithm for clustering requirement and test case documents according to their tf-idf vector space, and the output is a set of highly cohesive pattern groups.
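A minimal sketch of this flow with a tiny hypothetical document set: the documents are embedded in a tf-idf vector space, k-means narrows the search space into groups, and an SVM trained on the cluster labels routes a new requirement to the most relevant group of reusable test cases.

```python
# tf-idf + k-means + SVM sketch for grouping and routing test-case documents.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

documents = [
    "login authentication password reset test",
    "user login session timeout verification",
    "report generation pdf export test case",
    "export report csv format validation",
    "payment gateway transaction rollback test",
    "refund payment failure retry scenario",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)            # tf-idf vector space

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", kmeans.labels_)

svm = SVC(kernel="linear").fit(X, kmeans.labels_)  # learn the cluster structure

new_requirement = ["verify password reset email for login"]
cluster = svm.predict(vectorizer.transform(new_requirement))[0]
reusable = [d for d, c in zip(documents, kmeans.labels_) if c == cluster]
print("candidate test cases for reuse:", reusable)
```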

Keywords: software testing, reusability, clustering, k-means, SVM

Procedia PDF Downloads 405
1267 Constructing White-Box Implementations Based on Threshold Shares and Composite Fields

Authors: Tingting Lin, Manfred von Willich, Dafu Lou, Phil Eisen

Abstract:

A white-box implementation of a cryptographic algorithm is a software implementation intended to resist extraction of the secret key by an adversary. To date, most of the white-box techniques are used to protect block cipher implementations. However, a large proportion of white-box implementations have been proven to be vulnerable to affine equivalence attacks and other algebraic attacks, as well as differential computation analysis (DCA). In this paper, we identify a class of block ciphers for which we propose a method of constructing white-box implementations. Our method is based on threshold implementations and operations in composite fields. The resulting implementations consist of lookup tables and a few exclusive-OR operations. All intermediate values (inputs and outputs of the lookup tables) are masked. The threshold implementation makes the distribution of the masked values uniform and independent of the original inputs, and the operations in composite fields reduce the size of the lookup tables. The white-box implementations can provide resistance against algebraic attacks and DCA-like attacks.

Keywords: white-box, block cipher, composite field, threshold implementation

Procedia PDF Downloads 147
1266 Investigation of Doping of CdSe QDs in Organic Semiconductor for Solar Cell Applications

Authors: Ganesh R. Bhand, N. B. Chaure

Abstract:

Cadmium selenide (CdSe) quantum dots (QDs) were prepared by a solvothermal route. Subsequently, inorganic QD-organic semiconductor (copper phthalocyanine) nanocomposites (i.e., CuPc:CdSe nanocomposites) were produced with different concentrations of QDs in CuPc. The nanocomposite thin films were prepared by means of the spin coating technique. The optical, structural, and morphological properties of the nanocomposite films have been investigated. Transmission electron microscopy (TEM) confirmed the formation of QDs with an average size of 4 nm. The X-ray diffraction pattern exhibits the cubic crystal structure of CdSe, with reflections from the (111), (220), and (311) planes at 25.4°, 42.2°, and 49.6°, respectively. The additional peak observed at a lower angle of 6.9° in the nanocomposite thin films is associated with CuPc. Field emission scanning electron microscopy (FESEM) showed that the surface morphology varied with increasing concentration of CdSe QDs. The obtained nanocomposites show a significant improvement in thermal stability compared to pure CuPc, as indicated by thermogravimetric analysis (TGA). The Raman spectra of the composite samples give confirming evidence of the homogeneous dispersion of CdSe in the CuPc matrix and the strong interaction between them, which promotes the charge transfer property. The interaction between the composite constituents was confirmed by Fourier transform infrared spectroscopy (FTIR). The photophysical properties were studied using UV-visible spectroscopy. Enhanced optical absorption in the visible region was observed for the nanocomposite layer with increasing concentration of CdSe in CuPc. This composite may maximize the interface between the QDs and the polymer for efficient charge separation and enhanced charge transport. Such nanocomposite films have potential application in the fabrication of hybrid solar cells with improved power conversion efficiency.

Keywords: CdSe QDs, copper phthalocyanine, FTIR, optical absorption

Procedia PDF Downloads 178
1265 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to their likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks the transactions in terms of the probability of being fraudulent. In fact, this approach is often criticized, because firms do not care about the fraud probability but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model-building step. The artificial neural network proposed in this study works based on profit maximization instead of minimizing the prediction error. Moreover, some studies have shown that the backpropagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and swarm-based algorithms are more successful in this respect. In this study, we train our profit-maximization ANN using Migrating Birds Optimization (MBO), which was recently introduced to the literature.
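A minimal sketch of the profit-oriented objective idea: each transaction carries an amount, flagging a fraudulent transaction recovers that amount while every investigation incurs a fixed cost, and the network's expected net profit is maximized. The tiny synthetic data, the fixed cost, and the use of plain gradient descent (in place of MBO) are illustrative assumptions, not the authors' setup.

```python
# Profit-maximizing training objective sketch for fraud detection (synthetic data).
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 1000
X = torch.randn(n, 8)                               # transaction features
is_fraud = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * torch.randn(n) > 1.5).float()
amount = torch.rand(n) * 500 + 10                   # transaction amounts (hypothetical)
investigation_cost = 20.0                           # fixed cost per flagged transaction

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(200):
    p = net(X).squeeze(1)                           # probability of flagging
    # expected profit: recovered amounts for flagged frauds minus investigation costs
    expected_profit = (p * is_fraud * amount - p * investigation_cost).mean()
    loss = -expected_profit                         # maximize profit via gradient descent
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"expected profit per transaction: {expected_profit.item():.2f}")
```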

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 450
1264 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition

Authors: L. Hamsaveni, Navya Prakash, Suresha

Abstract:

Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and obtain an original document with complete information. In case the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool to convert the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents from the same source.

Keywords: grayscale image format, image fusing, RGB image format, SURF detection, YCbCr image format

Procedia PDF Downloads 356
1263 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at certain predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections; the number of sections equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique. The embedding is done using the cover image color channels. The stego image is obtained by reassembling the image with the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by an unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results show that the proposed technique is more secure compared with the other traditional techniques.
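A minimal sketch of the PSNR metric used in this kind of evaluation (the toy images and the LSB-only change below are illustrative, not the paper's test data).

```python
# PSNR sketch: quantify how little a stego image deviates from its cover image.
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between cover and stego images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                     # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy check: flipping only the LSB of every pixel keeps the PSNR close to 48 dB
cover = np.random.default_rng(0).integers(0, 256, (256, 256, 3), dtype=np.uint8)
stego = cover ^ 1
print(f"PSNR after LSB-only change: {psnr(cover, stego):.2f} dB")
```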

Keywords: steganography, stego, LSB, crop

Procedia PDF Downloads 250
1262 Syllogistic Reasoning with 108 Inference Rules While Case Quantities Change

Authors: Mikhail Zarechnev, Bora I. Kumova

Abstract:

A syllogism is a deductive inference scheme used to derive a conclusion from a set of premises. In categorical syllogisms, there are only two premises, and every premise and conclusion is given in the form of a quantified relationship between two objects. The different orders of the objects in the premises give a classification known as figures. We have shown that the ordered combinations of three generalized quantifiers with a given figure provide a total of 108 syllogistic moods, which can be considered as different inference rules. The classical syllogistic system makes it possible to model human thought, and reasoning with syllogistic structures has always attracted the attention of cognitive scientists. Since automated reasoning is considered part of the learning subsystem of AI agents, the syllogistic system can be applied in this approach. Another application of the syllogistic system relates to inference mechanisms in Semantic Web applications. In this paper, we propose a mathematical model and algorithm for syllogistic reasoning. A model of iterative syllogistic reasoning for continuous flows of incoming data, based on case-based reasoning, and possible applications of the proposed system are also discussed.

Keywords: categorical syllogism, case-based reasoning, cognitive architecture, inference on the semantic web, syllogistic reasoning

Procedia PDF Downloads 394
1261 Low Density Parity Check Codes

Authors: Kassoul Ilyes

Abstract:

The field of error-correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although its decoding introduced greater complexity. This work studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either by propagating over a radio channel or by using a transmission medium such as a transmission line. The purpose of the transmission system is then to carry the information from the transmitter to the receiver as reliably as possible. For a long time, these codes did not generate enough interest within the coding theory community; this neglect lasted until the introduction of Turbo codes and the iterative principle. It was then proposed to adopt Pearl's Belief Propagation (BP) algorithm for decoding these codes. Subsequently, Luby introduced irregular LDPC codes characterized by their parity-check matrix. Finally, we study simplifications of binary LDPC codes and propose a method to make the exact calculation of the a posteriori probability (APP) simpler. This method simplifies the implementation of the system.
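A minimal sketch (with a tiny hypothetical parity-check matrix, far smaller and denser than a practical LDPC code) of the role the parity-check matrix H plays in decoding: a vector c is a codeword exactly when its syndrome H·c is zero over GF(2), and a nonzero syndrome tells the iterative (belief-propagation) decoder which parity checks are violated.

```python
# Parity-check / syndrome sketch for a toy length-7 code.
import numpy as np

# a tiny (and hypothetical) parity-check matrix in systematic form [P | I]
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(word):
    return (H @ word) % 2               # all-zero syndrome -> valid codeword

codeword = np.array([1, 0, 1, 0, 1, 0, 1], dtype=np.uint8)   # satisfies H @ c = 0 (mod 2)
print("codeword syndrome :", syndrome(codeword))

received = codeword.copy()
received[2] ^= 1                        # a single bit flipped by the channel
print("received syndrome :", syndrome(received))   # nonzero rows show failed checks
```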

Keywords: LDPC, parity check matrix, 5G, BER, SNR

Procedia PDF Downloads 133
1260 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform

Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung

Abstract:

Functional near-infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration change of oxygenated hemoglobin and deoxygenated hemoglobin during a particular cognitive activity is the basis of this neuroimaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer the status of neuronal activity inside the brain indirectly. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications. The portability, low cost, and acceptable temporal resolution of fNIRS put it in a favorable position among neuroimaging modalities. In this study, an optimization model for the impulse response function has been used to estimate/predict the initial dip using fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task has been analyzed. We found an initial dip that lasts around 200-300 milliseconds and better localizes neural activity.
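A minimal sketch of the modified Beer-Lambert step that underlies this modality: optical density changes at two wavelengths are converted into oxy- and deoxy-hemoglobin concentration changes by inverting a 2x2 system. The extinction coefficients, differential pathlength factors, source-detector distance, and example optical density changes below are illustrative placeholders, not calibrated values.

```python
# Modified Beer-Lambert law sketch: optical density changes -> hemoglobin changes.
import numpy as np

# rows: wavelength 1, wavelength 2; columns: [eps_HbO, eps_HbR]  (illustrative units)
E = np.array([[1.5, 3.8],     # shorter wavelength: HbR assumed to absorb more
              [2.5, 1.8]])    # longer wavelength: HbO assumed to absorb more
distance = 3.0                # source-detector separation in cm (assumed)
dpf = np.array([6.0, 5.0])    # differential pathlength factors (assumed)

def concentration_change(delta_od):
    """delta_od: optical density changes at the two wavelengths."""
    # delta_od = E @ [dHbO, dHbR] * distance * dpf  ->  solve for the concentrations
    rhs = np.asarray(delta_od) / (distance * dpf)
    d_hbo, d_hbr = np.linalg.solve(E, rhs)
    return d_hbo, d_hbr

d_hbo, d_hbr = concentration_change([0.010, 0.004])   # example (arbitrary) OD changes
print(f"delta HbO: {d_hbo:+.5f}, delta HbR: {d_hbr:+.5f} (arbitrary units)")
```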

Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing

Procedia PDF Downloads 204
1259 A Scalable Model of Fair Socioeconomic Relations Based on Blockchain and Machine Learning Algorithms-1: On Hyperinteraction and Intuition

Authors: Merey M. Sarsengeldin, Alexandr S. Kolokhmatov, Galiya Seidaliyeva, Alexandr Ozerov, Sanim T. Imatayeva

Abstract:

This series of interdisciplinary studies is an attempt to investigate and develop a scalable model of fair socioeconomic relations on the basis of blockchain, using positive psychology techniques and machine learning algorithms for data analytics. In this particular study, we use a hyperinteraction approach and intuition to investigate their influence on the 'wisdom of crowds' via a mobile application created for the purpose of this research. Along with the public blockchain and a private Decentralized Autonomous Organization (DAO), which we elaborated on the basis of the Ethereum blockchain, a model of fair financial relations among DAO members was developed. We developed a smart contract, the so-called Fair Price Protocol, and used it to implement the model. The data obtained from the mobile application were analyzed by ML algorithms. The model was tested on football matches.

Keywords: blockchain, Naïve Bayes algorithm, hyperinteraction, intuition, wisdom of crowd, decentralized autonomous organization

Procedia PDF Downloads 152