Search results for: network optimization methods
18707 Unsupervised Neural Architecture for Saliency Detection
Authors: Natalia Efremova, Sergey Tarasenko
Abstract:
We propose a novel neural network architecture for visual saliency detection, which utilizes neurophysiologically plausible mechanisms for the extraction of salient regions. The model has been significantly inspired by recent findings from neurophysiology and aims to simulate the bottom-up processes of human selective attention. Two types of features were analyzed: color and direction of maximum variance. The mechanism we employ for processing those features is PCA, implemented by means of normalized Hebbian learning and waves of spikes. To evaluate the performance of our model, we conducted a psychological experiment. Comparison of the simulation results with the experimental results indicates good performance of our model.
Keywords: neural network models, visual saliency detection, normalized Hebbian learning, Oja's rule, psychological experiment
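The abstract gives no implementation details, but the core mechanism it names, PCA via normalized Hebbian learning (Oja's rule), can be sketched as follows; this is a minimal illustration with invented data, not the authors' spiking implementation.

```python
import numpy as np

def oja_first_component(X, eta=0.01, epochs=50, seed=0):
    """Estimate the first principal component of X (n_samples x n_features)
    with Oja's rule, a normalized Hebbian learning update."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)               # PCA assumes centered data
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                    # Hebbian activation
            w += eta * y * (x - y * w)   # Oja's normalized update
    return w / np.linalg.norm(w)

# Toy check: direction of maximum variance of correlated 2-D data
X = np.random.default_rng(1).normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
print(oja_first_component(X))
```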
Procedia PDF Downloads 348
18706 Modified Bat Algorithm for Economic Load Dispatch Problem
Authors: Daljinder Singh, J.S.Dhillon, Balraj Singh
Abstract:
According to the no free lunch theorem, a single search technique cannot perform best in all conditions. A metaheuristic optimization method can be an attractive choice for solving an optimization problem, offering advantages such as robust and reliable performance, global search capability, little information requirement, ease of implementation, parallelism, and no requirement of a differentiable and continuous objective function. In order to synergize exploration and exploitation and to further enhance the performance of the bat algorithm, this paper proposes a modified bat algorithm that adds an additional search procedure based on the bat's previous experience. The proposed algorithm is used for solving the economic load dispatch (ELD) problem. Practical constraints such as valve-point loading, along with power balance constraints and generator limits, are taken into account. To take care of the power demand constraint, the variable elimination method is exploited. The proposed algorithm is tested on various ELD problems. The results obtained show that the proposed algorithm performs better on the majority of the ELD problems considered and is on par with existing algorithms on some of the problems.
Keywords: bat algorithm, economic load dispatch, penalty method, variable elimination method
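The authors' specific modification is not reproduced in the abstract; the standard bat algorithm loop that it extends can be sketched as below, where the objective, bounds, and parameter values are placeholders and a simple quadratic stands in for the non-convex ELD cost.

```python
import numpy as np

def bat_algorithm(cost, dim, n_bats=30, iters=200, fmin=0.0, fmax=2.0,
                  loudness=0.9, pulse_rate=0.5, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))              # bat velocities
    fit = np.apply_along_axis(cost, 1, x)
    best = x[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()      # random frequency
            v[i] += (x[i] - best) * f                    # pull toward the best bat
            cand = np.clip(x[i] + v[i], lb, ub)
            if rng.random() > pulse_rate:                # local walk around the best
                cand = np.clip(best + 0.01 * rng.normal(size=dim), lb, ub)
            fc = cost(cand)
            if fc < fit[i] and rng.random() < loudness:  # accept an improvement
                x[i], fit[i] = cand, fc
                if fc < cost(best):
                    best = cand.copy()
    return best, cost(best)

# Example run on a stand-in objective
print(bat_algorithm(lambda z: float(np.sum(z**2)), dim=5))
```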
Procedia PDF Downloads 457
18705 Network and Sentiment Analysis of U.S. Congressional Tweets
Authors: Chaitanya Kanakamedala, Hansa Pradhan, Carter Gilbert
Abstract:
Social media platforms, such as Twitter, are excellent datasets for understanding human interactions and sentiments. This report explores social dynamics among US Congressional members through a network analysis applied to a dataset of tweets spanning 2008 to 2017 from the ’US Congressional Tweets Dataset’. In this report, we perform network analysis in which connections between users (edges) are established based on a similarity threshold: two users are connected if the tweets they post are similar. By utilizing the Natural Language Toolkit (NLTK) and NetworkX, we quantified tweet similarity and constructed a graph comprising various interconnected components. Each component represents a cluster of users with closely aligned content. We then perform sentiment analysis on each cluster to explore the prevalent emotions and opinions within these groups. Our findings reveal that despite the initial expectation of distinct ideological divisions typically aligning with party lines, the analysis exposed a high degree of topical convergence across tweets from different political affiliations. The analysis performed in this report not only highlights the potential of social media as a tool for political communication but also suggests a complex layer of interaction that transcends traditional partisan boundaries, reflecting a complicated landscape of politics in the digital age.
Keywords: natural language processing, sentiment analysis, centrality analysis, topic modeling
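As a rough illustration of the graph construction described above, the sketch below uses Jaccard similarity over token sets as a stand-in for the paper's unspecified similarity measure; the threshold and sample tweets are invented.

```python
import itertools
import networkx as nx

def build_similarity_graph(tweets, threshold=0.5):
    """Connect two users when their tweet texts are similar enough."""
    G = nx.Graph()
    G.add_nodes_from(range(len(tweets)))
    tokens = [set(t.lower().split()) for t in tweets]
    for i, j in itertools.combinations(range(len(tweets)), 2):
        union = tokens[i] | tokens[j]
        if union and len(tokens[i] & tokens[j]) / len(union) >= threshold:
            G.add_edge(i, j)   # Jaccard similarity above threshold
    return G

G = build_similarity_graph([
    "vote yes on the infrastructure bill",
    "the infrastructure bill deserves a yes vote",
    "congrats to the championship team",
])
# Each connected component is a cluster of closely aligned content
print([sorted(c) for c in nx.connected_components(G)])
```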
Procedia PDF Downloads 32
18704 Parameter Selection for Computationally Efficient Use of the BFVrns Fully Homomorphic Encryption Scheme
Authors: Cavidan Yakupoglu, Kurt Rohloff
Abstract:
In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security, and the Ciphertext Expansion Factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF, and security at the same time. As a result of this optimization, we get an improved parameter set for better performance at a given security level, ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables an average of ~5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.
Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE
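The regression models and the optimizer used are not specified in the abstract; the multi-objective selection step can nonetheless be illustrated with a plain Pareto filter over candidate parameter sets (the candidates and objective values below are invented for illustration).

```python
def pareto_front(candidates):
    """Keep parameter sets not dominated on (runtime, CEF, -security_bits);
    all three objectives are minimized, so security is negated."""
    front = []
    for name, runtime, cef, sec_bits in candidates:
        objs = (runtime, cef, -sec_bits)
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(objs, (r, c, -s)))
            and (r, c, -s) != objs
            for _, r, c, s in candidates
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical parameter sets: (name, runtime ms, CEF, security bits)
candidates = [
    ("setA", 12.0, 40.0, 128),
    ("setB", 9.0, 55.0, 128),
    ("setC", 15.0, 60.0, 128),   # dominated by setA, filtered out
    ("setD", 20.0, 35.0, 192),
]
print(pareto_front(candidates))
```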
Procedia PDF Downloads 152
18703 Exploration of an Environmentally Friendly Form of City Development Combined with a River: An Example of a Four-Dimensional Analysis Based on the Expansion of the City of Jinan across the Yellow River
Authors: Zhaocheng Shang
Abstract:
In order to study the topic of cities crossing rivers, a Four-Dimensional Analysis Method consisting of a timeline, X-axis, Y-axis, and Z-axis is proposed. Policies, plans, and their implications are summarized and researched along the timeline. The X-axis is the direction parallel to the river. The research area was chosen because of its important connection function. It is proposed that more of the surface water network should be built because of the ecological orientation of the research area, and the analysis of groundwater confirms that the proposal is feasible. After the blue water network is settled, the green landscape network surrounding it can be planned. The direction transversal to the river (Y-axis) should run through the transportation axis so that the urban texture can stretch in an ecological way. Therefore, it is suggested that the work of the planning bureau and the river bureau be coordinated. The Z-axis research is on the section view of the river, especially on the Yellow River's special feature of being a perched river. Based on water control safety demands, river parks could be constructed on the embankment buffer zone, and many kinds of ornamental trees could be used to build the buffer zone. A city crossing a river is a typical case in which landscaping is used to build a symbiotic relationship between urban landscape architecture and the environment. The local environment should be respected in the process of city expansion. The planning order of "Benefit - Flood Control Safety" should be replaced by "Flood Control Safety - Landscape Architecture - People - Benefit".
Keywords: blue-green landscape network, city crossing river, four-dimensional analysis method, planning order
Procedia PDF Downloads 157
18702 Review: Wavelet New Tool for Path Loss Prediction
Authors: Danladi Ali, Abdullahi Mukaila
Abstract:
In this work, GSM signal strength (power) was monitored in an indoor environment. Samples of the GSM signal strength were measured on mobile equipment (ME). A one-dimensional multilevel wavelet is used to predict the fading phenomenon of the measured GSM signal, and neural network clustering is used to determine the average power received in the study area. The wavelet prediction revealed that the GSM signal is attenuated due to the fast fading phenomenon, which fades about 7 times faster than the radio wavelength, while the neural network clustering determined that -75 dBm appeared most frequently, followed by -85 dBm. The work revealed that a significant part of the measured signal is dominated by weak signal and that the signal follows a Rayleigh rather than a Gaussian distribution. This confirmed the wavelet prediction.
Keywords: decomposition, clustering, propagation, model, wavelet, signal strength, spectral efficiency
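A minimal sketch of the one-dimensional multilevel wavelet decomposition described above, using PyWavelets; the mother wavelet ('db4'), the decomposition level, and the synthetic signal are assumptions, not the authors' choices.

```python
import numpy as np
import pywt

# Synthetic stand-in for a measured received-power trace in dBm
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
rssi = -80 + 5 * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 2, t.size)

# Multilevel decomposition: [cA4, cD4, cD3, cD2, cD1]
coeffs = pywt.wavedec(rssi, 'db4', level=4)

# Keep only the coarse approximation to estimate the slow-fading trend;
# the residual then captures the fast-fading component
denoised = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(denoised, 'db4')
fast_fading = rssi - trend[: rssi.size]
print(trend[:5], fast_fading.std())
```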
Procedia PDF Downloads 446
18701 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2, 4, 6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method
Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.
Abstract:
In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis wherein, based on the optimization of process parameters, a substantial reduction in H/C and O/C of 77% was achieved with respect to the original ratios of 1.67 and 0.8, respectively. The bio-coal product was found to have a higher heating value of 29899 kJ/kg with a surface area of 17 m²/g and a pore volume of 0.002 cc/g. The functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, which were further used as an adsorbent material for the removal of the herbicide 2,4,6-Trichlorophenol (2,4,6-TCP) from the aqueous stream. The point of zero charge for the bio-coal was found to be at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. The parametric optimization of the adsorption process was studied based on the Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial concentration of 250 mg/l and a particle size of 0.12 mm at pH 2.0 and 1 g/L of bio-coal loading. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the biochar, and decreasing ΔG values with the rise in temperature indicated high favourability at low temperatures. The equilibrium modeling results showed that both isotherms (Langmuir and Freundlich) accurately predicted the equilibrium data, which may be attributed to the different affinities of the functional groups of the bio-coal for 2,4,6-TCP removal. Based on the kinetics data modeling, the possible mechanisms for 2,4,6-TCP adsorption are physisorption (pore diffusion, π-π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups).
Keywords: engineered bio-coal, 2,4,6-trichlorophenol, Box-Behnken design, biosorption
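Since both the Langmuir and Freundlich isotherms were reported to fit the equilibrium data, their standard forms and a least-squares fit are sketched below; the equilibrium points are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qm, KL):
    # q = qm * KL * C / (1 + KL * C): monolayer adsorption
    return qm * KL * C / (1.0 + KL * C)

def freundlich(C, KF, n):
    # q = KF * C^(1/n): heterogeneous-surface adsorption
    return KF * C ** (1.0 / n)

# Illustrative equilibrium data (C in mg/L, q in mg/g)
C_eq = np.array([5.0, 20.0, 50.0, 100.0, 180.0])
q_eq = np.array([25.0, 62.0, 90.0, 108.0, 116.0])

p_l, _ = curve_fit(langmuir, C_eq, q_eq, p0=[120, 0.05])
p_f, _ = curve_fit(freundlich, C_eq, q_eq, p0=[10, 2])
print("Langmuir   qm=%.1f  KL=%.3f" % tuple(p_l))
print("Freundlich KF=%.1f  n=%.2f" % tuple(p_f))
```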
Procedia PDF Downloads 116
18700 Land Cover Remote Sensing Classification Advanced Neural Networks Supervised Learning
Authors: Eiman Kattan
Abstract:
This study aims to evaluate the impact of classifying labelled remote sensing images with a convolutional neural network (CNN) architecture, i.e., AlexNet, on different land cover scenarios based on two remotely sensed datasets, from points of view such as computational time and performance. A set of experiments was conducted to assess the effectiveness of the selected convolutional neural network using two implementation approaches, namely fully trained and fine-tuned. For validation purposes, two remote sensing datasets, AID and RSSCN7, which are publicly available and have different land cover features, were used in the experiments. These datasets have a wide diversity of input data, number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in training, validation, and testing. As a result, the fully trained approach achieved modest results for the two datasets, AID and RSSCN7, of 73.346% and 71.857% within 24 min 1 s and 8 min 3 s, respectively. However, a dramatic improvement of the classification performance using the fine-tuning approach was recorded, at 92.5% and 91% within 24 min 44 s and 8 min 41 s, respectively. This conclusion opens up opportunities for better classification performance in various applications such as agriculture and crop remote sensing.
Keywords: convolutional neural network, remote sensing, land cover, land use
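The fine-tuning approach, which the study found far superior to full training, can be sketched in PyTorch as follows; the study itself used NVIDIA DIGITS, and freezing the convolutional features is one common fine-tuning recipe rather than a detail confirmed by the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tuning: start from ImageNet weights, freeze the pretrained
# convolutional features, and retrain only the classifier head.
num_classes = 7  # RSSCN7 has 7 scene classes; AID would use 30
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                          # keep the feature extractor fixed
model.classifier[6] = nn.Linear(4096, num_classes)   # replace the output layer

# Only the classifier parameters are handed to the optimizer
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
```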
Procedia PDF Downloads 369
18699 A Survey in Techniques for Imbalanced Intrusion Detection System Datasets
Authors: Najmeh Abedzadeh, Matthew Jacobs
Abstract:
An intrusion detection system (IDS) is a software application that monitors malicious activities and generates alerts if any are detected. However, most network activities in IDS datasets are normal, and the relatively small number of attacks makes the available data imbalanced. Consequently, cyber-attacks can hide inside a large number of normal activities, and machine learning algorithms have difficulty learning and classifying the data correctly. In this paper, a comprehensive literature review is conducted on different types of algorithms both for implementing the IDS and for correcting the imbalanced IDS dataset. The most prominent approaches are machine learning (ML), deep learning (DL), the synthetic minority over-sampling technique (SMOTE), and reinforcement learning (RL). Most of the research uses the CSE-CIC-IDS2017, CSE-CIC-IDS2018, and NSL-KDD datasets for evaluating the algorithms.
Keywords: IDS, imbalanced datasets, sampling algorithms, big data
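Of the surveyed correction methods, SMOTE is the most compact to illustrate; a minimal sketch with imbalanced-learn follows, using synthetic data with a 1% minority class in place of a real IDS dataset.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# A 1%-attack class mimics the imbalance typical of IDS datasets
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.99, 0.01], random_state=42)
print("before:", Counter(y))

# SMOTE synthesizes new minority samples by interpolating between
# a minority point and its nearest minority-class neighbours
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```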
Procedia PDF Downloads 323
18698 Exploring De-Fi through 3 Case Studies: Transparency, Social Impact, and Regulation
Authors: Dhaksha Vivekanandan
Abstract:
DeFi is a peer-to-peer financial network that avoids reliance on financial intermediaries. DeFi operates outside of government control; hence it is important for us to understand its impacts. This study employs a literature review to understand DeFi and its emergence, as well as its implications for transparency, social impact, and regulation. Further, three case studies are analysed within the context of these categories. DeFi's provision of increased transparency poses environmental and storage costs and can lead to user privacy being endangered. DeFi allows for the provision of entrepreneurial incentives and protection against monetary censorship and capital controls. Despite DeFi's transparency issues and volatility costs, it has huge potential to reduce poverty; however, regulation surrounding DeFi still requires further tightening by governments.
Keywords: DeFi, transparency, regulation, social impact
Procedia PDF Downloads 81
18697 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria
Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter
Abstract:
Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, and hospitalization. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, which are computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modeling for the parametric approach, while a Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network were used for the non-parametric models. The performance of each method was evaluated using the Mean Absolute Error (MAE), R-squared (R2), and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis
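One of the parametric baselines, ARIMA, together with two of the error criteria used in the paper, can be sketched as follows; the order (1, 1, 1) and the synthetic series are placeholders, not the study's data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
cases = 200 + np.cumsum(rng.normal(0, 5, 120))   # stand-in monthly case counts
train, test = cases[:100], cases[100:]

fit = ARIMA(train, order=(1, 1, 1)).fit()        # order chosen for illustration
pred = fit.forecast(steps=len(test))

mae = mean_absolute_error(test, pred)
rmse = np.sqrt(mean_squared_error(test, pred))
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}")
```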
Procedia PDF Downloads 74
18696 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process for finding a suitable region in medical images such as MRI, PET, and CT. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB for 2D MRI images. In this program, there are two different interfaces covering data pre-processing and image clustering or segmentation. In the data pre-processing section, there are a median filter, an average filter, an unsharp mask filter, a Wiener filter, and a custom filter (a filter designed by the user in MATLAB). As for the image clustering, there are seven different image segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (Biogeography-Based Optimization). To find the suitable cluster number in 2D MRI, we have designed a histogram-based cluster estimation method and then fed the estimated numbers to the image segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image using this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
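The histogram-based cluster estimation feeding an automatic k-means segmentation, as described for the GUI, might be approximated as below; the GUI itself is MATLAB, and the peak-detection parameters here are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def segment_mri(img):
    """Estimate k from intensity-histogram peaks, then k-means segment."""
    hist, _ = np.histogram(img.ravel(), bins=64)
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.05)
    k = max(2, len(peaks))                     # at least two tissue classes
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
    return labels.reshape(img.shape), k

# Fake slice with three intensity modes standing in for tissue classes
rng = np.random.default_rng(0)
img = rng.choice([30.0, 120.0, 220.0], size=(64, 64)) + rng.normal(0, 5, (64, 64))
seg, k = segment_mri(img)
print("estimated clusters:", k)
```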
Procedia PDF Downloads 374
18695 Solving Nonconvex Economic Load Dispatch Problem Using Particle Swarm Optimization with Time Varying Acceleration Coefficients
Authors: Alireza Alizadeh, Hossein Ghadimi, Oveis Abedinia, Noradin Ghadimi
Abstract:
A Particle Swarm Optimization with Time Varying Acceleration Coefficients (PSO-TVAC) is proposed in this paper to solve the optimal economic load dispatch (ELD) problem. The proposed methodology easily takes care of solving non-convex economic load dispatch problems along with different constraints such as transmission losses, dynamic operation constraints, and prohibited operating zones. The proposed approach has been implemented on the 3-machine 6-bus, IEEE 5-machine 14-bus, and IEEE 6-machine 30-bus systems, and on a 13-thermal-unit power system. The proposed technique is compared with a hybrid approach for solving the ELD problem with the valve-point effect. The comparison results prove the capability of the proposed method, giving significant improvements in the generation cost for the economic load dispatch problem.
Keywords: PSO-TVAC, economic load dispatch, non-convex cost function, prohibited operating zone, transmission losses
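The distinguishing feature of PSO-TVAC is that the cognitive coefficient is ramped down while the social coefficient is ramped up over the run; the sketch below uses the commonly cited boundary values (2.5 to 0.5 and back), which are assumptions rather than the authors' settings, and a toy objective whose sinusoidal term mimics valve-point ripple.

```python
import numpy as np

def pso_tvac(cost, dim, n=30, iters=300, lb=-10, ub=10,
             c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5, w_max=0.9, w_min=0.4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pcost = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pcost.argmin()].copy()
    for t in range(iters):
        frac = t / iters
        c1 = (c1f - c1i) * frac + c1i        # cognitive: 2.5 -> 0.5
        c2 = (c2f - c2i) * frac + c2i        # social:    0.5 -> 2.5
        w = w_max - (w_max - w_min) * frac   # inertia weight ramp-down
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Non-convex toy objective standing in for a valve-point ELD cost
print(pso_tvac(lambda z: float(np.sum(z**2) + 10 * np.abs(np.sin(z)).sum()), dim=5))
```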
Procedia PDF Downloads 385
18694 Artificial Intelligence Based Meme Generation Technology for Engaging Audience in Social Media
Authors: Andrew Kurochkin, Kostiantyn Bokhan
Abstract:
In this study, a new meme dataset of ~650K meme instances was created, a meme generation technology based on the state-of-the-art GPT-2 deep learning model was researched, and a comparative analysis of machine-generated and human-created memes was conducted. We justified that Amazon Mechanical Turk workers can be used for approximately estimating users' behavior in a social network, more precisely, to measure engagement. It was shown that generated memes cause the same engagement as human memes that historically produced low engagement in the social network. Thus, generated memes are less engaging than random memes created by humans.
Keywords: content generation, computational social science, meme generation, Reddit, social networks, social media interaction
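The fine-tuned checkpoint and dataset are not public in this abstract, but sampling caption candidates from a GPT-2 model with Hugging Face Transformers follows a standard pattern; the prompt and sampling settings below are illustrative.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # a meme-fine-tuned checkpoint would go here

prompt = "When the code finally compiles:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, do_sample=True,
                         top_k=50, top_p=0.95, num_return_sequences=3,
                         pad_token_id=tokenizer.eos_token_id)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```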
Procedia PDF Downloads 137
18693 Effect of Methylammonium Lead Iodide Layer Thickness on Performance of Perovskite Solar Cell
Authors: Chadel Meriem, Bensmaine Souhila, Chadel Asma, Bouchikhi Chaima
Abstract:
Methylammonium lead iodide (CH₃NH₃PbI₃) has been used as an absorber layer in solar cells since 2009. The efficiencies of these technologies increased from 3.8% in 2009 to 29.15% in 2019, so methylammonium lead iodide is promising for the development of high-performance photovoltaic applications. Due to the high cost of experimental work on solar cells, researchers have turned to other methods such as numerical simulation. In this work, we evaluate and simulate the performance of a CH₃NH₃PbI₃ lead-based perovskite solar cell when the amount of material in the absorber layer is reduced. We show that reducing the thickness of the absorber layer influences the performance of the solar cell. For this study, the one-dimensional simulation program SCAPS-1D is used to investigate and analyze the performance of the perovskite solar cell. After optimization, maximum conversion efficiency was achieved with a 300 nm absorber layer.
Keywords: methylammonium lead iodide, perovskite solar cell, J-V characteristic, efficiency
Procedia PDF Downloads 65
18692 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the advancement toward richer content and higher resolutions, its significant volume poses serious obstacles to receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first step toward a complete deep video compression model that jointly upgrades all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, and repeating a single image instead of the duplicate images by recognizing and detecting minute changes using a generative adversarial network (GAN), with the sequence recorded using long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using k-nearest neighbours (KNN) over the frame, clustered with k-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than a 50% deviation in size when compared with the original video.
Keywords: video compression, k-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
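The per-channel SVD step described above, truncating each R, G, B matrix to its leading latent factors, can be sketched as follows; the rank is a tunable placeholder.

```python
import numpy as np

def svd_compress_frame(frame, rank=20):
    """Low-rank approximation of each color channel via truncated SVD.
    frame: H x W x 3 float array; rank controls the compression level."""
    out = np.empty_like(frame)
    for ch in range(3):                                  # R, G, B channels
        U, s, Vt = np.linalg.svd(frame[..., ch], full_matrices=False)
        out[..., ch] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

frame = np.random.default_rng(0).random((120, 160, 3))   # stand-in frame
approx = svd_compress_frame(frame, rank=20)
kept = 20 * (120 + 160 + 1)                              # numbers stored per channel
print("per-channel storage ratio: %.2f" % (kept / (120 * 160)))
```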
Procedia PDF Downloads 187
18691 High Resolution Image Generation Algorithm for Archaeology Drawings
Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu
Abstract:
To address the low accuracy and the susceptibility to cultural relic diseases in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawing generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the backbone high-resolution image generation network, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to generate high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.
Keywords: archaeology drawings, digital heritage, image generation, deep learning
Procedia PDF Downloads 56
18690 Statistical Optimization of Distribution Coefficient for Reactive Extraction of Lactic Acid Using Tri-n-octyl Amine in Oleyl Alcohol and n-Hexane
Authors: Avinash Thakur, Parmjit S. Panesar, Manohar Singh
Abstract:
The distribution coefficient, KD, for the reactive extraction of lactic acid from aqueous solutions using 10-30% (v/v) tri-n-octyl amine (extractant) dissolved in n-hexane (inert diluent) with 20% (v/v) oleyl alcohol (modifier) was optimized by using response surface methodology (RSM). A three-level Box-Behnken design was employed for the experimental design, the analysis of the results, and to depict the combined interactive effect of seven independent variables, viz. lactic acid concentration (cl), pH, TOA concentration in the organic phase (ψ), treat ratio (φ), temperature (T), agitation speed (ω), and batch agitation time (τ), on the distribution coefficient of lactic acid. The regression analysis indicated that the quadratic model is significant (R2 and adjusted R2 are 98.72% and 98.69%, respectively). Numerical optimization resulted in a maximum lactic acid distribution coefficient (KD) of 3.16 at optimized values of the test variables cl, pH, ψ, φ, T, ω, and τ of 0.15 [M], 3.0, 22.75% (v/v), 1.0 (v/v), 26°C, 145 rpm, and 23 min, respectively. Good agreement between the predicted and experimentally obtained values of the distribution coefficient was exhibited at the optimized conditions.
Keywords: distribution coefficient, tri-n-octylamine, lactic acid, response surface methodology
Procedia PDF Downloads 456
18689 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis
Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan
Abstract:
Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of big data technology into the tourism industry will allow companies to learn where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of online reviews from tourists, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction of travel reviews. We constructed a web review database using a crawler and web scraping techniques for experimental validation to evaluate the effectiveness of our methodology. The text of the sentences was first classified with the VADER and RoBERTa models to get the polarity of the reviews. In this paper, we studied feature extraction methods, such as count vectorization and TF-IDF vectorization, and implemented a Convolutional Neural Network (CNN) classifier for the sentiment analysis to decide whether the tourist's attitude towards a destination is positive, negative, or simply neutral based on the review text posted online. The results demonstrated that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for the positive and negative sentiment analysis.
Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis
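The count and TF-IDF vectorization steps named above can be sketched with scikit-learn; the review snippets are invented placeholders for scraped reviews.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

reviews = [                      # invented stand-ins for scraped reviews
    "loved the beach and the friendly staff",
    "hotel room was dirty and the staff rude",
    "friendly staff, great food, would visit again",
]

# Count vectorization: raw term frequencies
counts = CountVectorizer().fit_transform(reviews)

# TF-IDF: term frequency reweighted by inverse document frequency,
# down-weighting words that appear in most reviews (e.g. "staff")
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(reviews)
print(X.shape, tfidf.get_feature_names_out()[:5])
```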
Procedia PDF Downloads 85
18688 Prediction on Housing Price Based on Deep Learning
Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang
Abstract:
In order to study the impact of various factors on the housing price, we propose to build different prediction models based on deep learning, using existing real estate data, in order to more accurately predict the housing price or its future trend. Considering that the factors which affect the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate. We built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented to make a comparison between these three models. The second category is time series models. Based on deep learning, we proposed an LSTM-1 model purely with regard to time series, then implemented and compared the LSTM model and the Auto-Regressive and Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: crawling and analyzing the data, predicting the housing price, and comparing the results. Ultimately, the best model program was produced, which is of great significance to the evaluation and prediction of housing prices in the real estate industry.
Keywords: deep learning, convolutional neural network, LSTM, housing prediction
Procedia PDF Downloads 305
18687 Statistical Modeling for Permeabilization of a Novel Yeast Isolate for β-Galactosidase Activity Using Organic Solvents
Authors: Shweta Kumari, Parmjit S. Panesar, Manab B. Bera
Abstract:
The hydrolysis of lactose using β-galactosidase is one of the most promising biotechnological applications, with a wide range of potential applications in the food processing industries. However, due to the intracellular location of the yeast enzyme and expensive extraction methods, the industrial application of enzymatic hydrolysis processes is being hampered. The use of the permeabilization technique can help to overcome the problems associated with enzyme extraction and purification from yeast cells and to develop an economically viable process for the utilization of whole-cell biocatalysts in the food industries. In the present investigation, standardization of the permeabilization process of a novel yeast isolate was carried out using a statistical modeling approach known as Response Surface Methodology (RSM) to achieve maximal β-galactosidase activity. The optimum operating conditions of the permeabilization process for optimal β-galactosidase activity obtained by RSM were a 1:1 ratio of toluene (25%, v/v) and ethanol (50%, v/v), a temperature of 25.0 °C, and a treatment time of 12 min, which displayed an enzyme activity of 1.71 IU/mg DW.
Keywords: β-galactosidase, optimization, permeabilization, response surface methodology, yeast
Procedia PDF Downloads 253
18686 Cash Flow Optimization on Synthetic CDOs
Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet
Abstract:
Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge in optimizing the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated solely on a single bought or sold tranche but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the correlation of defaults and the maturities. The Gaussian model used is not realistic in crisis situations. Besides, the present system does not handle buying or selling a portion of a tranche, only the whole tranche. The work nonetheless provides the investor with relevant elements on what to buy and sell, and when.
Keywords: synthetic collateralized debt obligation (CDO), credit default swap (CDS), cash flow optimization, probability of default, default correlation, strategies, simulation, simplex
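The tranche-combination step can be posed as a linear program; a minimal SciPy sketch follows, in which the expected cash flows, costs, and budget are invented placeholders. Note this is the LP relaxation: matching the paper's whole-tranche restriction would require binary variables.

```python
import numpy as np
from scipy.optimize import linprog

# Expected cash flow per unit of each bought/sold tranche position
# (invented numbers standing in for the simulation output)
expected_cf = np.array([4.0, 2.5, 1.2, -0.8])
cost = np.array([10.0, 6.0, 3.0, -5.0])   # negative cost = premium received
budget = 12.0

res = linprog(
    c=-expected_cf,                  # maximize cash flow = minimize its negative
    A_ub=cost.reshape(1, -1), b_ub=[budget],
    bounds=[(0, 1)] * 4,             # at most one unit of each tranche
    method="highs",                  # modern replacement for the classic simplex
)
print(res.x, -res.fun)
```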
Procedia PDF Downloads 271
18685 Ultra-Rapid and Efficient Immunomagnetic Separation of Listeria Monocytogenes from Complex Samples in High-Gradient Magnetic Field Using Disposable Magnetic Microfluidic Device
Authors: L. Malic, X. Zhang, D. Brassard, L. Clime, J. Daoud, C. Luebbert, V. Barrere, A. Boutin, S. Bidawid, N. Corneau, J. Farber, T. Veres
Abstract:
The incidence of infections caused by foodborne pathogens such as Listeria monocytogenes (L. monocytogenes) poses a great potential threat to public health and safety. These issues are further exacerbated by legal repercussions due to "zero tolerance" food safety standards adopted in developed countries. Unfortunately, a large number of related disease outbreaks are caused by pathogens present in extremely low counts, currently undetectable by available techniques. The development of highly sensitive and rapid detection of foodborne pathogens is therefore crucial and requires robust and efficient pre-analytical sample preparation. Immunomagnetic separation is a popular approach to sample preparation. Microfluidic chips combined with external magnets have emerged as viable high-throughput methods. However, external magnets alone are not suitable for the capture of nanoparticles, as very strong magnetic fields are required. Devices that incorporate an externally applied magnetic field and microstructures of a soft magnetic material have thus been used for local field amplification. Unfortunately, the very complex and costly fabrication processes used for the integration of soft magnetic materials in the reported proof-of-concept devices would prohibit their use as disposable tools for food and water safety or diagnostic applications. We present a sample preparation magnetic microfluidic device implemented in low-cost thermoplastic polymers using fabrication techniques suitable for mass production. The developed magnetic capture chip (M-chip) was employed for the rapid capture and release of L. monocytogenes conjugated to immunomagnetic nanoparticles (IMNs) in buffer and beef filtrate. The M-chip relies on a dense array of Nickel-coated high-aspect-ratio pillars for capture with controlled magnetic field distribution, and a microfluidic channel network for sample delivery, waste, wash, and recovery. The developed Nickel-coating process and passivation allow the generation of switchable local perturbations within the uniform magnetic field generated with a pair of permanent magnets placed at the opposite edges of the chip. This leads to a strong and reversible trapping force, wherein high local magnetic field gradients allow efficient capture of IMNs conjugated to L. monocytogenes flowing through the microfluidic chamber. The experimental optimization of the M-chip was performed using commercially available magnetic microparticles and fabricated silica-coated iron-oxide nanoparticles. The fabricated nanoparticles were optimized to achieve the desired magnetic moment, and their surface functionalization was tailored to allow efficient capture antibody immobilization. The integration, validation, and further optimization of the capture and release protocol are demonstrated using both dead and live L. monocytogenes through fluorescence microscopy and the plate-culture method. The capture efficiency of the chip was found to vary as a function of the Listeria-to-nanoparticle concentration ratio. A maximum capture efficiency of 30% was obtained, and the 24-hour plate-culture method allowed the detection of an initial sample concentration of only 16 cfu/ml. The device was also very efficient in concentrating the sample from a 10 ml initial volume. Specifically, a 280% concentration efficiency was achieved in only 17 minutes, demonstrating the suitability of the system for food safety applications.
In addition, the flexible design and low-cost fabrication process will allow rapid sample preparation for applications beyond food and water safety, including point-of-care diagnosis.
Keywords: array of pillars, bacteria isolation, immunomagnetic sample preparation, polymer microfluidic device
Procedia PDF Downloads 279
18684 Using Data from Foursquare Web Service to Represent the Commercial Activity of a City
Authors: Taras Agryzkov, Almudena Nolasco-Cirugeda, Jose L. Oliver, Leticia Serrano-Estrada, Leandro Tortosa, Jose F. Vicent
Abstract:
This paper aims to represent the commercial activity of a city, taking the social network Foursquare as the data source. The city of Murcia is selected as a case study, and the location-based social network Foursquare is the main source of information. After carrying out a reorganisation of the user-generated data extracted from Foursquare, it is possible to graphically display on a map the various city spaces and venues, especially those related to commercial, food, and entertainment sector businesses. The obtained visualisation provides information about activity patterns in the city of Murcia according to people's interests and preferences and, moreover, interesting facts about certain characteristics of the town itself.
Keywords: social networks, spatial analysis, data visualization, geocomputation, Foursquare
Procedia PDF Downloads 425
18683 Power Grid Line Ampacity Forecasting Based on a Long-Short-Term Memory Neural Network
Authors: Xiang-Yao Zheng, Jen-Cheng Wang, Joe-Air Jiang
Abstract:
Improving line ampacity while using existing power grids is an important issue that electricity dispatchers are now facing. Using the information provided by the dynamic thermal rating (DTR) of transmission lines, an overhead power grid can operate safely. However, dispatchers usually lack real-time DTR information. Thus, this study proposes a method based on a long short-term memory (LSTM) neural network model. The LSTM-based method predicts the DTR of lines using the weather data provided by the Central Weather Bureau (CWB) of Taiwan. The possible thermal bottlenecks at different locations along the line and the margin of line ampacity can be determined in real time by the proposed LSTM-based prediction method. A case study targeting the 345 kV power grid of TaiPower in Taiwan is utilized to examine the performance of the proposed method. The simulation results show that the proposed method is useful for providing information for future smart grid applications.
Keywords: electricity dispatch, line ampacity prediction, dynamic thermal rating, long-short-term memory neural network, smart grid
Procedia PDF Downloads 280
18682 A Process Model for Online Trip Reservation System
Authors: Sh. Wafa, M. Alanoud, S. Liyakathunisa
Abstract:
Online booking for a trip or a hotel has become an indispensable traveling tool today, and people tend to select air travel as their first choice when going on a long trip. People's shopping behavior has been greatly changed by the advent of social networks. Traditional ticket booking methods are considered outdated with the advancement in tools and technology. A web-based booking framework is a must-have for any tour or travel business that is investing a lot of energy in answering telephone calls, sending messages, or considering hiring more staff. In this paper, we propose a process model for online trip reservation for our designed web application. Our proposed system will be highly beneficial and will help reduce time and cost for customers.
Keywords: trip, hotel, reservation, process model, time, cost, web app
Procedia PDF Downloads 213
18681 Performance Evaluation and DEAR-Based Optimization on Machining Leather Specimens to Reduce Carbonization
Authors: Khaja Moiduddin, Tamer Khalaf, Muthuramalingam Thangaraj
Abstract:
Due to its variety of benefits over traditional cutting techniques, the usage of laser cutting technology has risen substantially in recent years. Hot wire machining can cut leather into the required shape by controlling the thermal energy generated in the wire. In the present study, an attempt has been made to investigate the effects of the performance measures of the hot wire machining process on cutting leather specimens. The carbonization and material removal rates were considered as quality indicators. Burning the leather during machining can produce carbon particles, reducing product quality. Minimizing the effect of carbon particles is crucial for assuring operator and environmental safety, health, and product quality. Hot wire machining can efficiently cut the specimens by controlling the current through the wire. Taguchi-DEAR-based optimization was also performed in the process, which resulted in the required carbonization and material removal rates. Using the DEAR approach, the optimal parameters of the present study were found with 3.7% prediction error accuracy.
Keywords: carbonization, leather, MRR, current
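The DEAR step, which folds the two responses into a single ranking index, is commonly computed as below; this is one common formulation with invented trial data, not the authors' measurements.

```python
import numpy as np

# Invented trial results: material removal rate (maximize) and
# carbonization (minimize) for four parameter combinations
mrr = np.array([1.8, 2.4, 2.1, 2.9])       # mg/min
carb = np.array([0.35, 0.50, 0.28, 0.42])  # carbonized fraction

# DEAR: each response is weighted by its share of the response total;
# smaller-the-better responses enter through their reciprocals
w_mrr = mrr / mrr.sum()
w_carb = (1 / carb) / (1 / carb).sum()
mrpi = w_mrr * mrr + w_carb * (1 / carb)   # multi-response performance index
print("best trial:", int(np.argmax(mrpi)) + 1, mrpi.round(3))
```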
Procedia PDF Downloads 62
18680 Analysis of a Single Motor Finger Mechanism for a Prosthetic Hand
Authors: Shaukat Ali, Kanber Sedef, Mustafa Yilmaz
Abstract:
This work analyzes a finger mechanism for a prosthetic hand that will help in improving the living standards of people who have lost their hands for a variety of reasons. The finger mechanism has a single degree of freedom and hence has advantages such as compact size, reduced mass, and low energy consumption. The proposed finger mechanism is a six-bar linkage actuated by a single motor. The kinematic, static, and dynamic analyses have been done by using the conventional methods of mechanism analysis. The kinematic results present the motion of the proposed finger mechanism and the location of the fingertip. The static and dynamic analyses provide useful information about the gripping force at the fingertip for various configurations and about the selection of the motor that will move the finger over its range of configurations. This single-motor finger mechanism is simple and resembles the human finger's motion, making it suitable for grasping operations. This study can be used in the optimization of the geometrical parameters of the proposed mechanism to obtain the desired configurations with minimum torque and enhanced gripping.
Keywords: dynamics, finger mechanism, grasping, kinematics
Procedia PDF Downloads 357
18679 Automated Detection of Related Software Changes by Probabilistic Neural Networks Model
Authors: Yuan Huang, Xiangping Chen, Xiaonan Luo
Abstract:
Software today is continuously updated. The change between two versions usually involves multiple program entities (e.g., packages, classes, methods, attributes) with multiple purposes (e.g., changed requirements, bug fixing). It is hard for developers to understand which changes are made for the same purpose. Whether two changes are related is not decided by the relationship between the two entities in the program. In this paper, we summarize 4 coupling rules (16 instances) and 4 state-combination types at the class, method, and attribute levels for software changes. Related Change Vectors (RCVs) are defined based on the coupling rules and state-combination types and applied to classify related software changes by using a Probabilistic Neural Network during a software update.
Keywords: PNN, related change, state-combination, logical coupling, software entity
Procedia PDF Downloads 436
18678 Formulation Development, Process Optimization and Comparative study of Poorly Compressible Drugs Ibuprofen, Acetaminophen Using Direct Compression and Top Spray Granulation Technique
Authors: Abhishek Pandey
Abstract:
Ibuprofen and Acetaminophen are widely used as prescription and non-prescription medicines. Ibuprofen is mainly used in the treatment of mild to moderate pain related to headache, migraine, and postoperative conditions, and in the management of spondylitis, osteoarthritis, and rheumatoid arthritis. Acetaminophen is used as an analgesic and antipyretic drug. Ibuprofen has a high tendency of sticking to the punches of the tablet punching machine, while Acetaminophen is not ordinarily compressible to a tablet formulation because Acetaminophen crystals are very hard and brittle in nature and fracture easily when compressed, producing capping and lamination tablet defects; therefore, the wet granulation method is usually used to make them compressible. The aim of this study was to prepare Ibuprofen and Acetaminophen tablets by direct compression and the top spray granulation technique. In this investigation, tablets were prepared using directly compressible grade excipients: dibasic calcium phosphate, lactose anhydrous (DCL21), and microcrystalline cellulose (Avicel PH 101). In order to obtain the best or optimized formulation, nine different formulations were generated, among which batches F7, F8, and F9 showed good results within the acceptable limits. Formulation F7 was selected as the optimized product on the basis of the dissolution study. Further, directly compressible granules of both drugs were prepared by using the top spray granulation technique in a fluidized bed processor and compressed. In order to obtain the best product, process optimization was carried out by performing four trials in which various parameters like inlet air temperature, spray rate, peristaltic pump rpm, % LOD, properties of granules, blending time, and hardness were optimized. Batch T3 was chosen as the optimized batch on the basis of physical and chemical evaluation. Finally, the formulations prepared by both techniques were compared.
Keywords: direct compression, top spray granulation, process optimization, blending time
Procedia PDF Downloads 361