Search results for: efficient frontier analysis
9647 The Documentary Analysis of Meta-Analysis Research in Violence of Media
Authors: Proud Arunrangsiwed
Abstract:
The “future direction” sections in the findings of meta-analyses can provide valuable guidance for conducting future studies. This study, “The Documentary Analysis of Meta-Analysis Research in Violence of Media”, synthesizes the “future directions” of 10 meta-analysis papers. The purpose of this research is to identify an appropriate research design and methodology for future research on the topic of media violence. Future research should use longitudinal and experimental designs, and should carefully consider age effects, time-spent effects, enjoyment effects, and the ordinary lifestyle of each media consumer.
Keywords: Aggressive, future direction, meta-analysis, media, violence.
9646 Analysis of Energy Consumption Based on Household Appliances in Jodhpur, India
Authors: A. Kumar, V. Devadas
Abstract:
Energy is a basic element of any country’s economic development. India is one of the most populated countries and is dependent on fossil-fuel- and nuclear-based energy generation. The energy sector faces huge challenges and depends on energy imports from neighboring countries to bridge the gap between demand and supply. India suffers major setbacks in efficient energy generation, distribution, and consumption, and therefore consumes more energy to produce the same amount of Gross Domestic Product (GDP) than developed countries. The technologies and techniques in use, and their availability and affordability, vary across sectors according to economic status. In this paper, an attempt is made to quantify domestic electrical energy consumption in Jodhpur, India. Survey research methods were employed, and households were selected using a stratified sampling technique. Pre-tested survey schedules were used for the grassroots-level investigation. The collected data were analyzed using statistical techniques. Thereafter, a multiple regression model was developed to express total electricity consumption in the domestic sector as a function of independent variables including electrical appliances, age of the building, household size, education, etc. The study identified the governing variables in household-level energy consumption and their relationship with the efficiency of household electrical and energy appliances. The analysis concludes with recommendations for narrowing the gap between peak electrical demand and supply in the domestic sector.
Keywords: Appliance, consumption, electricity, households.
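As an illustration of the modelling step described above, the sketch below fits a multiple regression of household electricity use on a few candidate predictors; the variable choices, synthetic data, and coefficients are assumptions for demonstration, not the study's survey data.

```python
import numpy as np

# Hypothetical predictors of monthly household electricity use (kWh):
# appliance connected load (kW), household size, building age (years).
rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.uniform(1.0, 8.0, n),    # appliance load
    rng.integers(1, 8, n),       # household size
    rng.uniform(0, 50, n),       # building age
])
y = 40 + 55 * X[:, 0] + 18 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 20, n)

# Ordinary least squares via the design matrix with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", beta.round(2), " R^2 =", round(r2, 3))
```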
9645 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Efficient matrix-vector multiplication with diagonal sparse matrices is pivotal in a multitude of computational domains, ranging from scientific simulations to machine learning workloads. When encoded in the conventional Diagonal (DIA) format, these matrices often induce computational overheads due to extensive zero-padding and non-linear memory accesses, which can hamper the computational throughput, and elevate the usage of precious compute and memory resources beyond necessity. The 'DIA-Adaptive' approach, a methodological enhancement introduced in this paper, confronts these challenges head-on by leveraging the advanced parallel instruction sets embedded within Machine Learning Units (MLUs). This research presents a thorough analysis of the DIA-Adaptive scheme's efficacy in optimizing Sparse Matrix-Vector Multiplication (SpMV) operations. The scope of the evaluation extends to a variety of hardware architectures, examining the repercussions of distinct thread allocation strategies and cluster configurations across multiple storage formats. A dedicated computational kernel, intrinsic to the DIA-Adaptive approach, has been meticulously developed to synchronize with the nuanced performance characteristics of MLUs. Empirical results, derived from rigorous experimentation, reveal that the DIA-Adaptive methodology not only diminishes the performance bottlenecks associated with the DIA format but also exhibits pronounced enhancements in execution speed and resource utilization. The analysis delineates a marked improvement in parallelism, showcasing the DIA-Adaptive scheme's ability to adeptly manage the interplay between storage formats, hardware capabilities, and algorithmic design. The findings suggest that this approach could set a precedent for accelerating SpMV tasks, thereby contributing significantly to the broader domain of high-performance computing and data-intensive applications.
Keywords: Adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication.
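For orientation, a minimal sketch of a plain DIA-format SpMV kernel follows; it makes visible the zero-padding and shifted memory accesses that the DIA-Adaptive scheme is designed to mitigate (the layout convention here is one common choice, not necessarily the paper's).

```python
import numpy as np

def dia_spmv(offsets, data, x):
    """y = A @ x for a matrix stored in DIA format.

    offsets[k] is the diagonal offset (0 = main, >0 = super, <0 = sub);
    data[k, :] holds that diagonal, padded with zeros to the matrix size
    and aligned so that data[k, i] multiplies x[i + offsets[k]].
    """
    n = x.size
    y = np.zeros(n)
    for k, off in enumerate(offsets):
        lo, hi = max(0, -off), min(n, n - off)   # rows where the diagonal exists
        y[lo:hi] += data[k, lo:hi] * x[lo + off : hi + off]
    return y

# A small tridiagonal example: entries outside each diagonal stay as padding.
n = 5
offsets = [-1, 0, 1]
data = np.zeros((3, n))
data[0, :] = -1.0   # sub-diagonal (only rows 1..n-1 are read)
data[1, :] = 2.0    # main diagonal
data[2, :] = -1.0   # super-diagonal (only rows 0..n-2 are read)
x = np.arange(1.0, n + 1)
print(dia_spmv(offsets, data, x))
```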
9644 Proposal for an Ultra Low Voltage NAND Gate to Withstand Power Analysis Attacks
Authors: Omid Mirmotahari, Yngvar Berg
Abstract:
In this paper, we promote the Ultra Low Voltage (ULV) NAND gate as a partial or complete replacement for the encryption block of a design, in order to withstand power analysis attacks.
Keywords: Differential Power Analysis (DPA), Low Voltage (LV), Ultra Low Voltage (ULV), Floating-Gate (FG), supply current analysis.
9643 Application of a New Hybrid Optimization Algorithm on Cluster Analysis
Authors: T. Niknam, M. Nayeripour, B.Bahmani Firouzi
Abstract:
Clustering techniques have received attention in many areas, including engineering, medicine, biology, and data mining. The purpose of clustering is to group together data points that are close to one another. The K-means algorithm is one of the most widely used clustering techniques. However, K-means has two shortcomings: it depends on the initial state and converges to local optima, and global solutions of large problems cannot be found with a reasonable amount of computational effort. Many studies in clustering have tried to overcome the local optima problem. This paper presents an efficient hybrid evolutionary optimization algorithm based on combining Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), called PSO-ACO, for optimally clustering N objects into K clusters. The new PSO-ACO algorithm is tested on several data sets, and its performance is compared with those of ACO, PSO, and K-means clustering. The simulation results show that the proposed evolutionary optimization algorithm is robust and suitable for handling data clustering.
Keywords: Ant Colony Optimization (ACO), Data clustering, Hybrid evolutionary optimization algorithm, K-means clustering, Particle Swarm Optimization (PSO).
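A minimal sketch of evolutionary centroid optimization in the spirit of this work follows; it uses plain PSO over candidate centroid sets with the within-cluster sum of squared errors as the fitness, omitting the ACO component of the authors' PSO-ACO hybrid, and all data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: N points around 3 true centres (K = 3 clusters).
centres = np.array([[0, 0], [5, 5], [0, 5]], dtype=float)
X = np.vstack([c + rng.normal(0, 0.6, (50, 2)) for c in centres])
K, n_particles, n_iters = 3, 20, 100

def sse(flat):
    """Within-cluster sum of squared errors for one candidate centroid set."""
    C = flat.reshape(K, 2)
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

# PSO over flattened centroid coordinates.
dim = K * 2
pos = rng.uniform(X.min(), X.max(), (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_f = np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration constants
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best SSE:", round(pbest_f.min(), 2))
print("centroids:\n", gbest.reshape(K, 2).round(2))
```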
9642 A New Method for Computing the Inverse Ideal in a Coordinate Ring
Authors: Abdolali Basiri
Abstract:
In this paper, we present an efficient method for inverting an ideal in the ideal class group of a Cab curve by extending the method presented in [3]. More precisely, we introduce a useful generator for the inverse ideal as a K[X]-module.
Keywords: Cab Curves, Ideal Class Group
9641 Improvement on a CNC Gantry Machine Structure Design for Higher Machining Speed Capability
Authors: Ahmed A. D. Sarhan, S. R. Besharaty, Javad Akbaria, M. Hamdi
Abstract:
The capability of CNC gantry milling machines in manufacturing long components has expanded the use of such machines. On the other hand, the rigidity of the machine gantry can decrease under severe loads or vibration during operation, and machining quality depends on the machine’s dynamic behavior throughout the operating process. For this reason, although these machines are widely used, they are often not efficient enough for fine work: they are usually employed for rough machining and may not produce an adequate surface finish. In this paper, a CNC gantry milling machine with the potential to produce a good surface finish has been designed and analyzed. The lowest natural frequency of this machine is 202 Hz, corresponding to 12000 rpm, at all motion amplitudes, with a full range of suitable frequency responses. Meanwhile, the maximum deformation under dead loads for the gantry machine is 0.565 μm, indicating that this machine tool is capable of producing higher product quality.
Keywords: Finite element, frequency response, gantry design, gantry machine, static and dynamic analysis.
9640 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M Sodium Edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student’s t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human calculation errors were minimized when procedures were automated in quality control laboratories, and the assay procedure was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
Keywords: Data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets.
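The two automated calculations, EDTA standardization and the zinc assay, reduce to 1:1 complexometric stoichiometry; the sketch below shows them with standard molecular weights, but the sample masses, volumes, and label claim are hypothetical, and the validated protocol's exact formulae may differ.

```python
# Illustrative constants (standard values; the validated protocol may differ).
MW_ZNO = 81.38          # g/mol, chelometric-standard zinc oxide
MW_ZNSO4_7H2O = 287.56  # g/mol, zinc sulfate heptahydrate

def edta_molarity(zno_mg: float, edta_ml: float) -> float:
    """Standardize EDTA against ZnO: 1:1 complexation, so
    M = mmol ZnO / mL EDTA = (mg / MW) / mL."""
    return (zno_mg / MW_ZNO) / edta_ml

def znso4_content_mg(edta_ml: float, edta_molar: float) -> float:
    """Assay: mg of ZnSO4.7H2O chelated by the titrant (1:1 Zn:EDTA)."""
    return edta_ml * edta_molar * MW_ZNSO4_7H2O

# Example run with hypothetical bench figures.
M = edta_molarity(zno_mg=200.0, edta_ml=24.6)        # close to 0.1 M
found = znso4_content_mg(edta_ml=10.2, edta_molar=M)
label_claim = 300.0                                  # mg per sample, hypothetical
pct = 100 * found / label_claim
print(f"EDTA molarity: {M:.4f} M; assay: {pct:.1f}% of label claim")
```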
9639 Object Identification with Color, Texture, and Object-Correlation in CBIR System
Authors: Awais Adnan, Muhammad Nawaz, Sajid Anwar, Tamleek Ali, Muhammad Ali
Abstract:
The need for efficient information retrieval has increased more than ever in recent years because of the frequent use of digital information in our lives. Much work has been done on textual information, but progress on multimedia information has been limited. For text-based information, technologies such as data mining and data marts, which grew out of the basic database concepts of the 1960s, are now in use. In image search, and especially in image identification, computerized systems are still at a very early stage. One main reason for this is the wide-spread roots of image search, where many areas such as artificial intelligence, statistics, image processing, and pattern recognition play a role. Human psychology, perception, and cultural diversity also have their share in the design of a good and efficient image recognition and retrieval system. A new object-based search technique is presented in this paper, in which objects in the image are identified on the basis of their geometrical shapes and other features such as color and texture, with object correlation augmenting the search process. To keep the focus on object identification, simple images were selected for this work to reduce the role of segmentation in the overall process; however, the same technique can also be applied to other images.
Keywords: Object correlation, geometrical shape, color, texture, features, contents.
9638 A Survey of the Applications of Sentiment Analysis
Authors: Pingping Lin, Xudong Luo
Abstract:
Natural language often conveys the emotions of speakers. Therefore, sentiment analysis of what people say is prevalent in the field of natural language processing and has great application value for many practical problems. To help people understand this application value, in this paper we survey various applications of sentiment analysis, including those in online and offline business as well as other types of applications. In particular, we give some application examples from intelligent customer service systems in China. Besides, we compare the applications of sentiment analysis on Twitter, Weibo, Taobao, and Facebook. Finally, we point out the challenges faced in the applications of sentiment analysis and the work that is worth studying in the future.
Keywords: Natural language processing, sentiment analysis, application, online comments.
9637 Validity Domains of Beams Behavioural Models: Efficiency and Reduction with Artificial Neural Networks
Authors: Keny Ordaz-Hernandez, Xavier Fischer, Fouad Bennis
Abstract:
In a particular case of behavioural model reduction by ANNs, a shortening of the validity domain has been found. In mechanics, as in other fields, the notion of a validity domain allows the engineer to choose a valid model for a particular analysis or simulation. In the study of the mechanical behaviour of a cantilever beam (using linear and non-linear models), Multi-Layer Perceptron (MLP) networks trained by backpropagation (BP) have been applied as a model reduction technique. The reduced model is constructed to be more efficient than the non-reduced model: within a less extended domain, the ANN reduced model correctly estimates the non-linear response at a lower computational cost. It has been found that the neural network model is not able to approximate the linear behaviour, while it approximates the non-linear behaviour very well. The details of the case are provided with an example of modelling the cantilever beam behaviour.
Keywords: Artificial neural network, validity domain, cantilever beam, non-linear behaviour, model reduction.
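A minimal sketch of the reduction idea, training a small MLP by backpropagation to reproduce a non-linear cantilever response, is given below; the beam constants, the assumed cubic term, and the network size are illustrative assumptions, not the paper's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic "full model": tip deflection of a cantilever under end load P,
# the linear term P*L^3/(3*E*I) plus an assumed cubic stiffening term.
L_b, E, I = 1.0, 2.1e11, 8.3e-6           # m, Pa, m^4 (illustrative values)
P = rng.uniform(0, 5000, 500)             # end load, N
delta = P * L_b**3 / (3 * E * I) - 2e-15 * P**3   # non-linear response (m)

# Reduced model: a small MLP trained by backpropagation on scaled data.
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mlp.fit(P.reshape(-1, 1) / 5000.0, delta * 1e3)   # inputs in [0,1], outputs in mm

P_test = np.array([[1000.0], [4000.0]])
pred_mm = mlp.predict(P_test / 5000.0)
true_mm = (P_test[:, 0] * L_b**3 / (3 * E * I) - 2e-15 * P_test[:, 0]**3) * 1e3
print("predicted (mm):", pred_mm.round(3), " reference (mm):", true_mm.round(3))
```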
9636 Fine-Grained Sentiment Analysis: Recent Progress
Authors: Jie Liu, Xudong Luo, Pingping Lin, Yifan Fan
Abstract:
Facebook, Twitter, Weibo, other social media, and significant e-commerce sites generate a massive amount of online text, which can be used to analyse people’s opinions or sentiments for better decision-making. Sentiment analysis, especially fine-grained sentiment analysis, is therefore a very active research topic. In this paper, we survey various methods for fine-grained sentiment analysis, including traditional sentiment lexicon-based methods, machine learning-based methods, and deep learning-based methods in aspect/target/attribute-based sentiment analysis tasks. Besides, we discuss their advantages and problems worthy of careful study in the future.
Keywords: Sentiment analysis, fine-grained, machine learning, deep learning.
9635 Investigation of Compressive Strength of Slag-Based Geopolymer Concrete Incorporated with Rice Husk Ash Using 12M Alkaline Activator
Authors: Festus A. Olutoge, Ahmed A. Akintunde, Anuoluwapo S. Kolade, Aaron A. Chadee, Jovanca Smith
Abstract:
The compressive strength of geopolymer concrete (GPC) was investigated. The GPC was incorporated with rice husk ash (RHA) and ground granulated blast furnace slag (GGBFS), and may have potential in the construction industry to replace Portland limestone cement (PLC) concrete. The sustainable construction binders used were GGBFS and RHA, and a solution of sodium hydroxide (NaOH) and sodium silicate gel (Na2SiO3) was used as the 12-molar alkaline activator. Five GPC mixes comprising fine aggregates, coarse aggregates, GGBFS and RHA, and the alkaline solution in the ratio 2:2.5:1:0.5, respectively, were prepared to achieve grade 40 concrete, with PLC substituted by GGBFS and RHA in the ratios of 0:100, 25:75, 50:50, 75:25, and 100:0. A control mix comprising 100% water and 100% PLC as the cementitious material was also prepared. The GPC mixes were thermally cured at 60-80 °C in an oven for approximately 24 h. After curing for 7 and 28 days, the compressive strength tests of the hardened GPC samples showed that GPC-Mix #3, comprising 50% GGBFS and 50% RHA, was the most efficient geopolymer mix. This mix had compressive strengths of 35.71 MPa and 47.26 MPa, 19.87% and 8.69% higher than those of the PLC concrete samples (29.79 MPa and 43.48 MPa) after 7 and 28 days, respectively. Therefore, GPC containing GGBFS incorporated with RHA is an efficient means of decreasing the use of PLC in conventional concrete production and of reducing the high amounts of CO2 emitted into the atmosphere by the construction industry.
Keywords: Alkaline solution, cementitious material, geopolymer concrete, ground granulated blast furnace slag, rice husk ash.
9634 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide
Authors: Sanaz Seraj, Shohre Rouhani
Abstract:
Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells. It has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or a biomolecule; owing to this phenomenon, fluorophore-quencher pairs have been used for the selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photo-induced electron transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimide have found applications in several areas, especially fluorescence sensors. Herein, the fluorescence quenching by graphene oxide of a naphthalimide dye, used as a fluorescent probe model, has been studied by UV-VIS and fluorescence spectroscopy. This study showed that graphene is an efficient quencher for fluorescent dyes and can therefore be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.
Keywords: Fluorescence, graphene oxide, naphthalimide dye, quenching.
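The abstract does not state how the quenching data were analysed; the standard treatment of such intensity data is the Stern-Volmer relation, reproduced here for orientation only:

```latex
\frac{F_0}{F} = 1 + K_{\mathrm{SV}}\,[Q]
```

where F0 and F are the fluorescence intensities in the absence and presence of the quencher (here, graphene oxide at concentration [Q]), and K_SV is the Stern-Volmer quenching constant, obtained as the slope of F0/F plotted against [Q].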
9633 Safe and Efficient Deep Reinforcement Learning Control Model: A Hydroponics Case Study
Authors: Almutasim Billa A. Alanazi, Hal S. Tharp
Abstract:
Safe performance and efficient energy consumption are essential factors in designing a control system. This paper presents a reinforcement learning (RL) model that can be applied to control applications to improve safety and reduce energy consumption. As hardware constraints and environmental disturbances are imprecise and unpredictable, conventional control methods may not always be effective in optimizing control designs. However, RL has demonstrated its value in several artificial intelligence (AI) applications, especially in the field of control systems. The proposed model intelligently monitors a system’s success by observing the rewards from the environment, with positive rewards counting as a success when the controlled reference is within the desired operating zone. Thus, the model can determine whether the system is safe to continue operating based on designer/user specifications, which can be adjusted as needed. Additionally, the controller tracks energy consumption and improves energy efficiency by enabling an idle mode when the controlled reference is within the desired operating zone, thus reducing the system’s energy consumption during control operation. Water temperature control for a hydroponic system is taken as a case study for the RL model, with the variance of the disturbances adjusted to show the model’s robustness and efficiency. On average, the model showed improvements of up to 15% in safety and of 35%-40% in energy efficiency compared to a traditional RL model.
Keywords: Control system, hydroponics, machine learning, reinforcement learning.
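A minimal sketch of the reward/idle/safety logic described above is given below; the zone bounds, reward magnitudes, failure tolerance, and all names are illustrative assumptions, not the authors' implementation. In a full RL loop the reward would feed a learning agent; only the monitoring logic is shown here.

```python
class SafetyEnergyMonitor:
    """Illustrative reward/idle/safety logic for a water-temperature loop.
    Zone bounds, reward magnitudes, and failure tolerance are assumptions."""

    def __init__(self, zone=(20.0, 24.0), fail_limit=10):
        self.lo, self.hi = zone
        self.fail_limit = fail_limit
        self.fails = 0

    def step(self, temp_c: float):
        in_zone = self.lo <= temp_c <= self.hi
        reward = 1.0 if in_zone else -1.0    # positive reward counts as success
        idle = in_zone                       # actuator idles inside the zone
        self.fails = 0 if in_zone else self.fails + 1
        safe = self.fails < self.fail_limit  # designer-adjustable criterion
        return reward, idle, safe

mon = SafetyEnergyMonitor()
for t in [21.5, 25.2, 23.0]:
    print(t, mon.step(t))
```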
9632 Thermal Management of Space Power Electronics Using TLM-3D
Authors: R. Hocine, K. Belkacemi, A. Boukortt, A. Boudjemai
Abstract:
When designing satellites, one of the major issues, aside from designing the primary subsystems, is devising their thermal management, which requires solving different sets of modelling issues. If the satellite is not well conditioned thermally, the other parts of the satellite will reach higher temperatures no matter what; the main issue of thermal modelling for satellite design is therefore making sure that all points of the satellite stay within the temperature limits they are designed for. The insertion of power electronics in aerospace technologies is becoming widespread, and the modern electronic systems used in space must be reliable and efficient, with thermal management unaffected by outer space constraints. Many advanced thermal management techniques that have application in high-power electronic systems have been developed in recent years. This paper presents a Three-Dimensional Modal Transmission Line Matrix (3D-TLM) implementation of transient heat flow in space power electronics. In such components, heat dissipation and good thermal management are essential, and simulation provides the cheapest tool to investigate all aspects of power handling. The 3D-TLM has been successful in modelling heat diffusion problems and has proven to be efficient in terms of stability and handling of complex geometry. The results show a three-dimensional visualisation of self-heating phenomena in a device affected by outer space constraints, and present possible approaches for increasing the heat dissipation capability of the power modules.
Keywords: Thermal management, conduction, heat dissipation, CTE, ceramic, heat spreader, nodes, 3D-TLM.
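A full 3D-TLM solver exchanges incident and reflected pulses through a scattering matrix at each node, which is too long to sketch here; the block below instead uses an explicit finite-difference stencil as a simplified stand-in to illustrate the same kind of transient 3D heat-flow computation. Grid size, material constants, and the heat source are illustrative assumptions.

```python
import numpy as np

# Explicit 3D heat conduction on a small block (finite-difference stand-in
# for the TLM mesh; material values are illustrative, not a real module).
nx = ny = nz = 20
dx = 1e-3                       # 1 mm cells
alpha = 1.2e-4                  # thermal diffusivity (m^2/s), illustrative
dt = 0.2 * dx**2 / (6 * alpha)  # stability margin for the explicit scheme

T = np.full((nx, ny, nz), 25.0)            # ambient start, deg C
q = np.zeros_like(T)
q[9:11, 9:11, 0:2] = 5e4                   # hypothetical die heat source (K/s)

for _ in range(500):
    lap = (
        np.roll(T, 1, 0) + np.roll(T, -1, 0)
        + np.roll(T, 1, 1) + np.roll(T, -1, 1)
        + np.roll(T, 1, 2) + np.roll(T, -1, 2)
        - 6 * T
    ) / dx**2
    T = T + dt * (alpha * lap + q)
    T[0, :, :] = T[-1, :, :] = 25.0        # crude fixed-temperature boundaries
    T[:, 0, :] = T[:, -1, :] = 25.0
    T[:, :, -1] = 25.0                     # heat-spreader face held at ambient

print("hot-spot temperature:", round(T[10, 10, 0], 1), "deg C")
```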
9631 ECG Based Reliable User Identification Using Deep Learning
Authors: R. N. Begum, Ambalika Sharma, G. K. Singh
Abstract:
Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems, and automatic biometric recognition systems are the need of the hour; electrocardiogram (ECG)-based systems are unquestionably the best choice due to their appealing inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art technique for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grow. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using the dense learning framework of CNNs, and explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs, trained on a dataset of recordings collected from eight popular ECG databases. With a worst-case False Acceptance Rate (FAR) of 0.04% and a worst-case False Rejection Rate (FRR) of 5%, the best-performing network achieved an identification accuracy of 99.94%. The best network was also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient, and thus might be implemented in real-time ECG-based human recognition systems.
Keywords: Biometrics, dense networks, identification rate, train/test split ratio.
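The FAR and FRR figures quoted above are threshold-dependent error rates; a minimal sketch of how they are computed from match scores follows, with the score distributions being synthetic stand-ins rather than the paper's data.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR = fraction of impostor attempts accepted;
    FRR = fraction of genuine attempts rejected.
    Scores are similarity values; accept when score >= threshold."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor >= threshold)
    frr = np.mean(genuine < threshold)
    return far, frr

# Hypothetical match scores from an identification network.
rng = np.random.default_rng(3)
genuine = rng.normal(0.9, 0.05, 1000)    # same-person comparisons
impostor = rng.normal(0.4, 0.10, 1000)   # different-person comparisons
for th in (0.6, 0.7, 0.8):
    far, frr = far_frr(genuine, impostor, th)
    print(f"threshold {th}: FAR {far:.2%}, FRR {frr:.2%}")
```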
9630 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management
Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal
Abstract:
The selection of suppliers is a crucial problem in supply chain management, and sustainable supplier selection is the biggest challenge for organizations. Environmental protection and social problems have been of concern to society in recent years, and traditional supplier selection does not consider these factors; therefore, this research work focuses on introducing sustainable criteria into the structure of supplier selection criteria. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions of sustainable development (economic, environmental, and social) need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social wellbeing of the supply chain, and minimize environmental impacts. The problem statement is the selection of suppliers in a sustainable supply chain network by ranking the suppliers against the sustainable criteria identified. The aim of this research is twofold: to find out which sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision-making (MCDM) tools are used to rank both criteria and suppliers. AHP analysis, a technique for efficient decision-making, is used to find ratings for the identified criteria. TOPSIS is then used to rate and rank the suppliers; it is an MCDM method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.
Keywords: Sustainable supply chain management, supplier selection, MCDM tools, AHP analysis, TOPSIS method.
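A minimal sketch of the TOPSIS ranking step described above follows, with hypothetical supplier scores and AHP-style weights as assumptions. The closeness coefficient is 1 for an alternative coinciding with the ideal solution and 0 for one coinciding with the negative ideal.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives: vector-normalise, weight, measure distances to the
    ideal (best) and negative-ideal (worst) solutions, score by closeness."""
    M = np.asarray(matrix, dtype=float)
    W = np.asarray(weights, dtype=float)
    V = M / np.linalg.norm(M, axis=0) * W          # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - nadir, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness coefficient

# Hypothetical suppliers scored on cost (lower better), CO2 (lower better),
# quality and social rating (higher better); weights e.g. from AHP.
scores = [[70, 30, 8, 6],
          [55, 45, 7, 8],
          [60, 25, 9, 5]]
weights = [0.35, 0.25, 0.25, 0.15]
benefit = [False, False, True, True]
cc = topsis(scores, weights, benefit)
print("closeness:", cc.round(3), " best supplier:", int(cc.argmax()) + 1)
```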
9629 Comparative Analysis of Geographical Routing Protocol in Wireless Sensor Networks
Authors: Rahul Malhotra
Abstract:
The field of wireless sensor networks (WSNs) engages many associates in the research community as an interdisciplinary field of interest. This type of network is inexpensive and multifunctional, attributable to advances in micro-electromechanical systems and the explosion and expansion of wireless communications. A mobile ad hoc network is a wireless network without fixed infrastructure or federal management. Due to this infrastructure-less mode of operation, mobile ad hoc networks are gaining popularity. In this work, we have performed a performance study of the two major routing protocols: the Ad hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR) protocols. We have used an accurate simulation model supported by NS2 for this purpose. Our simulation results showed that AODV mitigates the drawbacks of DSDV and provides better performance than DSDV.
Keywords: Routing protocols, mobility, Mobile Ad-hoc Networks, Ad-hoc On-demand Distance Vector, Dynamic Source Routing, Destination Sequence Distance Vector, Quality of Service.
9628 Low Cost Technique for Measuring Luminance in Biological Systems
Abstract:
In this work, the relationship between the melanin content in a tissue and the subsequent absorption of light through that tissue was determined using a digital camera. This technique proved to be simple, cost-effective, efficient, and reliable. Tissue phantom samples were created using milk and soy sauce to simulate the optical properties of melanin in human tissue; increasing the concentration of soy sauce in the milk corresponds to an increase in an individual's melanin content. Two methods were employed to measure the light transmitted through the sample. The first was direct measurement of the transmitted intensity using a conventional lux meter. The second method involved calibrating an ordinary digital camera and using image analysis software to calculate the intensity transmitted through the phantom. The results from these methods were then compared graphically with the theoretical relationship between the intensity of transmitted light and the concentration of absorbers in the sample. Conclusions were then drawn about the effectiveness and efficiency of these low-cost methods.
Keywords: Tissue phantoms, scattering coefficient, albedo, low-cost method.
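The theoretical relationship mentioned above is, for an absorbing sample, a Beer-Lambert-type exponential decay of transmitted intensity with absorber concentration; the sketch below fits that model to hypothetical calibrated camera readings (the numbers are illustrative, not the study's measurements).

```python
import numpy as np

# Hypothetical calibrated mean pixel intensities of light transmitted through
# milk/soy-sauce phantoms at increasing soy-sauce concentration.
conc = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10])    # volume fraction
I = np.array([212.0, 168.0, 131.0, 104.0, 82.0, 65.0])  # camera reading (a.u.)
I0 = I[0]

# Beer-Lambert-type model: I = I0 * exp(-k * c), so ln(I0/I) is linear in c.
absorbance = np.log(I0 / I)
k, intercept = np.polyfit(conc, absorbance, 1)
print(f"effective attenuation coefficient k = {k:.1f} per unit concentration")
print("fit residuals:", (absorbance - (k * conc + intercept)).round(3))
```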
9627 Analysis of Joint Source Channel LDPC Coding for Correlated Sources Transmission over Noisy Channels
Authors: Marwa Ben Abdessalem, Amin Zribi, Ammar Bouallègue
Abstract:
In this paper, a Joint Source Channel (JSC) coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes: one compresses a correlated source, and the second protects it against channel degradation. The original information can be reconstructed at the receiver by a joint decoder, in which the source decoder and the channel decoder run in parallel by transferring extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit-Error Rate (BER) for transmission over an Additive White Gaussian Noise (AWGN) channel and for different source and channel rate parameters. We emphasize how JSC LDPC presents a performance tradeoff depending on the channel state and on the source correlation. We show that JSC LDPC is an efficient solution for relatively low Signal-to-Noise Ratio (SNR) channels, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel.
Keywords: AWGN channel, belief propagation, joint source channel coding, LDPC codes.
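As a scaffold for this kind of study, the sketch below estimates BER by Monte-Carlo simulation of uncoded BPSK over AWGN; it is not the JSC LDPC system itself, whose encoder and belief-propagation joint decoder would replace the bare mapper and slicer shown here.

```python
import numpy as np

rng = np.random.default_rng(4)

def ber_bpsk_awgn(snr_db: float, n_bits: int = 200_000) -> float:
    """Monte-Carlo BER for uncoded BPSK over AWGN (Eb/N0 given in dB).
    A JSC LDPC system would insert encoding before the mapper and
    belief-propagation decoding after the channel."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                   # map 0 -> +1, 1 -> -1
    sigma = np.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10)))
    received = symbols + sigma * rng.normal(size=n_bits)
    decided = (received < 0).astype(int)         # hard-decision slicer
    return np.mean(decided != bits)

for snr in (0, 2, 4, 6):
    print(f"Eb/N0 = {snr} dB -> BER = {ber_bpsk_awgn(snr):.4f}")
```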
9626 Energy Efficient Plant Design Approaches: Case Study of the Sample Building of the Energy Efficiency Training Facilities
Authors: Idil Kanter Otcu
Abstract:
Nowadays, due to the growing problems of energy supply and the drastic reduction of natural non-renewable resources, new applications in the energy sector and steps towards greater efficiency in energy consumption are required. Since buildings account for a large share of energy consumption, increasing structural density causes an increase in energy consumption. This means that energy-efficient approaches to building design, and the integration of new systems using emerging technologies, become necessary in order to curb this consumption. As new systems for the productive use of generated energy are developed, buildings that require less energy to operate, with a rational use of resources, also need to be developed. One solution for reducing the energy requirements of buildings is landscape planning, design, and application. Requirements such as heating, cooling, and lighting can be met with lower energy consumption through planting design, which can help achieve a more efficient and rational use of resources. In this context, rather than a planting design that considers only the ecological and aesthetic features of plants, these considerations should also extend to spatial organization, whereby the relationship between the site and its open spaces is taken into account in the context of climatic elements and planting design. In this way, the planting design can serve an additional purpose. In this study, a landscape design that takes into consideration location, local climate morphology, and solar angle is illustrated on a sample building project.
Keywords: Energy efficiency, landscape design, plant design, xeriscape landscape.
9625 A Survey of Sentiment Analysis Based on Deep Learning
Authors: Pingping Lin, Xudong Luo, Yifan Fan
Abstract:
Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as significant e-commerce websites, generate a massive amount of comments, which can be used to analyse people’s opinions or emotions. Existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely heavily on sentiment dictionaries or large amounts of labelled data; the third overcomes both problems, so in this paper we focus on it. Specifically, we survey various sentiment analysis methods based on convolutional neural networks, recurrent neural networks, long short-term memory, deep neural networks, deep belief networks, and memory networks. We compare their features, advantages, and disadvantages. Also, we point out the main problems of these methods, which may be worthy of careful study in the future. Finally, we examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis.
Keywords: Natural language processing, sentiment analysis, document analysis, multimodal sentiment analysis, deep learning.
9624 Design Techniques and Implementation of Low Power High-Throughput Discrete Wavelet Transform Filters for JPEG 2000 Standard
Authors: Grigorios D. Dimitroulakos, N. D. Zervas, N. Sklavos, Costas E. Goutis
Abstract:
In this paper, the implementation of low-power, high-throughput convolutional filters for the one-dimensional Discrete Wavelet Transform (DWT) and its inverse is presented. The analysis filters have already been used for the implementation of a high-performance DWT encoder [15] with minimum memory requirements for the JPEG 2000 standard. This paper presents the design techniques and the implementation of the convolutional filters included in the JPEG 2000 standard for the forward and inverse DWT, achieving low-power operation, high performance, and reduced memory accesses. Moreover, the filters have the ability to perform progressive computations so as to minimize the buffering between the decomposition and reconstruction phases. The experimental results illustrate the filters' low-power, high-throughput characteristics as well as their memory-efficient operation.
Keywords: Discrete Wavelet Transform, JPEG 2000 standard, VLSI design, low-power throughput-optimized filters.
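For reference, the reversible 5/3 filter pair of JPEG 2000 can be written in a few lines using the lifting scheme; the sketch below uses clamped edges as a simplified stand-in for the symmetric extension specified by the standard (the lifting structure is exactly invertible either way).

```python
def dwt53_forward(x):
    """One level of the JPEG 2000 reversible 5/3 DWT via lifting.
    Even-length integer signal; edges are clamped (simplified extension;
    the standard specifies symmetric extension)."""
    e, o = x[0::2], x[1::2]
    n = len(o)
    d = [o[i] - ((e[i] + e[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    s = [e[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d                      # low-pass (approx), high-pass (detail)

def dwt53_inverse(s, d):
    """Exact inverse: the lifting steps are undone in reverse order, so the
    transform is perfectly reversible whatever extension rule is used,
    as long as forward and inverse use the same one."""
    n = len(d)
    e = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    o = [d[i] + ((e[i] + e[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = e, o
    return x

x = [12, 14, 15, 15, 14, 12, 10, 9]
s, d = dwt53_forward(x)
print("approx:", s, "detail:", d)
assert dwt53_inverse(s, d) == x      # perfect reconstruction
```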
9623 Observations about the Principal Components Analysis and Data Clustering Techniques in the Study of Medical Data
Authors: Cristina G. Dascâlu, Corina Dima Cozma, Elena Carmen Cotrutz
Abstract:
The statistical analysis of medical data often requires the use of special techniques because of the particularities of these data. Principal components analysis and data clustering are two statistical data mining methods that are very useful in the medical field: the first as a method to decrease the number of studied parameters, and the second as a method to analyze the connections between diagnosis and the data about the patient's condition. In this paper, we investigate the implications of a specific data analysis technique: data clustering preceded by a selection of the most relevant parameters, made using principal components analysis. Our assumption was that, by using principal components analysis before data clustering in order to select and classify only the most relevant parameters, the accuracy of clustering would be improved. The practical results, however, showed the opposite: the clustering accuracy decreases, by a percentage approximately equal to the percentage of information loss reported by the principal components analysis.
Keywords: Data clustering, medical data, principal components analysis.
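The protocol compared in this paper, clustering on all parameters versus clustering on the leading principal components, can be reproduced with the sketch below; the synthetic data and the accuracy measure (adjusted Rand index) are illustrative choices, so this mirrors the experimental setup rather than the paper's negative result.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for a medical dataset: 20 parameters, 3 diagnoses.
X, y = make_blobs(n_samples=300, n_features=20, centers=3, random_state=0)

# Clustering on all parameters.
labels_full = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Clustering after PCA selection of the most relevant components.
X_red = PCA(n_components=2).fit_transform(X)
labels_pca = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_red)

print("ARI, all parameters:", adjusted_rand_score(y, labels_full))
print("ARI, after PCA     :", adjusted_rand_score(y, labels_pca))
```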
9622 Modeling of Flood Mitigation Structures for Sarawak River Sub-basin Using InfoWorks River Simulation (RS)
Authors: Rosmina Bustami, Charles Bong, Darrien Mah, Afnie Hamzah, Marina Patrick
Abstract:
The distressing flood scenarios that have occurred in recent years in the areas surrounding the Sarawak River have damaged properties and indirectly disrupted productive activities. This study reconstructs a 100-year flood event that took place in this river basin. The Sarawak River Sub-basin was chosen and modeled using a one-dimensional hydrodynamic modeling approach in InfoWorks River Simulation (RS), in combination with a Geographical Information System (GIS). This produces the hydraulic response of the river and its floodplains under extreme flooding conditions. With different parameters introduced to the model, correlations between observed and simulated data are between 79% and 87%. Using the best calibrated model, flood mitigation structures were imposed along the sub-basin, and analysis was done based on the model simulation results. The results show that the proposed retention ponds constructed along the sub-basin provide the most efficient flood reduction, at 34.18%.
Keywords: Flood, flood mitigation structure, InfoWorks RS, retention pond, Sarawak River sub-basin.
9621 Using Stresses Obtained from a Low Detailed FE Model and Located at a Reference Point to Quickly Calculate the Free-edge Stress Intensity Factors of Bonded Joints
Abstract:
The present study focuses on methods allowing a convenient and quick calculation of the stress intensity factors (SIFs) in order to predict the static adhesive strength of bonded joints. A new SIF calculation method is proposed, based on the stresses obtained from a FE model at a reference point located in the adhesive layer, at equal distance from the free edge and from the two interfaces. It is shown that, even when limiting ourselves to the two main modes, i.e. the opening and shearing modes, and using stress values resulting from a low-detailed FE model, an efficient calculation of the peeling stress at the adhesive-substrate corners can be obtained in this way. The proposed method is interesting in that it can be the basis of a prediction tool that allows the designer to quickly evaluate the SIFs characterizing a particular application without developing a detailed analysis.
Keywords: Adhesive layer, bonded joints, free-edge corner, stress intensity factor.
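The recovery of a SIF from the stress at a single reference point rests on the power-law form of the singular corner field; in the standard asymptotic notation (the paper's exact normalisation may differ):

```latex
\sigma(r) \approx K\, r^{\lambda - 1}
\qquad\Longrightarrow\qquad
K \approx \sigma(d)\, d^{\,1-\lambda}
```

where σ(d) is the peeling (or shearing) stress taken from the coarse FE model at the reference point, d is its distance from the free edge, and 0 < λ < 1 is the singularity order set by the local corner geometry and material pair.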
9620 Analysis of Different Resins in Web-to-Flange Joints
Authors: W. F. Ribeiro, J. L. N. Góes
Abstract:
Industrial processing gives engineered wood products features absent in solid wood: a homogeneous structure, fewer defects, improved physical and mechanical properties, resistance to bio-deterioration, and better dimensional stability, improving quality and increasing the reliability of wood structures. These features, combined with the use of fast-growing trees, make them environmentally sound products and ensure a strong consumer market. Wood I-joists are manufactured industrially by bonding flange and web profiles. An important aspect of the production of wooden I-beams is the adhesive joint that bonds the web to the flange: adhesives can effectively transfer and distribute stresses, thereby increasing the strength and stiffness of the composite. The objective of this study is to evaluate different resins in shear-strain specimens, with the aim of identifying the most efficient resin and the possibility of using national products, reducing the manufacturing cost. A literature review was first conducted to establish the geometry and materials generally used; eight national resins were then selected and analyzed, and six specimens were produced for each.
Keywords: Engineered wood products, structural resin, wood i-joist.
9619 Oil Extraction from Microalgae Dunaliella sp. by Polar and Non-Polar Solvents
Authors: A. Zonouzi, M. Auli, M. Javanmard Dakheli, M. A. Hejazi
Abstract:
Microalgae are tiny photosynthetic plants. Nowadays, microalgae are used as nutrient-dense foods and sources of fine chemicals. They contain significant amounts of lipids, carotenoids, vitamins, protein, minerals, chlorophyll, and pigments. Oil extraction from algae is a hotly debated topic currently, because introducing an efficient method could decrease the process cost, which can determine the sustainability of algae-based foods. Research shows that solvent extraction using a chloroform/methanol (2:1) mixture is one of the most efficient methods for oil extraction from algal cells, but both methanol and chloroform are toxic solvents, so the extracted oil is not suitable for food applications. In this paper, the effect of two food-grade solvents (hexane and hexane/isopropanol) on the oil extraction yield from the microalga Dunaliella sp. was investigated, and the results were compared with the chloroform/methanol (2:1) extraction yield. It was observed that the oil extraction yields using hexane, hexane/isopropanol (3:2), and the chloroform/methanol (2:1) mixture were 5.4, 13.93, and 17.5 (% w/w, dry basis), respectively. The fatty acid profile derived from GC showed that palmitic (36.62%), oleic (18.62%), and stearic (19.08%) acids form the main portion of the fatty acid composition of the Dunaliella sp. oil. It was concluded that the addition of isopropanol as a polar solvent could increase the extraction yield significantly: isopropanol dissolves cell wall phospholipids and enhances the release of intracellular lipids, which improves the access of hexane to the fatty acids.
Keywords: Fatty acid profile, Microalgae, Oil extraction, Polar solvent.
9618 Design Process of the Fixing Pipes in the Guide Pipe Anchor System for Cable-Stayed Bridges
Authors: Jinwoong Choi, Sun-Kyu Park, Sungnam Hong
Abstract:
For the efficient and safe use of cable-stayed bridges, a design based on a detailed local analysis of the cable anchor system is required, and a theoretical design process for the anchor system should be prepared and reviewed. Generally, the size of the fixing pipe in the anchor system is decided according to the specifications prepared by cable-manufacturing companies; accordingly, there is difficulty in determining the initial inner diameters of the fixing pipes, and there is no choice but to use products with the existing sizes. In this study, the existing design process for the fixing pipe, a part of the guide pipe anchor in the cable anchor system, is reviewed, a formula determining the thickness of the fixing pipe is proposed, and the suggested equation is compared with the results of existing designs to verify its convenience and validity.
Keywords: Cable-stayed bridge, guide pipe anchor system, fixing pipe, theoretical design process.