Search results for: Storage lesion.
150 Knowledge Management and Tourism: An Exploratory Study Applied to Travel Agents in Egypt
Authors: Mohammad Soliman, Mohamed A. Abou-Shouk
Abstract:
Knowledge management focuses on the development, storage, retrieval, and dissemination of information and expertise. It has become an important tool to improve performance in tourism enterprises, including improving decision-making, developing customer services, and increasing sales and profits. Knowledge management adoption depends on human, organizational, and technological factors. This study aims to explore the concept of knowledge management among travel agents in Egypt. It explores the requirements of adoption and its impact on performance in these agencies. The study targets Category A travel agents in Egypt, and its population encompasses Category A travel agents with an online presence. An online questionnaire is used to collect data from managers of travel agents. This study is useful for travel agents who urgently need to restructure their intermediary role and support their survival in the global travel market. The study sheds light on the requirements of adoption and the expected impact on performance. This could help travel agents assess their situation and determine the extent to which they are ready to adopt knowledge management. The study contributes to knowledge by providing insights from the tourism sector in a developing country where the concept of knowledge management is still in its infancy.
Keywords: Benefits, determinants, Egypt, knowledge management, travel agents.
149 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, the Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: Active Contour, Bayesian, Echocardiographic image, Feature vector.
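As a small illustration of the Bayesian classification layer named above, the sketch below trains a Gaussian naive Bayes classifier on synthetic feature vectors; the three features, their values, and the labels are invented stand-ins, not the paper's echocardiographic data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Stand-in feature vectors (e.g. chamber area, wall thickness, mean Doppler velocity)
# with binary labels; the real vectors would come from the framework's feature database.
rng = np.random.default_rng(42)
normal   = rng.normal([5.0, 1.0, 0.6], 0.3, size=(80, 3))
abnormal = rng.normal([7.5, 1.6, 1.2], 0.4, size=(80, 3))
X = np.vstack([normal, abnormal])
y = np.array([0] * 80 + [1] * 80)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_train, y_train)   # Bayesian classification step of the framework
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```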
148 Internet Optimization by Negotiating Traffic Times
Authors: Carlos Gonzalez
Abstract:
This paper describes a system to optimize the use of the internet by clients requiring downloads of videos at peak hours. The system consists of a web server belonging to a provider of video content, a provider of internet communications, and a software application running on a client's computer. The client, using the application software, communicates to the video provider a list of the client's future video demands. The video provider calculates which videos are going to be most in demand for download in the immediate future and requests from the internet provider the most suitable hours for the downloading. The download times are sent to the application software, which uses the pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos are saved in a special protected section of the user's hard disk, which is accessed only by the application software on the client's computer. When the client is ready to watch a video, the application searches the list of videos currently stored in that area of the hard disk; if the video exists, it is used directly without the need for internet access. We found that the best way to optimize video download traffic is through negotiation between the internet communication provider and the video content provider.
Keywords: Internet optimization, video download, future demands, secure storage.
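The client-side behaviour described above can be pictured with a short sketch; the store path, file naming, and the negotiated off-peak window below are hypothetical placeholders, not values from the paper.

```python
import os
from datetime import datetime, time

VIDEO_STORE = "protected_store"                  # hypothetical protected area on the client's disk
NEGOTIATED_WINDOW = (time(2, 0), time(5, 0))     # assumed off-peak hours agreed by the providers

def play_or_schedule(video_id, now=None):
    """Client-side sketch: play a locally pre-downloaded video if present,
    otherwise report that it must wait for the negotiated off-peak download window."""
    now = now or datetime.now().time()
    local_copy = os.path.join(VIDEO_STORE, f"{video_id}.mp4")
    if os.path.exists(local_copy):
        return f"playing {local_copy} (no internet access needed)"
    start, end = NEGOTIATED_WINDOW
    if start <= now <= end:
        return f"downloading {video_id} now, inside the negotiated window"
    return f"{video_id} queued for download between {start} and {end}"

print(play_or_schedule("documentary_042"))
```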
147 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The motivation for this work is the question of how many devote themselves to discovery in a world of science where much has been discerned and revealed, but where much also remains unknown. The building blocks of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. Based on the given key, the string is divided into several substrings, each of length k characters. The next step encodes each substring in the list. Encoding is based on the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b + 1, it returns to its initial value. The algorithm proceeds in the same way until the last substring in the list is processed. Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character x is not used when the number of characters in a substring does not match the expected length. The algorithm is simple to implement, but it remains open whether it outperforms other methods in terms of execution time and storage space.
Keywords: Ciphering and deciphering, Authentic Algorithm, Polyalphabetic Cipher, Random Key, methods comparison.
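A minimal Python sketch of the scheme as described is given below; the wrap-around condition and the handling of a short final substring follow the abstract's wording, and the remaining details (lower-casing the input, passing non-alphabetic characters through) are assumptions.

```python
import random
import string

ALPHABET = string.ascii_lowercase

def caesar_shift(text, shift):
    """Caesar-shift alphabetic characters by `shift` positions (wrapping around z -> a)."""
    out = []
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)          # non-alphabetic characters pass through unchanged
    return "".join(out)

def latin_djokovic_encrypt(plaintext, a=3):
    """Sketch of the described scheme: a constant key k in [a, a+3], substrings of
    length k, and a Caesar shift that starts at k, grows by 1 per substring, and
    wraps back to k once it exceeds b + 1."""
    b = a + 3
    k = random.randint(a, b)                      # key, kept constant for the whole run
    text = plaintext.lower()
    blocks = [text[i:i + k] for i in range(0, len(text), k)]   # last block may be shorter
    shift, cipher = k, []
    for block in blocks:
        cipher.append(caesar_shift(block, shift))
        shift += 1
        if shift > b + 1:                         # return to the initial key value
            shift = k
    return k, "".join(cipher)

def latin_djokovic_decrypt(ciphertext, k, a=3):
    """Invert the encryption by applying the same shift sequence with negative sign."""
    b = a + 3
    blocks = [ciphertext[i:i + k] for i in range(0, len(ciphertext), k)]
    shift, plain = k, []
    for block in blocks:
        plain.append(caesar_shift(block, -shift))
        shift += 1
        if shift > b + 1:
            shift = k
    return "".join(plain)

key, ct = latin_djokovic_encrypt("attack at dawn")
print(ct, "->", latin_djokovic_decrypt(ct, key))
```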
146 Analysis of Performance of 3T1D Dynamic Random-Access Memory Cell
Authors: Nawang Chhunid, Gagnesh Kumar
Abstract:
On-chip memories consume a significant portion of the overall die space and power in modern microprocessors. On-chip caches depend on Static Random-Access Memory (SRAM) cells, with technology scaling occurring as per Moore's law. Unfortunately, the scaling is affecting stability, performance, and leakage power, which will become major problems for future SRAMs in aggressive nanoscale technologies due to increasing device mismatch and variations. The 3T1D Dynamic Random-Access Memory (DRAM) cell is a non-destructive-read DRAM cell with three transistors and a gated diode. In the 3T1D DRAM cell, the gated diode (D1) acts as a storage device and also as an amplifier, which leads to fast read access. Due to its high tolerance to process variation, high density, and lower memory cost compared to the 6T SRAM cell, it is widely used in advanced microprocessors for on-chip data and program memory. In the present paper, it is shown that the 3T1D DRAM cell can perform better in terms of read access time than 6T, 4T, and 3T SRAM cells.
Keywords: DRAM cell, read access time, write access time, retention time, Tanner EDA tool, average power dissipation.
145 Frequency-Variation Based Method for Parameter Estimation of Transistor Amplifier
Authors: Akash Rathee, Harish Parthasarathy
Abstract:
In this paper, a frequency-variation based method is proposed for transistor parameter estimation in a common-emitter transistor amplifier circuit. We design an algorithm to estimate the transistor parameters based on noisy measurements of the output voltage when the input voltage is a sine wave of variable frequency and constant amplitude. The common-emitter amplifier circuit has been modelled using the transistor Ebers-Moll equations, and the perturbation technique has been used to separate the linear and nonlinear parts of the Ebers-Moll equations. This model of the amplifier is used to determine the amplitude of the output sinusoid as a function of the frequency and the parameter vector. Then, applying the proposed method to the frequency components, the transistor parameters are estimated. Compared to the conventional time-domain least squares method, the proposed method requires much less data storage and results in more accurate parameter estimation, as it exploits information in the time and frequency domains simultaneously. The proposed method can be used for parameter estimation of an analog device over its operating range of frequencies, as it uses output data collected at different frequencies.
Keywords: Perturbation technique, parameter estimation, frequency-variation based method.
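The idea of fitting parameters to the measured output amplitude as a function of frequency can be illustrated with a least-squares sketch; the single-pole gain model below is a stand-in, not the paper's Ebers-Moll based expression, and the parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gain_model(f, k, fc):
    """Output amplitude vs. frequency for a single-pole amplifier stage:
    |A(f)| = k / sqrt(1 + (f/fc)^2). Used only to illustrate frequency-domain estimation."""
    return k / np.sqrt(1.0 + (f / fc) ** 2)

rng = np.random.default_rng(3)
f = np.logspace(1, 5, 40)                       # 10 Hz .. 100 kHz sweep
true_k, true_fc = 120.0, 8e3
amplitude = gain_model(f, true_k, true_fc) * (1 + 0.02 * rng.normal(size=f.size))  # noisy measurements

params, _ = curve_fit(gain_model, f, amplitude, p0=[50.0, 1e3])
print("estimated gain and corner frequency:", params.round(1))
```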
144 An Agent Based Dynamic Resource Scheduling Model with FCFS-Job Grouping Strategy in Grid Computing
Authors: Raksha Sharma, Vishnu Kant Soni, Manoj Kumar Mishra, Prachet Bhuyan, Utpal Chandra Dey
Abstract:
Grid computing is a group of clusters connected over high-speed networks that involves coordinating and sharing computational power, data storage, and network resources operating across dynamic and geographically dispersed locations. Resource management and job scheduling are critical tasks in grid computing. Resource selection becomes challenging due to the heterogeneity and dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which is well suited to large-scale applications; the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency, and load balancing ability of the grid.
Keywords: Agent, Grid Computing, Job Grouping, Max Heap Tree (MHT), Resource Scheduling.
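A compact sketch of the two ingredients, max-heap based resource selection and FCFS job grouping, is given below; the resource capacities, job lengths, and the rule that a group must fit within one scheduling interval are illustrative assumptions, not the paper's exact model.

```python
import heapq
from collections import deque

# Toy resources: (processing capacity in MIPS, name). heapq is a min-heap,
# so capacities are negated to obtain max-heap behaviour (root = most capable).
resources = [(-1200, "R1"), (-800, "R2"), (-2000, "R3")]
heapq.heapify(resources)

# Jobs arrive first-come-first-served; each job has a length in million instructions (MI).
jobs = deque([("J1", 300), ("J2", 500), ("J3", 250), ("J4", 900), ("J5", 150)])

GRANULARITY = 1.0  # assumed scheduling interval in seconds

def schedule_fcfs_grouping(jobs, resources):
    """Sketch: pop the max-heap root as the target resource, then group jobs in FCFS
    order until the group's total length reaches what that resource can process
    within one granularity interval (an assumed grouping rule)."""
    schedule = []
    while jobs:
        capacity, name = heapq.heappop(resources)
        budget = -capacity * GRANULARITY            # MI the resource can handle per interval
        group, used = [], 0
        while jobs and used + jobs[0][1] <= budget:
            job_id, length = jobs.popleft()
            group.append(job_id)
            used += length
        if not group:                               # job larger than the budget: send it alone
            job_id, length = jobs.popleft()
            group, used = [job_id], length
        schedule.append((name, group, used))
        heapq.heappush(resources, (capacity, name)) # resource becomes available again
    return schedule

for resource, group, load in schedule_fcfs_grouping(jobs, resources):
    print(f"{resource} <- {group} ({load} MI)")
```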
143 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN
Authors: N. Muthukumaran, R. Ravi
Abstract:
A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication for lossy and lossless transmission through a wireless sensor network is presented. Images have a considerably higher storage requirement than text. While transmitting multimedia content, there is a chance of packets being dropped due to noise and interference. At the receiver end, the packets that carry valuable information might be damaged or lost due to noise, interference, and congestion. In order to prevent valuable information from being dropped, various retransmission schemes have been proposed. In the proposed scheme, QTD is used. QTD is an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves the analysis of parameters such as compression ratio, peak signal-to-noise ratio, mean square error, and bits per pixel in the compressed image, as well as an analysis of the difficulties during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as visual quality, and the algorithm is implemented in MATLAB 7.1 and the NS2 simulator.
Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.
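The quad tree decomposition step can be sketched as a recursive split that stops when a block is homogeneous; the intensity-range threshold and the square power-of-two image below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def quadtree_decompose(img, threshold=10, min_size=2):
    """Sketch of quad tree decomposition: recursively split a square image block into
    four quadrants until each block is homogeneous, i.e. its intensity range falls
    below `threshold` (assumed homogeneity criterion)."""
    blocks = []

    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.max() - block.min() <= threshold:
            blocks.append((y, x, size, float(block.mean())))   # leaf: store the mean value
            return
        half = size // 2
        for dy, dx in ((0, 0), (0, half), (half, 0), (half, half)):
            split(y + dy, x + dx, half)

    split(0, 0, img.shape[0])
    return blocks

# Toy 8x8 image: a bright square on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[4:, 4:] = 200
leaves = quadtree_decompose(img)
print(f"{len(leaves)} homogeneous blocks")            # far fewer values than 64 pixels
print(f"approximate compression ratio: {img.size / len(leaves):.1f}")  # pixels per stored value
```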
142 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method
Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage
Abstract:
Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a way of online ECM parameter identification using a continuous time (CT) estimation method. The CT estimation method has several advantages over discrete time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least square method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
Keywords: Equivalent circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square.
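For readers unfamiliar with ECMs, the sketch below simulates the terminal voltage of a first-order RC equivalent circuit under a current pulse; the parameter values are illustrative, and the sketch does not implement the paper's continuous-time linear integral filter identification.

```python
import numpy as np

def simulate_1rc_ecm(current, dt, r0, r1, c1, ocv):
    """Simulate the terminal voltage of a first-order RC equivalent circuit model:
    V = OCV - I*R0 - V1, with dV1/dt = I/C1 - V1/(R1*C1). Parameter values used below
    are illustrative, not identified from the paper's test data."""
    tau = r1 * c1
    alpha = np.exp(-dt / tau)
    v1, voltage = 0.0, []
    for i in current:
        v1 = alpha * v1 + r1 * (1.0 - alpha) * i    # exact zero-order-hold discretization of the RC branch
        voltage.append(ocv - i * r0 - v1)
    return np.array(voltage)

# 1 Hz sampling, a 10 A discharge pulse between t = 20 s and t = 80 s.
dt, t = 1.0, np.arange(0, 120, 1.0)
current = np.where((t >= 20) & (t < 80), 10.0, 0.0)
v = simulate_1rc_ecm(current, dt, r0=0.005, r1=0.010, c1=2000.0, ocv=3.7)
print("voltage sag at end of pulse:", round(3.7 - v[79], 4), "V")
```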
141 Neural Networks-Based Acoustic Annoyance Model for Laptop Hard Disk Drive
Authors: Yi Chao Ma, Cheng Siong Chin, Wai Lok Woo
Abstract:
Over the last decade, there has been rapid growth in digital multimedia, such as high-resolution media files and three-dimensional movies. Hence, there is a need for large digital storage such as the Hard Disk Drive (HDD), and users expect a quieter HDD in their laptops. In this paper, a jury test has been conducted on a group of 34 people, 17 of whom are students representing potential consumers, with the remainder being engineers who know the HDD. A total of 13 HDD sound samples were selected from over a hundred HDD noise recordings. These samples were selected based on an agreed subjective feeling. The samples were played to the participants using a head acoustics playback system, which enabled them to experience an environment as similar as possible to the one in which the recordings were made. Analysis of the results indicates that different groups have different perceptions of the noises. Two neural network-based acoustic annoyance models are established based on backpropagation neural networks. Four psychoacoustic metrics, loudness, sharpness, roughness, and fluctuation strength, are used as the inputs of the model, and the subjective evaluation results are taken as the output. The developed models are reasonably accurate in simulating both training and test samples.
Keywords: Hard disk drive noise, jury test, neural network model, psychoacoustic annoyance.
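The modelling step can be illustrated with a small backpropagation-trained network mapping the four psychoacoustic metrics to an annoyance score; the data below are synthetic stand-ins for the jury results, and the single-hidden-layer architecture is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows are [loudness, sharpness, roughness, fluctuation strength],
# targets are annoyance scores. Real values would come from the jury test and recordings.
rng = np.random.default_rng(0)
X = rng.uniform([1.0, 0.8, 0.01, 0.01], [6.0, 2.5, 0.30, 0.25], size=(60, 4))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + 4.0 * X[:, 2] + 3.0 * X[:, 3] + rng.normal(0, 0.1, 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small backpropagation-trained network (one hidden layer, an assumed architecture).
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```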
140 Comprehensive Regional Drought Assessment Index
Authors: A. Zeynolabedin, M. A. Olyaei, B. Ghiasi
Abstract:
Drought is an inevitable part of the earth's climate. It occurs regularly, with no clear warning and without recognizing borders. In addition, its impact is cumulative and not immediately discernible. Iran is located in a semi-arid region where droughts occur periodically as a natural hazard. The Standardized Precipitation Index (SPI), Surface Water Supply Index (SWSI), and Palmer Drought Severity Index (PDSI) are three well-known indices which describe drought severity; each has its own advantages and disadvantages and can be used for specific types of drought. These indices take into account factors such as precipitation, reservoir storage and discharge, temperature, and potential evapotranspiration in determining drought severity. In this paper, all three indices are first calculated for the Aharchay river watershed, located in the northwestern part of Iran in East Azarbaijan province. Next, based on two other important parameters, groundwater level and solar radiation, two new indices are defined. Finally, considering all five aforementioned indices, a combined drought index (CDI) is presented and calculated for the region. This combined index is based on the meteorological, hydrological, and agricultural features of the region. The results show that the most severe drought condition in the Aharchay watershed happened in June 2004. The results of this study can be used for drought monitoring and mitigation planning.
Keywords: Drought, index variation, regional assessment, monitoring.
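One simple way to picture a combined index is to standardize each component and take a weighted sum; the sketch below does exactly that with equal weights and toy data, which are assumptions rather than the paper's actual combination scheme.

```python
import numpy as np

def standardize(series):
    """Convert a raw indicator series to z-scores so indices on different scales can be combined."""
    series = np.asarray(series, dtype=float)
    return (series - series.mean()) / series.std()

def combined_drought_index(indices, orientation, weights=None):
    """Sketch of a combined drought index: a weighted sum of standardized components.
    orientation[name] is +1 if larger values mean wetter conditions and -1 if larger
    values mean drier conditions; equal weights are an assumption, not the paper's scheme."""
    names = list(indices)
    z = np.vstack([orientation[n] * standardize(indices[n]) for n in names])
    w = np.full(len(names), 1.0 / len(names)) if weights is None else np.asarray(weights, float)
    return w @ z          # lower CDI = more severe drought

# Toy monthly values for illustration only (not Aharchay data).
monthly = {
    "SPI":         [0.4, -0.2, -1.1, -1.8, -0.9, 0.1],
    "SWSI":        [0.6,  0.1, -0.8, -1.5, -1.0, 0.0],
    "PDSI":        [0.5, -0.1, -1.0, -2.0, -1.2, 0.2],
    "groundwater": [1.2,  1.0,  0.7,  0.2,  0.4, 0.9],   # level above a datum (m)
    "radiation":   [-15, -10,   5,   20,   10,  -5],     # anomaly (W/m^2)
}
orientation = {"SPI": 1, "SWSI": 1, "PDSI": 1, "groundwater": 1, "radiation": -1}
cdi = combined_drought_index(monthly, orientation)
print("most severe month (0-based):", int(np.argmin(cdi)), "CDI values:", cdi.round(2))
```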
139 Application of GIS-Based Construction Engineering: An Electronic Document Management System
Authors: Mansour N. Jadid
Abstract:
This paper describes the implementation of a GIS to provide decision support for successfully monitoring the movement and storage of materials, hence ensuring that finished products travel from the point of origin to the destination construction site through the supply-chain management (SCM) system. This system ensures the efficient operation of suppliers, manufacturers, and distributors by determining the shortest path from the point of origin to the final destination to reduce construction costs, minimize time, and enhance productivity. These systems are essential to the construction industry because they reduce costs and save time, thereby improving productivity and effectiveness. This study describes a typical supply-chain model and a geographical information system (GIS)-based SCM that focuses on implementing an electronic document management system, which maps the application framework to integrate geodetic support with the supply-chain system. This process provides guidance for locating the nearest suppliers to fill the information needs of project members in different locations. Moreover, this study illustrates the use of a GIS-based SCM as a collaborative tool in innovative methods for implementing Web mapping services, as well as aspects of their integration by generating an interactive GIS platform for the construction industry.
Keywords: Construction, coordinate, engineering, GIS, management, map.
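The shortest-path element of the system can be illustrated with a plain Dijkstra sketch over a made-up supplier/depot/site network; the GIS integration itself is not shown, and all node names and distances are hypothetical.

```python
import heapq

def dijkstra(graph, source, target):
    """Minimal Dijkstra shortest-path sketch illustrating the supplier-to-site routing idea."""
    dist = {source: 0.0}
    prev = {}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                                   # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    path, node = [target], target
    while node != source:                              # reconstruct the route
        node = prev[node]
        path.append(node)
    return dist[target], list(reversed(path))

# Hypothetical road network: supplier, two depots, and the construction site (distances in km).
graph = {
    "supplier": [("depot_A", 12.0), ("depot_B", 7.5)],
    "depot_A":  [("site", 9.0)],
    "depot_B":  [("depot_A", 3.0), ("site", 16.0)],
}
km, route = dijkstra(graph, "supplier", "site")
print(f"{' -> '.join(route)}: {km} km")
```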
138 Tidal Data Analysis using ANN
Authors: Ritu Vijay, Rekha Govil
Abstract:
The design of a complete expansion that allows for compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation, and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function (RBF) networks, which make use of a Gaussian activation function, are also shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis, and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent, and reconstruct the given discrete data of a signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANN is a recent addition to such techniques. In the present paper we describe the time series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
Keywords: ANN, RBF, Tidal Data.
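A minimal sketch of representing a tidal-like series with Gaussian RBFs is given below; the synthetic semi-diurnal signal, the number of centres, and the kernel width are illustrative choices, not the paper's configuration.

```python
import numpy as np

def rbf_fit(t, y, centers, width):
    """Fit RBF weights by least squares: y ≈ Φ w, with Φ_ij = exp(-(t_i - c_j)^2 / (2σ^2))."""
    Phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(t, centers, width, w):
    Phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    return Phi @ w

# Toy "tidal" record: a semi-diurnal sinusoid (period ~12.42 h) plus noise.
t = np.linspace(0, 72, 145)                        # 72 hours, half-hour samples
y = 1.2 * np.sin(2 * np.pi * t / 12.42) + 0.05 * np.random.default_rng(1).normal(size=t.size)

centers = np.linspace(0, 72, 25)                   # assumed number and placement of RBF centres
w = rbf_fit(t, y, centers, width=3.0)
y_hat = rbf_predict(t, centers, 3.0, w)
print("RMS representation error:", round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 4))
```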
137 Formulation of Mortars with Marine Sediments
Authors: Nor-Edine Abriak, Mouhamadou Amar, Mahfoud Benzerzour
Abstract:
The transition to a more sustainable economy requires a reduction in the consumption of raw materials for equivalent production. The recovery of byproducts, and especially of dredged sediment, as a mineral addition in cement matrices represents an alternative that reduces raw material consumption and the construction sector's carbon footprint. However, the efficient use of sediment requires adequate and optimal treatment. Several processing techniques have so far been applied in order to improve some physicochemical properties. Heat treatment by calcination was effective in removing the organic fraction and activating the pozzolanic properties. In this article, the effects of an optimized heat treatment of marine sediments on the physico-mechanical and environmental properties of mortars are shown. A key finding is that the optimal substitution of a portion of cement by sediments calcined at 750 °C helps to maintain or improve the mechanical properties of the cement matrix in comparison with a standard reference mortar. The use of calcined sediment enhances mortar behavior in terms of mechanical strength and durability. From an environmental and life cycle point of view, mortars formulated with treated sediments are considered inert with respect to the French inert waste storage facility reference (ISDI).
Keywords: Sediment, calcination, cement, reuse.
136 Human Face Detection and Segmentation using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper we propose a novel method for human face segmentation using the elliptical structure of the human head. It makes use of the information present in the edge map of the image. In this approach we use the fact that the eigenvalues of the covariance matrix represent the elliptical structure: the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The other elliptical parameters are used to identify the centre and orientation of the face. Since an elliptical Hough transform requires a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT, as it squeezes out zero elements and stores only a small number of non-zero elements, giving an advantage in storage space and computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate positions of the circumference pixels for occluded and distorted ellipses are identified using Bresenham's raster scan algorithm, which uses geometrical symmetry properties. This method does not require the evaluation of tangents for curvature contours, which are very sensitive to noise. The method has been evaluated on several images with different face orientations.
Keywords: Circular Hough Transform, Covariance matrix, Eigenvalues, Elliptical Hough Transform, Face segmentation, Raster Scan Algorithm.
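The covariance-eigenvalue step can be sketched directly: the eigenvalues of the covariance of the contour points give the axial lengths and the leading eigenvector gives the orientation. The scaling between eigenvalues and semi-axes below assumes boundary points sampled uniformly in the parametric angle, and the synthetic contour is invented for illustration.

```python
import numpy as np

def ellipse_from_edge_points(points):
    """Sketch: estimate the ellipse centre, axial lengths and orientation from the
    covariance matrix of edge points. For boundary points sampled uniformly in the
    parametric angle, each eigenvalue equals (semi-axis)^2 / 2, the scaling assumed here."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))   # ascending eigenvalues
    semi_minor, semi_major = np.sqrt(2.0 * eigvals)
    major_dir = eigvecs[:, 1]                                      # eigenvector of the large eigenvalue
    orientation = np.degrees(np.arctan2(major_dir[1], major_dir[0])) % 180.0
    return centre, semi_major, semi_minor, orientation

# Synthetic "head contour": semi-axes 40 and 25 px, rotated 30 degrees, centred at (100, 80).
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
a, b, phi = 40.0, 25.0, np.radians(30.0)
x = 100 + a * np.cos(theta) * np.cos(phi) - b * np.sin(theta) * np.sin(phi)
y = 80 + a * np.cos(theta) * np.sin(phi) + b * np.sin(theta) * np.cos(phi)

centre, semi_major, semi_minor, angle = ellipse_from_edge_points(np.column_stack([x, y]))
print(centre.round(1), round(semi_major, 1), round(semi_minor, 1), round(angle, 1))
```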
135 Comparison and Characterization of Dyneema™ HB-210 and HB-212 for Accelerated UV Aging
Authors: Jonmichael A. Weaver, David A. Miller
Abstract:
Ultra High Molecular Weight Polyethylene (UHMWPE) presents several distinct advantages as a material, including a high strength-to-weight ratio, durability, and neutron stability. Understanding the change in the mechanical performance of UHMWPE due to environmental exposure is key to safety for future applications. Dyneema® HB-210, a 15 µm diameter UHMWPE multi-filament fiber laid up in a polyurethane matrix in a [0/90]2 orientation with a thickness of 0.17 mm, is compared to HB-212, the same fiber and orientation system with a rubber-based matrix, under UV aging conditions. UV aging tests according to ASTM G154 were performed on both HB-210 and HB-212 to interrogate the change in mechanical properties, as measured through dynamic mechanical analysis and imaged using a scanning electron microscope. The results showed a decrease in both the storage modulus and the loss modulus of the aged material compared to the unaged, even though tan δ slightly increased. Material degradation occurred at a higher rate in Dyneema® HB-212 than in HB-210. The HB-210 was characterized for the effects of 100 hours of UV aging via dynamic mechanical analysis. Scanning electron microscope images were taken of the HB-210 and HB-212 to identify the primary damage mechanisms in the matrix. Embrittlement and matrix spall were the products of prolonged UV exposure and erosion, resulting in decreased mechanical properties.
Keywords: Composite materials, material characterization, UV aging, UHMWPE.
134 A Two-Step Approach for Tree-structured XPath Query Reduction
Authors: Minsoo Lee, Yun-mi Kim, Yoon-kyung Lee
Abstract:
XML data have a very flexible tree structure, which makes it difficult to support the storing and retrieving of XML data. The node numbering scheme is one of the most popular approaches to storing XML in relational databases. Together with the node numbering storage scheme, structural joins can be used to efficiently process the hierarchical relationships in XML. However, in order to process a tree-structured XPath query containing several hierarchical relationships and conditional sentences on XML data, many structural joins need to be carried out, which results in a high query execution cost. This paper introduces mechanisms to reduce XPath queries including branch nodes into a much more efficient form with fewer structural joins. A two-step approach is proposed: the first step merges duplicate nodes in the tree-structured query, and the second step divides the query into sub-queries, shortens the paths, and then merges the sub-queries back together. The proposed approach can highly contribute to the efficient execution of XML queries. Experimental results show that the proposed scheme can reduce the query execution cost by up to an order of magnitude compared to the original execution cost.
Keywords: XML, XPath, tree-structured query, query reduction.
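The first step, merging duplicate nodes, can be pictured as building a prefix trie over the location paths so shared steps are joined only once; the sketch below is a simplified illustration over '/'-separated paths, not the full branch-node reduction algorithm.

```python
def merge_duplicate_steps(paths):
    """Merge XPath location paths that share a common prefix into a trie so
    duplicated nodes are evaluated (and structurally joined) only once."""
    trie = {}
    for path in paths:
        node = trie
        for step in path.strip("/").split("/"):
            node = node.setdefault(step, {})
    return trie

def count_steps(trie):
    """Number of step evaluations left after merging."""
    return sum(1 + count_steps(child) for child in trie.values())

queries = ["/book/author/name", "/book/author/email", "/book/title"]
merged = merge_duplicate_steps(queries)
print(merged)
print("steps before:", sum(len(q.strip('/').split('/')) for q in queries),
      "after merging:", count_steps(merged))
```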
133 Sedimentation and its Challenges for Operation and Maintenance of Hydraulic Structures using SHARC Software- A Case Study of Eastern Intake in Dez Diversion Dam in Iran
Authors: M.R. Mansoujian, N. Hedayat, M. Mashal, H. Kiamanesh
Abstract:
Analytical investigation of sedimentation processes in river engineering and hydraulic structures is of vital importance, as sedimentation can affect the water supply for cultivated lands in the command area. The reason is that gradual sediment accumulation behind the reservoir can reduce the nominal capacity of these dams. The aim of the present paper is to analytically investigate the sedimentation process along the river course and behind storage reservoirs in general, and at the Eastern Intake of the Dez Diversion weir in particular, using the SHARC software. Results of the model indicated a water level of 115.97 m, whereas the real-time measurement from the river cross-section was 115.98 m, which suggests very close agreement between them. The average size of the transported sediment in the river was measured at 0.25 mm, from which it can be concluded that nearly 100% of the suspended load in the river is moving, which suggests no sediment settling but indicates that almost all of the sediment load enters the intake. It was further shown that the average sediment diameter entering the intake is 0.293 mm, which in turn suggests that about 85% of the suspended sediment in the river enters the intake. Comparison of the results from the SHARC model with those obtained from the SSIIM software shows quite similar outputs, but distinguishes the SHARC model as more appropriate for the analysis of simpler problems than the other model.
Keywords: SHARC, Eastern Intake, Dez Diversion Weir.
132 Satellite Sensing for Evaluation of an Irrigation System in Cotton - Wheat Zone
Authors: Sadia Iqbal, Faheem Iqbal, Furqan Iqbal
Abstract:
Efficient utilization of existing water is a pressing need for Pakistan due to the rising population, the reduction in present storage capacity, and poor canal delivery efficiency of 30 to 40%. A study was conducted to evaluate an irrigation system in the cotton-wheat zone of Pakistan after watercourse lining. The evaluation is made on the basis of cropping pattern and salinity. The study employed an index-based approach using a geographic information system with field data. Satellite images from different years were used to examine the effective area. Several combinations of the ratio of signals received in different spectral bands were used for the development of this index. The near-infrared and thermal IR spectral bands proved to be the most effective, as this combination enabled easy detection of salt-affected areas and of the cropping pattern of the study area. Results showed that 9.97% of the area was under salinity in 1992, 9.17% in 2000, and only 2.29% in 2005. Similarly, 45% of the area was under vegetation in 1992, improving to 56% in 2000 and 65% in 2005. On the basis of these results, a 30% increase in performance is observed after the watercourse improvement.
Keywords: Salinity, remote sensing index, salinity index, cropping pattern.
131 Numerical Study of Cyclic Behavior of Shallow Foundations on Sand Reinforced with Geogrid and Grid-Anchor
Authors: Alireza Hajiani Boushehrian, Nader Hataf, Arsalan Ghahramani
Abstract:
When the foundations of structures are subjected to cyclic loading with amplitudes less than their permissible load, concern often exists about the amount of uniform and non-uniform settlement of such structures. Storage tank foundations undergoing numerous filling and discharging cycles, and railway ballast courses under repeated transportation loads, are examples of such conditions. This paper deals with the effects of using a new generation of reinforcement, the Grid-Anchor, for the purpose of reducing the permanent settlement of these foundations under different proportions of the ultimate load. Other items, such as the type and number of reinforcements as well as the number of loading cycles, are studied numerically. Numerical models were built using the Plaxis3D Tunnel finite element code. The results show that by using the Grid-Anchor and increasing the number of reinforcement layers in proportion to the applied cyclic load, the permanent settlement decreases by up to 42% relative to the unreinforced condition, depending on the number of reinforcement layers, the percentage of applied load, and the number of loading cycles, and the number of cycles required to reach a constant value of dimensionless settlement decreases by up to 20% relative to the unreinforced condition.
Keywords: Shallow foundation, Reinforced soil, Cyclic loading, Grid-Anchor, Numerical analysis.
130 Kinematic Analysis and Software Development of a Seven Degree of Freedom Inspection Robot
Authors: G. Shanmugasundar, R. Sivaramakrishnan, S. Venugopal
Abstract:
Robots are booming as an essential substitute in the field of inspection. In hazardous environments such as nuclear waste disposal, robots are a real necessity. With a view to meeting such demands, this paper presents a seven degree of freedom articulated inspection robot. To design such a robot, the kinematic analysis of a seven degree of freedom robot that can inspect hazardous nuclear waste storage tanks is carried out. The effective utilization of universal joints for the arms and screw jack mechanisms at the base gives the newly designed robot its higher degree of freedom. The analytical method of deriving the manipulator forward as well as inverse kinematics is explained elaborately using the Denavit-Hartenberg approach for the purpose of calculating the robot joint, link, and end-effector parameters. The geometric and analytical approaches are compared. The self-developed kinematic model gives accurate positions of the end effector. A Graphical User Interface (GUI) is developed in the Visual Basic language for easy manipulation of the kinematic results. This software gives the expected position of the end-effector accurately and in a short time compared to manual manipulation.
Keywords: Robot kinematics, screw jack mechanisms, Denavit-Hartenberg approach, universal joints.
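The forward-kinematics part of the Denavit-Hartenberg approach can be sketched by chaining the standard link transforms; the seven-row DH table below is purely hypothetical and is not the geometry of the robot described in the paper.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (theta, alpha in radians; d, a in metres)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """Chain the link transforms to obtain the end-effector pose."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 7-DOF arm: (d, a, alpha) per link.
dh_table = [
    (0.30, 0.00,  np.pi / 2),
    (0.00, 0.25,  0.0),
    (0.00, 0.05,  np.pi / 2),
    (0.35, 0.00, -np.pi / 2),
    (0.00, 0.00,  np.pi / 2),
    (0.30, 0.00, -np.pi / 2),
    (0.10, 0.00,  0.0),
]
q = np.radians([10, 45, -20, 60, 15, -30, 5])     # example joint angles
pose = forward_kinematics(dh_table, q)
print("end-effector position (m):", pose[:3, 3].round(3))
```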
129 A Study on Flammability of Bio Oil Combustible Vapour Mixtures
Authors: Mohanad El-Harbawi, Nurul Amirah Hanim Bt. Umar, Norizan Ali, Yoshimitsu Uemura
Abstract:
The study of fire and explosion is very important, mainly in the oil and gas industries, due to the several accidents which have been reported in the past and present. In this work, we have investigated the flammability of bio oil vapour mixtures; such mixtures may contribute to fires during storage and transportation. A bio oil sample derived from palm kernel shell was analysed using Gas Chromatography Mass Spectrometry (GC-MS) to examine the composition of the sample. Mole fractions of 12 selected components in the liquid phase were obtained from the GC-FID data and used to calculate mole fractions of components in the gas phase via modified Raoult's law. Lower Flammability Limits (LFLs) and Upper Flammability Limits (UFLs) for individual components were obtained from the published literature. However, the stoichiometric concentration method was used to calculate the flammability limits of the components whose flammability limit values are not available in the literature. The LFL and UFL values for the mixture were calculated using the Le Chatelier equation. The LFLmix and UFLmix values were used to construct a flammability diagram and subsequently to determine the flammability of the mixture. The findings of this study can be used to propose suitable, inherently safer methods to prevent flammable mixtures from occurring and to minimize the loss of property, business, and life due to fire accidents in bio oil production.
Keywords: Gas chromatography, compositions, lower and upper flammability limits (LFL & UFL), flammability diagram.
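The Le Chatelier mixing rule used for LFLmix and UFLmix is easy to sketch; the composition and individual limits below are illustrative placeholders for three generic components, not the paper's GC-MS derived values.

```python
def le_chatelier(mole_fractions, limits):
    """Le Chatelier mixing rule: L_mix = 1 / sum(y_i / L_i), with y_i the mole fractions
    of the flammable components (normalized to the flammable basis) and L_i the
    individual LFL or UFL values in vol.%."""
    total = sum(mole_fractions)
    return 1.0 / sum((y / total) / L for y, L in zip(mole_fractions, limits))

# Purely illustrative composition and limits for three components A, B and C.
y   = [0.50, 0.30, 0.20]          # gas-phase mole fractions (flammable basis)
lfl = [4.0, 6.0, 2.6]             # lower flammability limits, vol.%
ufl = [20.0, 36.0, 13.0]          # upper flammability limits, vol.%

print(f"LFL_mix = {le_chatelier(y, lfl):.2f} vol.%")
print(f"UFL_mix = {le_chatelier(y, ufl):.2f} vol.%")
```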
128 Improvement of Soft Clay Using Floating Cement Dust-Lime Columns
Authors: Adel Belal, Sameh Aboelsoud, Mohy Elmashad, Mohammed Abdelmonem
Abstract:
The two main criteria that control the design and performance of footings are the bearing capacity and the settlement of the soil. The construction of buildings, storage tanks, warehouses, etc. on weak, soft soils usually involves excessive settlement problems. To solve bearing capacity or settlement problems, soil improvement may be considered using different techniques, including encased cement dust–lime columns. The proposed research studies the effect of adding floating encased cement dust and lime mix columns to soft clay on the clay bearing capacity. Four experimental tests were carried out. Column diameters of 3.0 cm, 4.0 cm, and 5.0 cm and a column length of 60% of the clay layer thickness were used. A numerical model was constructed and verified using a commercial finite element package (PLAXIS 2D, V8.5). The verified model was used to study the effect of distributing columns around the footing at different distances. The study showed that the floating cement dust lime columns enhanced the clay bearing capacity by 262%. The numerical model showed that the columns around the footing have a limited effect on the clay improvement.
Keywords: Bearing capacity, cement dust – lime columns, ground improvement, soft clay.
127 A Modified Run Length Coding Technique for Test Data Compression Based on Multi-Level Selective Huffman Coding
Authors: C. Kalamani, K. Paramasivam
Abstract:
Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers from three different aspects: test data compression, Built-In Self-Test (BIST), and test set compaction. The latter two methods are capable of enhancing fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that they can reduce test storage and test power, but when the test data contain redundant run lengths, no additional compression step is applied. This paper presents a modified Run Length Coding (RLC) technique with a Multilevel Selective Huffman Coding (MLSHC) technique to reduce test data volume, test pattern delivery time, and power dissipation in scan test applications; where a redundant run length is encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves the test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS'89 benchmarks show that our method outperforms most known techniques.
Keywords: Modified run length coding, multilevel selective Huffman coding, built-in self-test, modified selective Huffman coding, automatic test equipment.
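The run-length idea behind the scheme can be sketched with a generic coder that extracts runs of 0s terminated by a 1 and counts their frequencies, which is what a selective Huffman stage would use to assign short codewords to the most common run lengths; this is not the authors' modified RLC/MLSHC algorithm.

```python
from collections import Counter

def runs_of_zeros(bits):
    """Split a test vector into runs of 0s terminated by a 1 (a common convention in
    run-length-based test data compression; a generic sketch, not the paper's scheme)."""
    runs, count = [], 0
    for b in bits:
        if b == "0":
            count += 1
        else:
            runs.append(count)
            count = 0
    runs.append(count)      # trailing run
    return runs

test_vector = "0001000001010001100001"
runs = runs_of_zeros(test_vector)
print("run lengths:", runs)

# A selective Huffman coder gives the most frequent run lengths the shortest codewords
# and leaves rare ones unencoded; this frequency table drives that choice.
print("run-length frequencies:", Counter(runs).most_common())
```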
126 Application of Medium High Hydrostatic Pressure in Preserving Textural Quality and Safety of Pineapple Compote
Authors: Nazim Uddin, Yohiko Nakaura, Kazutaka Yamamoto
Abstract:
Compote (fruit in syrup) of pineapple (Ananas comosus L. Merrill) is expected to have high market potential as one of the convenient ready-to-eat (RTE) foods worldwide. High hydrostatic pressure (HHP) in combination with low temperature (LT) was applied to the processing of pineapple compote, as well as medium HHP (MHHP) in combination with medium-high temperature (MHT), since both processes can enhance liquid impregnation and inactivate microbes. The MHHP+MHT (55 or 65 °C) process, as well as the HHP+LT process, successfully inactivated the microbes in the compote to a non-detectable level. Although the compotes processed by MHHP+MHT or HHP+LT lost their fresh texture in a similar manner to those processed solely by heat, it was indicated that the texture degradation caused by heat was suppressed under MHHP. The degassing process reduced the hardness, while calcium (Ca) contributed to retaining hardness in the MHT and MHHP+MHT processes. Electrical impedance measurements supported the damage due to degassing and heat. The color, Brix, and appearance were not affected significantly by the processing methods. The MHHP+MHT and HHP+LT processes may be applicable to produce high-quality, safe RTE pineapple compotes. Further studies on the optimization of packaging and storage conditions will be indispensable for commercialization.
Keywords: Compote of pineapple, ready-to-eat, medium high hydrostatic pressure, postharvest loss, and texture.
125 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity, and that come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing across the entire government to deliver (big) data related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services such as data storage and hosting to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We showcase a graphical view of the actors, their roles, and their relationships in the government (big) data ecosystem. We also discuss our research findings. We did not find many published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we drew ideas for the government (big) data ecosystem from numerous areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.
Keywords: Big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review.
124 Limestone Briquette Production and Characterization
Authors: André C. Silva, Mariana R. Barros, Elenice M. S. Silva, Douglas Y. Marinho, Diego F. Lopes, Débora N. Sousa, Raphael S. Tomáz
Abstract:
Modern agriculture requires productivity, efficiency, and quality. Therefore, there is a need for agricultural limestone that provides adequate amounts of calcium and magnesium carbonates in order to correct soil acidity. During limestone processing, fine particles (with average size under 400#) are generated. These particles have no economic value in the agricultural and metallurgical sectors due to their size. When limestone is used for agricultural purposes, these fine particles can easily be transported by wind, generating air pollution. Therefore, briquetting, a mineral processing technique, was used to mitigate this problem, resulting in an agglomerated product suitable for agricultural use. Briquetting uses compressive pressure to agglomerate fine particles. It can be aided by agglutination agents, allowing adjustments in the shape, size, and mechanical parameters of the mass. Briquettes can generate extra profits for the mineral industry, presenting a distinct product for agriculture, and can reduce the environmental liabilities of fine particle storage or disposal. The produced limestone briquettes were subjected to shatter and water-resistance tests. The results show that after six minutes completely submerged in water, the briquettes were fully disintegrated, a highly favorable result considering their use for soil acidity correction.
Keywords: Agglomeration, briquetting, limestone, agriculture.
123 Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 3: Volume Reduction and Stabilization of Solid Waste
Authors: Masaumi Nakahara, Sou Watanabe, Hiromichi Ogi, Atsuhiro Shibata, Kazunori Nomura
Abstract:
In the Japan Atomic Energy Agency, three types of experimental research, on advanced reactor fuel reprocessing, radioactive waste disposal, and nuclear fuel cycle technology, have been carried out at the Chemical Processing Facility. The facility has generated high level radioactive liquid and solid wastes in its hot cells. The high level radioactive solid waste is divided into three main categories: flammable waste, non-flammable waste, and solid reagent waste. Plastic products are categorized as flammable waste and are melted with a heating mantle. The non-flammable waste is cut with a band saw machine to reduce its volume. Among the solid reagent waste, the adsorbent used in the experiments is heated, and the extractant is decomposed for stabilization. All high level radioactive solid wastes in the hot cells are packed in high level radioactive solid waste cans. The cans are transported to the 2nd High Active Solid Waste Storage in the Tokai Reprocessing Plant of the Japan Atomic Energy Agency.
Keywords: High level radioactive solid waste, advanced reactor fuel reprocessing, radioactive waste disposal, nuclear fuel cycle technology.
122 Optimizing Forecasting for Indonesia's Coal and Palm Oil Exports: A Comparative Analysis of ARIMA, ANN, and LSTM Methods
Authors: Mochammad Dewo, Sumarsono Sudarto
Abstract:
The Exponential Triple Smoothing approach currently used to forecast the export value of Indonesia's two major commodities, coal and palm oil, has a Mean Absolute Percentage Error (MAPE) of 30-50%, which may be considered a "reasonable" forecasting error. Forecasting errors of more than 30% have a domino effect on industrial output, as extra production adds to raw material, manufacturing, and storage expenses. In contrast, reaching an "excellent" classification, with an error value of less than 10%, will give new investors and exporters confidence in the commercial development of the related sectors. Industrial growth has a positive impact on economic development, and the approach can be applied to other commodities if the forecast error is less than 10%. The purpose of this project is to create a forecasting technique that can produce precise forecasting results with an error of less than 10%. This research analyzes forecasting methods such as ARIMA (Autoregressive Integrated Moving Average), ANN (Artificial Neural Network), and LSTM (Long Short-Term Memory). With a MAPE of 1%, this study reveals that ANN is the most successful strategy for forecasting the coal and palm oil commodities of Indonesia.
Keywords: ANN, Artificial Neural Network, ARIMA, Autoregressive Integrated Moving Average, export value, forecast, LSTM, Long Short Term Memory.
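The MAPE metric and the accuracy bands quoted above are simple to compute; the sketch below uses toy export figures, and only the "excellent" and "reasonable" thresholds come from the abstract, so the intermediate label is an assumption.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def accuracy_class(error):
    """Classification bands: below 10% is 'excellent' and 30-50% is 'reasonable' as in
    the abstract; the 10-30% label is assumed."""
    if error < 10:
        return "excellent"
    if error < 30:
        return "good"
    if error <= 50:
        return "reasonable"
    return "inaccurate"

# Toy monthly export values (USD millions) and two hypothetical forecasts.
actual   = [210, 225, 240, 250, 265, 280]
ann_like = [208, 227, 238, 252, 263, 282]    # small errors, ANN-like behaviour
ets_like = [140, 310, 170, 340, 180, 370]    # large errors, illustrative only

for name, forecast in [("ANN-like", ann_like), ("ETS-like", ets_like)]:
    err = mape(actual, forecast)
    print(f"{name}: MAPE = {err:.1f}% ({accuracy_class(err)})")
```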
121 Experimental Characterization of the Thermal Behavior of a Sawdust Mortar
Authors: F. Taouche-Kheloui, O. Fedaoui-Akmoussi, K. Ait tahar, Li. Alex
Abstract:
Currently, the reduction of energy consumption through the use of abundant and recyclable natural materials for better thermal insulation represents an important area of research. To this end, the use of bio-sourced materials has been identified as one of the green sectors with very high economic development potential for the future. Because of its role in reducing the consumption of fossil-based raw materials, it contributes significantly to the storage of atmospheric carbon, limits greenhouse gas emissions, and creates new economic opportunities. This study is a contribution to the development and experimental characterization of the thermal behavior of a sawdust mortar matrix. We have taken into account the influence of the size of the sawdust grains and fibers, hence the use of three different size ranges and different percentages in the different mixes. The intended practical application consists of producing a lightweight compound at a lower cost to ensure better thermal and acoustic behavior compared to that existing in the field, in addition to the desired strengths. Improving energy performance while reducing greenhouse gas emissions from the building sector is among the objectives to be achieved. The results are very encouraging and highlight the value of the proposed design of organic-sourced mortar panels, which have acceptable mechanical properties for their use, low densities, lower manufacturing and labor costs, and above all a positive impact on the environment.
Keywords: Mortar, sawdust waste, thermal, experimental, analysis.