Search results for: 2d and 3d data conversion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26322


24852 Techno-Economic Analysis of Offshore Hybrid Energy Systems with Hydrogen Production

Authors: Anna Crivellari, Valerio Cozzani

Abstract:

Even though most of the electricity produced worldwide still comes from fossil fuels, new policies are being implemented to promote a more sustainable use of energy sources. Offshore renewable resources have become increasingly attractive thanks to the huge amount of power they can potentially provide. However, the intermittent nature of renewables often limits the capacity of the systems and creates mismatches between supply and demand. Hydrogen is foreseen to be a promising vector to store and transport large amounts of excess renewable power by using existing oil and gas infrastructure. In this work, an offshore hybrid energy system integrating wind energy conversion with hydrogen production was conceptually defined and applied to offshore gas platforms. A techno-economic analysis was performed by considering two different locations for the installation of the innovative power system, i.e., the North Sea and the Adriatic Sea. The water depth, the distance of the platform from the onshore gas grid, the hydrogen selling price and the green financial incentive were some of the main factors taken into account in the comparison. The results indicated that the use of well-defined indicators makes it possible to capture the specific cost and revenue features of the analyzed systems, as well as to evaluate their competitiveness in the current and future energy market.

Keywords: cost analysis, energy efficiency assessment, hydrogen production, offshore wind energy

Procedia PDF Downloads 129
24851 Sardine Oil as a Source of Lipid in the Diet of Giant Freshwater Prawn (Macrobrachium rosenbergii)

Authors: A. T. Ramachandra Naik, H. Shivananda Murthy, H. N. Anjanayappa

Abstract:

The freshwater prawn, Macrobrachium rosenbergii, is a popular crustacean widely cultured in monoculture systems in India. It has high nutritional value in the human diet; hence, understanding its enzymatic and body composition is important in order to judge its flesh quality. Fish oil derived from the Indian oil sardine is a good source of highly unsaturated fatty acids and a lipid source in fish/prawn diets. A 35% crude protein diet was formulated with sardine oil as the fat source at four graded levels, viz. 2.07, 4.07, 6.07 and 8.07%, maintaining total lipid levels of 8.11, 10.24, 12.28 and 14.33%, respectively. A diet without sardine oil (6.05% total lipid) served as the basal treatment. The giant freshwater prawn, Macrobrachium rosenbergii, was used as the test animal, and the experiment lasted 112 days. Significantly higher weight gain was recorded in the treatment with 6.07% sardine oil incorporation, followed by higher specific growth rate, feed conversion ratio and protein efficiency ratio. The 8.07% sardine oil diet produced the highest RNA:DNA ratio in the prawn muscle. Digestive enzyme analyses of the digestive tract and mid-gut gland showed the greatest activity in prawns fed the 8.07% diet.

Keywords: digestive enzyme, fish diet, Macrobrachium rosenbergii, sardine oil

Procedia PDF Downloads 334
24850 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning

Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul

Abstract:

In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries containing prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input rather than extracted features; the set of parameters that yields the most descriptive dictionary is then studied. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first, the QT Database signals are baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second, the data are further filtered using a fast moving-average smoother. The experimental results on the QT Database confirm that our proposed algorithm achieves a classification accuracy of 92%.
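The abstract does not give implementation details, but the data-driven dictionary idea (learn atoms directly from raw ECG segments, encode new beats sparsely, classify from the sparse codes) can be sketched with scikit-learn. The segment length, dictionary size, sparsity level and classifier below are illustrative assumptions, and random arrays stand in for segmented ECG beats.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for segmented ECG beats (rows = beats, columns = samples).
X_train = rng.standard_normal((200, 128))
y_train = rng.integers(0, 2, 200)          # 0 = normal, 1 = abnormal (illustrative)
X_test = rng.standard_normal((50, 128))

# Learn a complete dictionary directly from the raw segments (no hand-crafted features).
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
D = dico.fit(X_train).components_

# Sparse-code each beat against the learned atoms.
codes_train = sparse_encode(X_train, D, algorithm="omp", n_nonzero_coefs=10)
codes_test = sparse_encode(X_test, D, algorithm="omp", n_nonzero_coefs=10)

# Classify beats from their sparse representations.
clf = LogisticRegression(max_iter=1000).fit(codes_train, y_train)
print("predicted labels:", clf.predict(codes_test)[:10])
```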

Keywords: electrocardiogram, dictionary learning, sparse coding, classification

Procedia PDF Downloads 389
24849 Electrochemical Reduction of Carbon Dioxide Using Metal Nanoparticles Supported on Nanomaterials

Authors: Mulatu Kassie Birhanu

Abstract:

Electrochemical reduction of CO₂ is an emerging approach to converting it into valuable products while lowering its atmospheric level and helping keep it within permissible limits. Among the many electrocatalysts, gold and copper are efficient and effective; both were synthesized and applied in this research work. The two metal catalysts were prepared in an inert environment with different compositions through a co-reduction process from their corresponding precursors, with multi-walled carbon nanotubes added as a support to enhance conductivity. The catalytic performance of each composition for CO₂ reduction was evaluated and showed outstanding activity, with a high current density (70 mA/cm² at 0.91 V vs. RHE) and a relatively small onset potential. The catalytic performance, compositions, morphologies, structure and geometric arrangements were evaluated by electrochemical analysis (LSV, impedance, chronoamperometry and Tafel plots), EDS, SEM and XAS, respectively. The composite metals showed better product selectivity and faradaic efficiencies owing to the synergistic effects of the combined nanoparticles, in addition to the impact of grain size on CO₂ reduction. Carbon monoxide, hydrogen, formate and ethanol are the reduction products, detected and quantified by chromatographic techniques appropriate to the physical state of each product.

Keywords: carbon dioxide, faradaic efficiency, electrocatalyst, current density

Procedia PDF Downloads 60
24848 Heterodimetallic Ferrocenyl Dithiophosphonate Complexes of Nickel(II), Zinc(II) and Cadmium(II) as High Efficiency Co-Sensitizers in Dye-Sensitized Solar Cells

Authors: Tomilola J. Ajayi, Moses Ollengo, Lukas le Roux, Michael N. Pillay, Richard J. Staples, Shannon M. Biros, Werner E. van Zyl

Abstract:

The formation, characterization, and dye-sensitized solar cell application of nickel(II), zinc(II) and cadmium(II) ferrocenyl dithiophosphonate complexes were investigated. The multidentate monoanionic ligand [S₂PFc(OH)]¯ (L1) was synthesized from the reaction between ferrocenyl Lawesson’s reagent, [FcP(=S)μ-S]₂ (FcLR) (Fc = ferrocenyl), and water. Ligand L1 could potentially coordinate to metal centers through the S, S’ and O donor atoms. The reaction between metal salt precursors and L1 produced a Ni(II) complex of the type [Ni{S₂P(Fc)(OH)}₂] (1) (molar ratio 1:2), a tetranickel(II) complex of the type [Ni₂{S₂OP(Fc)}₂]₂ (2) (molar ratio 1:1), as well as a Zn(II) complex [Zn{S₂P(Fc)(OH)}₂]₂ (3), and a Cd(II) complex [Cd{S₂P(Fc)(OH)}₂]₂ (4). Complexes 1-4 were characterized by 1H and 31P NMR and FT-IR, and complexes 1 and 2 were additionally analysed by X-ray crystallography. After co-sensitization, the DSSCs were characterized using UV-Vis, cyclic voltammetry, electrochemical impedance spectroscopy, and photovoltaic measurements (I-V curves). Overall, the findings show that co-sensitization of our compounds with the ruthenium dye N719 resulted in a better overall solar conversion efficiency than pure N719 dye alone under the same experimental conditions. In conclusion, we report the first examples of dye-sensitized solar cells (DSSCs) co-sensitized with ferrocenyl dithiophosphonate complexes.

Keywords: dithiophosphonate, dye sensitized solar cell, co-sensitization, solar efficiency

Procedia PDF Downloads 154
24847 A Deletion-Cost Based Fast Compression Algorithm for Linear Vector Data

Authors: Qiuxiao Chen, Yan Hou, Ning Wu

Abstract:

As the classic Douglas-Peucker Algorithm (DPA) has deficiencies such as a high risk of mistakenly deleting key nodes, high complexity, time consumption and relatively slow execution speed, a new Deletion-Cost Based Compression Algorithm (DCA) for linear vector data was proposed. For each curve, the basic element of linear vector data, the deletion costs of all its middle nodes were calculated, and the minimum deletion cost was compared with the pre-defined threshold. If the former was greater than or equal to the latter, all remaining nodes were retained and the curve's compression was finished. Otherwise, the node with the minimal deletion cost was deleted, the deletion costs of its two neighbours were updated, and the same loop was repeated on the compressed curve until termination. Through several comparative experiments using different types of linear vector data, DPA and DCA were compared in terms of compression quality and computing efficiency. Experimental results showed that DCA outperformed DPA in both compression accuracy and execution efficiency.
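The deletion-cost loop described above can be sketched directly. The paper does not specify how DCA defines the deletion cost, so the cost function below (the area of the triangle a node forms with its two neighbours, as in Visvalingam-style simplification) is an illustrative assumption; the sketch also recomputes all costs each pass for brevity, whereas the described algorithm only updates the two neighbours of the deleted node.

```python
def triangle_area(a, b, c):
    # Illustrative deletion cost: area spanned by a node and its two neighbours.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def dca_compress(points, threshold):
    """Deletion-cost based compression of one curve (list of (x, y) nodes)."""
    pts = list(points)
    while len(pts) > 2:
        # Deletion cost of every middle node on the current (already compressed) curve.
        costs = [triangle_area(pts[i - 1], pts[i], pts[i + 1]) for i in range(1, len(pts) - 1)]
        min_cost = min(costs)
        if min_cost >= threshold:
            break                                   # all remaining nodes are kept
        del pts[costs.index(min_cost) + 1]          # drop the cheapest node and repeat
    return pts

curve = [(0, 0), (1, 0.05), (2, 0.1), (3, 2.0), (4, 0.1), (5, 0)]
print(dca_compress(curve, threshold=0.5))
```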

Keywords: Douglas-Peucker algorithm, linear vector data, compression, deletion cost

Procedia PDF Downloads 254
24846 Multimedia Container for Autonomous Car

Authors: Janusz Bobulski, Mariusz Kubanek

Abstract:

The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving and restoring this type of file. It will also be necessary to develop a method of synchronizing the data from the lidar, RGB and infrared cameras. Autonomous cars are increasingly entering our consciousness, and no one seems to doubt that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years. Many experts believe that creating a network of communicating autonomous cars will make it possible to eliminate accidents completely. However, to make this possible, it is necessary to develop effective methods of detecting objects around the moving vehicle. In bad weather conditions, this task is difficult using the RGB (red, green, blue) image alone; in such situations, the system should be supported by information from other sources, such as lidar or infrared cameras. The problem is the different data formats that individual types of devices return, as well as the synchronization and formatting of these data. The goal of the project is therefore to develop a file structure that can contain different types of data; such a file is called a multimedia container. A multimedia container holds many data streams, which allows complete multimedia material to be stored in one file. The data streams in such a container include images, video, sound and subtitles, as well as additional information, i.e., metadata. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. As shown by preliminary studies, combining RGB and infrared images with lidar data allows for easier data analysis. Thanks to this approach, it will be possible to display the distance to an object in a colour photo, and such information can be very useful for drivers and for the systems in autonomous cars.
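A minimal sketch of such a container structure is shown below: one time-synchronized frame holding calibrated RGB, infrared and lidar data plus metadata, serialized to a single file. The field names and the use of NumPy's .npz format are illustrative assumptions, not the format developed in the project.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MultimediaFrame:
    """One time-synchronized record of the three calibrated sensor streams."""
    timestamp: float                    # common clock used for synchronization
    rgb: np.ndarray                     # H x W x 3 colour image
    infrared: np.ndarray                # H x W thermal image
    lidar: np.ndarray                   # N x 3 point cloud (x, y, z)
    metadata: dict = field(default_factory=dict)   # calibration, pose, etc.

def save_container(path, frames):
    # Store every stream of every frame in one file (illustrative .npz container).
    arrays = {}
    for i, f in enumerate(frames):
        arrays[f"{i}_rgb"] = f.rgb
        arrays[f"{i}_ir"] = f.infrared
        arrays[f"{i}_lidar"] = f.lidar
        arrays[f"{i}_ts"] = np.array(f.timestamp)
    np.savez_compressed(path, **arrays)

frame = MultimediaFrame(0.0,
                        np.zeros((480, 640, 3), np.uint8),
                        np.zeros((480, 640), np.uint16),
                        np.zeros((1000, 3), np.float32))
save_container("sample_container.npz", [frame])
```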

Keywords: an autonomous car, image processing, lidar, obstacle detection

Procedia PDF Downloads 229
24845 Mobile Crowdsensing Scheme by Predicting Vehicle Mobility Using Deep Learning Algorithm

Authors: Monojit Manna, Arpan Adhikary

Abstract:

Mobile crowdsensing is an emerging paradigm in which users are recruited to perform sensing tasks. In today's urban cities, mobile vehicles are well suited to data sensing and collection because of their ubiquity and mobility. In this work, we focus on selecting the mobile nodes that can collect the maximum amount of data from urban areas and fulfill the required data in a future period within a couple of minutes. We formulate vehicle recruitment as a data-maximization problem under a budget constraint. The implementation models a realistic online platform in which vehicles move continuously in real time, and the data center has the authority to select a set of vehicles immediately. A deep learning-based scheme with the help of mobile vehicles (DLMV) is proposed to collect sensing data from the urban environment. To account for future behaviour, this work proposes a deep learning-based offline algorithm to predict mobility. Since vehicle selection with a limited budget is an NP-complete problem, we then propose a greedy online algorithm that selects a subset of vehicles. Extensive experimental evaluations are conducted on a real mobility dataset from Rome. The experimental results not only confirm the efficiency of our proposed solution but also prove the validity of DLMV and show an improvement in the quantity of sensing data collected compared with other algorithms.
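The greedy step described above, repeatedly recruiting the vehicle that adds the most predicted coverage per unit cost until the budget is spent, can be sketched as follows. The cell-coverage representation and costs are illustrative assumptions; in the paper the predicted cells would come from the deep learning mobility predictor.

```python
def greedy_recruit(predicted_cells, costs, budget):
    """Pick vehicles maximizing newly covered cells per unit cost within a budget.

    predicted_cells: {vehicle_id: set of grid cells the model predicts it will visit}
    costs:           {vehicle_id: recruitment cost}
    """
    selected, covered, spent = [], set(), 0.0
    remaining = set(predicted_cells)
    while remaining:
        # Marginal gain per unit cost for every affordable, unselected vehicle.
        best, best_ratio = None, 0.0
        for v in remaining:
            if spent + costs[v] > budget:
                continue
            gain = len(predicted_cells[v] - covered)
            ratio = gain / costs[v]
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            break
        selected.append(best)
        covered |= predicted_cells[best]
        spent += costs[best]
        remaining.remove(best)
    return selected, covered, spent

cells = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {5, 6, 7, 8}}
cost = {"v1": 2.0, "v2": 1.0, "v3": 3.0}
print(greedy_recruit(cells, cost, budget=4.0))
```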

Keywords: mobile crowdsensing, deep learning, vehicle recruitment, sensing coverage, data collection

Procedia PDF Downloads 81
24844 Achieving the Elevated Nitritation for Autotrophic/Heterotrophic Denitritation in CSTR by Treating Livestock Wastewater

Authors: Hammad Khan, Wookeun Bae

Abstract:

The objective of this study was to achieve, optimize and control highly loaded and efficient nitrite production suitable for autotrophic and heterotrophic denitritation. A lab-scale CSTR for partial and full nitritation was operated to treat livestock manure digester liquor having an ammonium concentration of ~2000 mg NH₄⁺-N/L and a biodegradable content of ~0.8 g-COD/L. The experiments were performed at 30°C, pH 8.0, DO 1.5 mg/L and SRT ranging from 7 to 20 days. After 125 days of operation, >95% nitrite buildup was observed at an ammonium loading rate of ~3.2 kg NH₄⁺-N/m³-day, with almost complete ammonium conversion. On increasing the loading rate further (i.e., from 3.2 to 6.2 kg NH₄⁺-N/m³-day), the stability of the system remained unaffected. By decreasing the pH from 8 to 7.5 and further to 7.2, the removal rate could be easily controlled at 95%, 75% and even 50%. The results demonstrated that nitritation stability and the desired removal rates are controlled by a balance of simultaneous inhibition by FA and FNA, the pH effect and DO limitation. These parameters even proved effective for producing a suitable influent for anammox. In addition, a mathematical model, identified through the occurring biological reactions, is proposed to optimize the full and partial nitritation process. The proposed model presents the relationship between pH, ammonium and produced nitrite for full and partial nitritation under varying concentrations of DO and simultaneous inhibition by FA and FNA.

Keywords: stable nitritation, high loading, autotrophic denitritation, heterotrophic denitritation

Procedia PDF Downloads 278
24843 Nano-Particle of π-Conjugated Polymer for Near-Infrared Bio-Imaging

Authors: Hiroyuki Aoki

Abstract:

Molecular imaging, which visualizes biological molecules, cells, tissues and so on, has attracted much attention recently. Among the various in vivo imaging techniques, fluorescence imaging has been widely employed as a useful modality for small animals in pre-clinical research. However, higher signal intensity is needed for highly sensitive in vivo imaging. The objective of the current study is the development of a fluorescent imaging agent with high brightness for tumor imaging in a mouse. The strategy for enhancing the fluorescence signal of a bio-imaging agent is to increase both the absorption of the excitation light and the fluorescence conversion efficiency. We developed a nano-particle fluorescence imaging agent consisting of a π-conjugated polymer emitting a fluorescence signal in the near-infrared region. A large absorption coefficient and high emission intensity in the near-infrared optical window for biological tissue enabled highly sensitive in vivo imaging with tumor-targeting ability via the EPR (enhanced permeation and retention) effect. The signal intensity from the π-conjugated fluorescence imaging agent is two orders of magnitude larger than that of a quantum dot, which has been known as the brightest imaging agent. The π-conjugated polymer nano-particle would be a promising candidate for the in vivo imaging of small animals.

Keywords: fluorescence, conjugated polymer, in vivo imaging, nano-particle, near-infrared

Procedia PDF Downloads 484
24842 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations

Authors: Ramon Santana

Abstract:

The use of biometric identifiers in the field of information security, access control to resources, and authentication in ATMs and banking, among others, is of great concern because of the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system; six of them allow the minutiae template to be obtained in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed, but vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase the cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first level is the data transformation level, which generates data invariant to rotation and translation; this transformation is irreversible. The second level is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of previously proposed models, basing its security on the impossibility of polynomial reconstruction.
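The paper does not give the transformation or the evaluation function, but the two-level idea can be illustrated with a toy sketch: derive rotation- and translation-invariant values from minutiae pairs (distances and angle differences), then evaluate them with a key-derived polynomial so that the raw minutiae cannot be recovered without reconstructing the polynomial. Everything below (the invariants chosen, the modulus, the key handling) is a hypothetical illustration, not the proposed model.

```python
import hashlib
import math

def pairwise_invariants(minutiae):
    # Level 1: distances and orientation differences between minutiae pairs are
    # invariant to rotation and translation of the fingerprint.
    inv = []
    for i in range(len(minutiae)):
        for j in range(i + 1, len(minutiae)):
            (x1, y1, t1), (x2, y2, t2) = minutiae[i], minutiae[j]
            inv.append((round(math.hypot(x2 - x1, y2 - y1), 1),
                        round((t2 - t1) % 360, 1)))
    return inv

def evaluate_with_key(invariants, key, prime=2_147_483_647):
    # Level 2: derive polynomial coefficients from the key and evaluate each
    # invariant pair; only the evaluations are stored as the protected template.
    digest = hashlib.sha256(key.encode()).digest()
    coeffs = [int.from_bytes(digest[k:k + 4], "big") % prime for k in range(0, 16, 4)]
    protected = []
    for d, a in invariants:
        x = int(d * 10) * 3600 + int(a * 10)       # pack the pair into one integer
        protected.append(sum(c * pow(x, p, prime) for p, c in enumerate(coeffs)) % prime)
    return protected

template = [(10, 20, 30.0), (40, 25, 90.0), (15, 60, 180.0)]   # (x, y, angle) minutiae
print(evaluate_with_key(pairwise_invariants(template), key="user-secret")[:5])
```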

Keywords: fingerprint, template protection, bio-cryptography, minutiae protection

Procedia PDF Downloads 173
24841 The Photocatalytic Approach for the Conversion of Polluted Seawater CO₂ into Renewable Source of Energy

Authors: Yasar N. Kavil, Yasser A. Shaban, Radwan K. Al Farawati, Mohamed I. Orif, Shahed U. M. Khanc

Abstract:

Photocatalytic reduction of CO₂ in polluted seawater into the chemical fuel methanol was successfully achieved over Cu/C-co-doped TiO₂ nanoparticles under UV and natural sunlight. A homemade stirred batch annular reactor was used to carry out the photocatalytic reduction experiments. Photocatalysts with various Cu loadings (0, 0.5, 1, 3, 5 and 7 wt.%) were synthesized by the sol-gel procedure and were characterized by XRD, SEM, UV–Vis, FTIR, and XPS. The photocatalytic production of methanol was promoted by co-doping TiO₂ with C and Cu. This improvement was attributed to the modification of the bandgap energy and the hindrance of charge recombination. The methanol yield from polluted seawater depended on its background hydrographic parameters. Two types of polluted seawater were assessed: in system 1, the observed yields were 2910 and 990 µmol g⁻¹ after 5 h of illumination under UV and natural sunlight, respectively, while the corresponding yields in system 2 were 2250 and 910 µmol g⁻¹ after 5 h of illumination. Methanol production in oxygen-depleted water was low, which is mainly attributed to competition from methanogenic bacteria. The results indicated that the methanol yield produced by Cu-C/TiO₂ was much higher than those of carbon-modified titanium oxide (C/TiO₂) and Degussa (P25-TiO₂). Under the current experimental conditions, the optimum loading was achieved by doping with 3 wt% Cu, and the highest methanol yield was obtained over 1 g L⁻¹ of 3 wt% Cu/C-TiO₂.

Keywords: CO₂ photoreduction, copper, Cu/C-co-doped TiO₂, methanol, seawater

Procedia PDF Downloads 280
24840 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also become of great scientific interest in terms of its regulation. Recent studies have experimentally demonstrated that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross-section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter and therefore also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
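A compact sketch of the two-component Ångström split that underlies this kind of apportionment is shown below: the absorption measured at two wavelengths is decomposed into a traffic (fossil-fuel) part and a wood-burning part by assuming each component follows b_abs ∝ λ^(−AAE) with its own exponent. The wavelengths, AAE values and absorption numbers used here are illustrative assumptions, not the values derived in the study.

```python
import numpy as np

def two_component_split(b1, b2, lam1, lam2, aae_ff=1.0, aae_wb=2.0):
    """Apportion absorption at lam1 into fossil-fuel and wood-burning parts.

    b1, b2 : measured absorption coefficients at wavelengths lam1 and lam2 (nm).
    Assumes b_abs(lam) = b_ff(lam1)*(lam/lam1)**-aae_ff + b_wb(lam1)*(lam/lam1)**-aae_wb.
    """
    r = lam2 / lam1
    A = np.array([[1.0, 1.0],
                  [r ** -aae_ff, r ** -aae_wb]])
    b_ff, b_wb = np.linalg.solve(A, np.array([b1, b2]))
    return b_ff, b_wb

# Illustrative numbers: absorption at 470 nm and 950 nm in Mm^-1.
b_ff, b_wb = two_component_split(b1=60.0, b2=20.0, lam1=470.0, lam2=950.0)
print(f"traffic: {b_ff:.1f} Mm^-1, wood burning: {b_wb:.1f} Mm^-1 at 470 nm")
```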

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 231
24839 Improving Digital Data Security Awareness among Teacher Candidates with Digital Storytelling Technique

Authors: Veysel Çelik, Aynur Aker, Ebru Güç

Abstract:

Developments in information and communication technologies have increased both the speed of producing information and the speed of accessing new information. Accordingly, the daily lives of individuals have started to change, and new concepts such as e-mail, e-government, e-school and e-signature have emerged. For this reason, prospective teachers, who will be future teachers or school administrators, are expected to have high awareness of digital data security. The aim of this study is to reveal the effect of the digital storytelling technique on the data security awareness of pre-service teachers in computer and instructional technology education departments. Participants were selected on the principle of volunteering among third-grade students studying at the Computer and Instructional Technologies Department of the Faculty of Education at Siirt University. A pretest/posttest quasi-experimental design, one of the experimental research models, was used. Within this framework, a 6-week lesson plan on digital data security awareness was prepared in accordance with the digital storytelling technique. Students in the experimental group formed groups of 3-6 people among themselves, and the groups were asked to prepare short videos or animations on digital data security awareness. The completed videos were watched and evaluated together with the prospective teachers during the evaluation process, which lasted approximately 2 hours. Both quantitative and qualitative data collection tools were used: a digital data security awareness scale and a semi-structured interview form consisting of open-ended questions developed by the researchers. According to the data obtained, the digital storytelling technique was effective in building data security awareness and creating lasting behavior changes in computer and instructional technology students.

Keywords: digital storytelling, self-regulation, digital data security, teacher candidates, self-efficacy

Procedia PDF Downloads 130
24838 Geomorphology Evidence of Climate Change in Gavkhouni Lagoon, South East Isfahan, Iran

Authors: Manijeh Ghahroudi Tali, Ladan Khedri Gharibvand

Abstract:

Gavkhouni lagoon, in the southeast of Isfahan (Iran), is one of the pluvial lakes left from the Quaternary era, which emerged during periods with more precipitation and less evaporation. Climate change, the lack of water resources and the drying up of the fresh water of the Zayandehrood have increased entropy and activated a dynamic that is converting the lagoon into a playa. The morphometry of 61 polygonal clay microforms in wet-zone soil, 52 polygonal clay microforms in pediplain-zone soil and 63 microforms in sulfate soil is evaluated with a fractal model. After calculating the microforms' area-perimeter fractal dimension, their turbulence level was analyzed. The fractal dimensions (DAP) obtained from the analysis of the microforms of the pediplain zone, wet zone and sulfate soils are 1.21-1.39, 1.27-1.44 and 1.29-1.41, respectively, which is indicative of turbulence in these zones. The log-log graph drawn for each region also shows a linear relationship between the logarithm of the microforms' area and perimeter, with a correlation coefficient (R²) larger than 0.96 for the wet zone, larger than 0.99 for the pediplain zone, and 0.9 for the sulfate zone. The increased turbulence in this region suggests a morphological transformation of the system and the lagoon's conversion to a new ecosystem, which can be accompanied by serious risks.
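The area-perimeter fractal dimension used here is commonly estimated from the slope of the log-perimeter versus log-area regression over all microforms in a zone (P ∝ A^(D/2), so D_AP = 2 × slope). A minimal sketch with made-up polygon measurements follows; the numbers are illustrative, not the measured microforms.

```python
import numpy as np

def area_perimeter_fractal_dimension(areas, perimeters):
    """Estimate D_AP from the slope of log(perimeter) vs log(area): P ~ A**(D/2)."""
    log_a, log_p = np.log(areas), np.log(perimeters)
    slope, intercept = np.polyfit(log_a, log_p, 1)
    r2 = np.corrcoef(log_a, log_p)[0, 1] ** 2
    return 2.0 * slope, r2

# Illustrative measurements for one group of polygonal clay microforms (cm^2, cm).
areas = np.array([4.0, 9.5, 16.2, 25.8, 36.4, 50.1])
perims = np.array([9.1, 15.0, 21.4, 28.6, 35.3, 43.0])

d_ap, r2 = area_perimeter_fractal_dimension(areas, perims)
print(f"D_AP = {d_ap:.2f}, R^2 = {r2:.3f}")
```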

Keywords: fractal, Gavkhouni, microform, Iran

Procedia PDF Downloads 274
24837 Revealing the Sustainable Development Mechanism of Guilin Tourism Based on Driving Force/Pressure/State/Impact/Response Framework

Authors: Xiujing Chen, Thammananya Sakcharoen, Wilailuk Niyommaneerat

Abstract:

China's tourism industry is in a state of shock and recovery, as COVID-19 has brought great impacts and challenges to the sector. The theory of sustainable development originates from the contradiction between increasing awareness of environmental protection and the pursuit of economic interests. The sustainable development of tourism should consider social, economic and environmental factors and develop tourism in a planned and targeted way from an overall perspective. Guilin is one of the popular tourist cities in China. However, there are several problems in Guilin tourism, such as the low quality of scenic spot construction and the low efficiency of tourism resource development. Because it is not well managed, Guilin's tourism industry also faces problems such as crowding pressure from tourist supply and demand. According to the data from 2009 to 2019, the degree of sustainable development of Guilin tourism has changed. This research aimed to evaluate the state of sustainable development of Guilin tourism using the DPSIR (driving force/pressure/state/impact/response) framework and to provide suggestions and recommendations for sustainable development in Guilin. An improved TOPSIS (technique for order preference by similarity to an ideal solution) model based on entropy weights is applied for the quantitative analysis and to analyze the mechanisms of sustainable tourism development in Guilin. The DPSIR framework organizes the indicators into five categories, under which twenty-eight indicators related to the sustainability of Guilin tourism are classified. The study analyzed and summarized the economic, social and ecological effects generated by tourism development in Guilin from 2009 to 2019. The results show that the conversion of tourism development in Guilin into regional economic benefits is more efficient than its conversion into social benefits; thus, tourism development is an important driving force of Guilin's economic growth. In addition, the study analyzed the static weights of the 28 relevant indicators of the sustainable development of tourism in Guilin and ranked them from largest to smallest. The economic and social factors related to tourism revenue carry the highest weights, which means that the economic and social development of Guilin influences the sustainable development of Guilin tourism to a greater extent; therefore, there is a two-way causal relationship between tourism development and economic growth in Guilin. At the same time, ecological development-related indicators also have relatively large weights, so ecological and environmental resources also have a great influence on the sustainable development of Guilin tourism.
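The entropy-weighted TOPSIS calculation referred to above follows a standard recipe: normalize the indicator matrix, derive each indicator's weight from its information entropy, then rank each year by its relative closeness to the ideal and anti-ideal solutions. The sketch below uses a small random matrix in place of the 28 Guilin indicators and treats all indicators as benefit-type, both of which are simplifying assumptions.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a (years x indicators) matrix of positive indicators."""
    P = X / X.sum(axis=0)                       # column-wise proportions (assumes X > 0)
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P)).sum(axis=0)
    d = 1.0 - entropy                           # degree of divergence
    return d / d.sum()

def topsis_scores(X, weights):
    """Relative closeness of each alternative (year) to the ideal solution."""
    norm = X / np.sqrt((X ** 2).sum(axis=0))    # vector normalization
    V = norm * weights
    ideal, anti = V.max(axis=0), V.min(axis=0)  # benefit-type indicators assumed
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

rng = np.random.default_rng(1)
X = rng.uniform(1, 10, size=(11, 28))           # 2009-2019 x 28 indicators (illustrative)
scores = topsis_scores(X, entropy_weights(X))
for year, s in zip(range(2009, 2020), scores):
    print(year, round(float(s), 3))
```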

Keywords: DPSIR framework, entropy weights analysis, sustainable development of tourism, TOPSIS analysis

Procedia PDF Downloads 103
24836 Effect of Varying Levels of Concentrate Ration on the Performance of Nili-Ravi Buffalo Heifer Calves

Authors: Z. M. Iqbal, M. Abdullah, K. Javed, M. A. Jabbar, A. Haque, M. Saadullah, F. Shahzad

Abstract:

The current study was conducted to determine the appropriate concentrate level for Nili-Ravi buffalo heifers. Twenty-seven buffalo heifers were randomly divided into three groups, A, B and C, with nine animals in each group. All the heifers were given free access to chopped green fodder and fresh water. In addition, the heifers of groups A, B and C were given concentrate at the rate of 0.5%, 1% and 1.5% of their body weight, respectively. The average daily dry matter intake was 2.69, 3.06 and 3.83 kg, with average daily gains of 456.09, 398.56 and 515.87 g in groups A, B and C, respectively. The feed conversion ratios of these groups were 5.89, 7.74 and 7.52, respectively. There was no significant (P>0.05) difference in the body measurements (height at withers, body length and heart girth), final body condition score and blood serum parameters (glucose, total protein and cholesterol) among the three groups. The results of the current study show no significant (P>0.05) difference in the growth rate of Nili-Ravi heifers at varying levels of concentrate, so it is cost effective to raise 6-8-month-old calves by offering concentrate at 0.5% of body weight along with free access to green fodder.

Keywords: concentrate level, buffalo heifer, body measurement, green fodder

Procedia PDF Downloads 427
24835 A Remote Sensing Approach to Calculate Population Using Roads Network Data in Lebanon

Authors: Kamel Allaw, Jocelyne Adjizian Gerard, Makram Chehayeb, Nada Badaro Saliba

Abstract:

In developing countries such as Lebanon, demographic data are hardly available due to the absence of a computerized population registration system. The aim of this study is to evaluate, using only remote sensing data, the correlations between population and the characteristics of the road network (length of primary roads, length of secondary roads, total road length, road density and percentage, and number of intersections). In order to find the influence of the different factors on the demographic data, we studied the degree of correlation between each factor and population. The results of this study show a strong correlation between population and both road density and the number of intersections.
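The correlation study described above amounts to computing, for each road-network characteristic, its Pearson correlation with the population of the study units. A minimal sketch with made-up values follows; the numbers and units are illustrative only.

```python
import numpy as np

# Illustrative values for a handful of study units (e.g., municipalities).
population = np.array([12000, 45000, 8000, 150000, 30000, 60000])
road_density = np.array([2.1, 5.8, 1.4, 14.2, 4.0, 7.5])       # km of road per km^2
intersections = np.array([35, 180, 20, 600, 120, 260])
primary_km = np.array([4.0, 9.0, 2.5, 22.0, 7.0, 11.0])

for name, feature in [("road density", road_density),
                      ("intersections", intersections),
                      ("primary road length", primary_km)]:
    r = np.corrcoef(population, feature)[0, 1]
    print(f"Pearson r (population vs {name}): {r:.2f}")
```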

Keywords: population, road network, statistical correlations, remote sensing

Procedia PDF Downloads 168
24834 A Multicopy Strategy for Improved Security Wireless Sensor Network

Authors: Tuğçe Yücel

Abstract:

A Wireless Sensor Network (WSN) is a collection of sensor nodes deployed randomly in an area for surveillance. Efficient utilization of the sensors' limited battery energy for increased network lifetime, as well as data security, are major design objectives for WSNs; so is the secure transmission of sensed data to a base station for further processing. Producing multiple copies of data packets and sending them on different paths is one strategy for this purpose, but it leads to redundant energy consumption and hence reduced network lifetime. In this work, we develop a restricted multi-copy multipath strategy in which data moving through 'frequently' or 'heavily' used sensors are copied by the sensors incident to such central nodes and sent on node-disjoint paths. We develop a mixed integer programming (MIP) model and a heuristic approach, and present some preliminary test results.
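A small sketch of the routing idea, identifying heavily used sensors and sending copies of their traffic to the base station over node-disjoint paths, is shown below with NetworkX. The grid topology, the use of betweenness centrality as the load proxy and the cap on the number of copies are illustrative assumptions, not the MIP model of the paper.

```python
import networkx as nx

# Illustrative WSN topology: a 5x5 grid of sensors; the corner node is the base station.
G = nx.grid_2d_graph(5, 5)
sink = (0, 0)

# Identify 'heavily used' sensors by betweenness centrality (a proxy for traffic load).
centrality = nx.betweenness_centrality(G)
hot = max(centrality, key=centrality.get)
print("most heavily used sensor:", hot)

# A packet arriving at a neighbour of the hot node is copied, and the copies are
# routed to the sink over node-disjoint paths instead of one shared route.
source = next(n for n in G.neighbors(hot) if n != sink)
paths = list(nx.node_disjoint_paths(G, source, sink))
for i, path in enumerate(paths[:3], start=1):       # cap the number of copies
    print(f"copy {i} follows path: {path}")
```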

Keywords: MIP, sensor, telecommunications, WSN

Procedia PDF Downloads 517
24833 Wikipedia World: A Computerized Process for Cultural Heritage Data Dissemination

Authors: L. Rajaonarivo, M. N. Bessagnet, C. Sallaberry, A. Le Parc Lacayrelle, L. Leveque

Abstract:

TCVPYR is a European FEDER (European Regional Development Fund) project which aims to promote tourism in the French Pyrenees region by leveraging its cultural heritage. It involves scientists from various domains (geographers, historians, anthropologists, computer scientists...). This paper presents a fully automated process to publish any dataset as Wikipedia articles as well as the corresponding linked information on Wikidata and Wikimedia Commons. We validate this process on a sample of geo-referenced cultural heritage data collected by TCVPYR researchers in different regions of the Pyrenees. The main result concerns the technological prerequisites, which are now in place. Moreover, we demonstrated that we can automatically publish cultural heritage data on Wikimedia.
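The paper does not show its publishing pipeline in code, but a minimal sketch of automated publishing with the Pywikibot library (one common way to script edits to Wikipedia and Wikidata) might look like the following. The page title, wikitext and label are placeholders, the record is invented for illustration, and real use requires a configured bot account with the appropriate permissions.

```python
import pywikibot

# One record of geo-referenced cultural heritage data (illustrative placeholder).
record = {
    "title": "Chapel of Example-in-the-Pyrenees",
    "summary": "A 12th-century chapel documented by the TCVPYR project.",
    "lat": 42.95, "lon": 0.14,
}

# 1) Publish a Wikipedia article generated from the dataset.
site = pywikibot.Site("en", "wikipedia")
page = pywikibot.Page(site, record["title"])
page.text = f"{record['summary']}\n\n{{{{coord|{record['lat']}|{record['lon']}}}}}"
page.save(summary="Automated publication of cultural heritage data")

# 2) Create the corresponding Wikidata item and set its label.
wikidata = pywikibot.Site("wikidata", "wikidata")
repo = wikidata.data_repository()
item = pywikibot.ItemPage(repo)
item.editEntity({"labels": {"en": record["title"]}},
                summary="Automated creation from TCVPYR dataset")
```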

Keywords: cultural heritage dissemination, digital humanities, open data, Wikimedia automated publishing

Procedia PDF Downloads 130
24832 Adaptive Decision Feedback Equalizer Utilizing Fixed-Step Error Signal for Multi-Gbps Serial Links

Authors: Alaa Abdullah Altaee

Abstract:

This paper presents an adaptive decision feedback equalizer (ADFE) for multi-Gbps serial links utilizing a fixed-step error signal extracted from the cross-points of received data symbols. The extracted signal is generated from violations of the minimum detection requirements of received data symbols at the clock and data recovery (CDR) stage. The iterations of the adaptation process search for the optimum feedback tap coefficients to maximize the data eye opening and minimize the adaptation convergence time. The effectiveness of the proposed architecture is validated using simulation results for a serial link designed in an IBM 130 nm 1.2 V CMOS technology. The data link with variable channel lengths is analyzed using Spectre from Cadence Design Systems with BSIM4 device models.
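The fixed-step adaptation can be illustrated with a behavioural model: at each symbol the feedback taps cancel post-cursor ISI from past decisions, and every tap is nudged by a fixed step whose sign comes from the error signal and the corresponding past decision (a sign-sign LMS update). The channel, number of taps and step size below are illustrative assumptions, not the values of the 130 nm design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Behavioural link model: NRZ symbols through a channel with two post-cursor ISI taps.
symbols = rng.choice([-1.0, 1.0], size=5000)
channel = np.array([1.0, 0.45, 0.2])                 # main cursor + two post-cursors
received = np.convolve(symbols, channel)[:symbols.size] + 0.02 * rng.standard_normal(symbols.size)

n_taps, mu = 2, 0.002                                # feedback taps and fixed adaptation step
taps = np.zeros(n_taps)
decisions = np.zeros(n_taps)                         # most recent past decisions
errors = 0

for k, r in enumerate(received):
    eq = r - np.dot(taps, decisions)                 # subtract estimated post-cursor ISI
    d = 1.0 if eq >= 0 else -1.0                     # slicer decision
    e = eq - d                                       # error at the decision point
    taps += mu * np.sign(e) * decisions              # fixed-step (sign-sign) tap update
    decisions = np.roll(decisions, 1)
    decisions[0] = d
    if d != symbols[k]:
        errors += 1

print("adapted feedback taps:", np.round(taps, 3))   # should approach [0.45, 0.2]
print("symbol errors:", errors)
```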

Keywords: adaptive DFE, CMOS equalizer, error detection, serial links, timing jitter, wire-line communication

Procedia PDF Downloads 126
24831 Data-Driven Market Segmentation in Hospitality Using Unsupervised Machine Learning

Authors: Rik van Leeuwen, Ger Koole

Abstract:

Within hospitality, marketing departments use segmentation to create tailored strategies and ensure personalized marketing. This study provides a data-driven approach by segmenting guest profiles via hierarchical clustering based on an extensive set of features. The industry requires understandable outcomes that help marketing departments adapt, make data-driven decisions and ultimately drive profit. A marketing department specified a business question that guides the unsupervised machine learning algorithm. Features of guests change over time; therefore, there is a probability that guests transition from one segment to another. The purpose of the study is to provide the steps in the process from raw data to actionable insights, which serve as a guideline for how hospitality companies can adopt an algorithmic approach.
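A minimal sketch of the segmentation step, standardize the guest features, cluster hierarchically, and profile the resulting segments, is given below. The feature names, the synthetic data and the number of segments are illustrative assumptions rather than the ones specified by the hotel's business question.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)

# Illustrative guest profiles (rows = guests); a real run would use the hotel's data.
guests = pd.DataFrame({
    "avg_daily_rate": rng.normal(150, 40, 300),
    "length_of_stay": rng.integers(1, 10, 300),
    "lead_time_days": rng.integers(0, 120, 300),
    "spa_spend": rng.exponential(30, 300),
})

X = StandardScaler().fit_transform(guests)

# Hierarchical (agglomerative) clustering with Ward linkage into, say, four segments.
model = AgglomerativeClustering(n_clusters=4, linkage="ward")
guests["segment"] = model.fit_predict(X)

# Profile each segment so the marketing department can interpret and act on it.
print(guests.groupby("segment").mean().round(1))
```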

Keywords: hierarchical cluster analysis, hospitality, market segmentation

Procedia PDF Downloads 111
24830 Geographic Information System for Simulating Air Traffic By Applying Different Multi-Radar Positioning Techniques

Authors: Amara Rafik, Mostefa Belhadj Aissa

Abstract:

Radar data are one of the many data sources used by ATM (Air Traffic Management) systems. These data come from air navigation radar antennas, which intercept the signals emitted by the various aircraft crossing the controlled airspace, calculate the positions of these aircraft and retransmit them to the Air Traffic Management system. For greater reliability, these radars are positioned in such a way that their coverage areas overlap, so an aircraft will be detected by at least one of them. However, the position coordinates of the same aircraft sent by these different radars are not necessarily identical. Therefore, the ATM system must calculate a single position (the radar track), which is ultimately sent to the control position and displayed on the air traffic controller's monitor. There are several techniques for calculating the radar track. Furthermore, the geographical nature of the problem requires the use of a Geographic Information System (GIS), i.e., a geographical database on the one hand and geographical processing on the other. The objective of this work is to propose a GIS for traffic simulation that reconstructs the evolution of aircraft positions over time from a multi-source radar data set by applying these different techniques.
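The paper compares several multi-radar positioning techniques without detailing them here; one of the simplest, averaging the plots that different radars report for the same aircraft at the same time step (optionally weighted by each radar's accuracy), can be sketched as follows. The data layout and the weights are illustrative assumptions, not one of the techniques evaluated in the work.

```python
from collections import defaultdict

# Illustrative plots: (timestamp, aircraft_id, radar_id, x_km, y_km).
plots = [
    (0, "AF123", "R1", 10.2, 55.1),
    (0, "AF123", "R2", 10.5, 54.8),
    (0, "AF123", "R3", 10.3, 55.3),
    (1, "AF123", "R1", 12.1, 56.0),
    (1, "AF123", "R2", 12.4, 55.7),
]
radar_weight = {"R1": 1.0, "R2": 0.8, "R3": 0.5}   # e.g., inverse of each radar's variance

by_key = defaultdict(list)
for t, ac, radar, x, y in plots:
    by_key[(t, ac)].append((radar, x, y))

# Fuse all plots of one aircraft at one time step into a single radar track point.
tracks = defaultdict(list)
for (t, ac), group in sorted(by_key.items()):
    w = sum(radar_weight[r] for r, _, _ in group)
    fused_x = sum(radar_weight[r] * x for r, x, _ in group) / w
    fused_y = sum(radar_weight[r] * y for r, _, y in group) / w
    tracks[ac].append((t, round(fused_x, 2), round(fused_y, 2)))

print(dict(tracks))
```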

Keywords: ATM, GIS, radar data, simulation

Procedia PDF Downloads 124
24829 Exploring Gaming-Learning Interaction in MMOG Using Data Mining Methods

Authors: Meng-Tzu Cheng, Louisa Rosenheck, Chen-Yen Lin, Eric Klopfer

Abstract:

The purpose of the research is to explore some of the ways in which gameplay data can be analyzed to yield results that feed back into the learning ecosystem. Back-end data were collected for all users as they played an MMOG, The Radix Endeavor, and this study reports analyses of a specific genetics quest using data mining techniques, including the decision tree method. The study revealed different reasons for quest failure between participants who eventually succeeded and those who never succeeded. Regarding in-game tool use, the trait examiner was a key tool in the quest completion process. The decision tree results subsequently showed that a lack of trait examiner usage can be made up for with additional Punnett square uses, revealing multiple pathways to success in this quest. The methods of analysis used in this study and the resulting usage patterns indicate some useful ways that gameplay data can provide insights in two main areas. The first is for game designers to know how players are interacting with and learning from their game. The second is for players themselves, as well as their teachers, to get information on how they are progressing through the game and to provide any help they may need based on strategies and misconceptions identified in the data.
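A sketch of the kind of decision-tree analysis described above: rows are quest attempts, features are counts of in-game tool uses (trait examiner, Punnett square, ...), and the target is whether the attempt succeeded. The feature set and the synthetic data are illustrative stand-ins for the Radix Endeavor back-end logs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 400

# Illustrative per-attempt tool-usage counts from gameplay logs.
trait_examiner = rng.integers(0, 6, n)
punnett_square = rng.integers(0, 8, n)
hints_viewed = rng.integers(0, 4, n)

# Success is more likely with trait-examiner use, or with extra Punnett-square use instead.
success = ((trait_examiner >= 2) | (punnett_square >= 5)).astype(int)

X = np.column_stack([trait_examiner, punnett_square, hints_viewed])
X_train, X_test, y_train, y_test = train_test_split(X, success, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=["trait_examiner", "punnett_square", "hints_viewed"]))
print("test accuracy:", round(tree.score(X_test, y_test), 2))
```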

Keywords: MMOG, decision tree, genetics, gaming-learning interaction

Procedia PDF Downloads 361
24828 From Two-Way to Multi-Way: A Comparative Study for Map-Reduce Join Algorithms

Authors: Marwa Hussien Mohamed, Mohamed Helmy Khafagy

Abstract:

Map-Reduce is a programming model widely used to extract valuable information from enormous volumes of data, and it is designed to support heterogeneous datasets. Apache Hadoop Map-Reduce is used extensively to uncover hidden patterns in applications such as data mining, SQL processing, etc. The most important operation for data analysis is the join, but the Map-Reduce framework does not directly support a join algorithm. This paper explains and compares two-way and multi-way Map-Reduce join algorithms; we also implement the MR join algorithms and show the performance of each phase. Our experimental results show that, among the two-way join algorithms, the map-side join and map-merge join take the longest time owing to the preprocessing step of sorting the data, while the reduce-side cascade join takes the longest time among the multi-way join algorithms.
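Because Map-Reduce has no built-in join, a join is expressed through the map and reduce functions themselves. The sketch below simulates a basic two-way reduce-side (repartition) join in plain Python: mappers tag each record with its source relation and emit the join key, the shuffle groups records by key, and the reducer combines the records that share a key. This illustrates the pattern only; it is not the Hadoop implementation benchmarked in the paper.

```python
from collections import defaultdict
from itertools import product

orders = [(1, "order-A"), (2, "order-B"), (1, "order-C")]        # (customer_id, order)
customers = [(1, "Alice"), (2, "Bob"), (3, "Carol")]             # (customer_id, name)

# Map phase: tag every record with its source relation, keyed by the join attribute.
def map_phase():
    for cid, order in orders:
        yield cid, ("O", order)
    for cid, name in customers:
        yield cid, ("C", name)

# Shuffle phase: group the tagged records by key (Hadoop does this between map and reduce).
groups = defaultdict(list)
for key, tagged in map_phase():
    groups[key].append(tagged)

# Reduce phase: for each key, combine every customer record with every order record.
def reduce_phase(key, values):
    names = [v for tag, v in values if tag == "C"]
    ords = [v for tag, v in values if tag == "O"]
    for name, order in product(names, ords):
        yield key, name, order

for key in sorted(groups):
    for row in reduce_phase(key, groups[key]):
        print(row)
```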

Keywords: Hadoop, MapReduce, multi-way join, two-way join, Ubuntu

Procedia PDF Downloads 492
24827 Identification System for Grading Banana in Food Processing Industry

Authors: Ebenezer O. Olaniyi, Oyebade K. Oyedotun, Khashman Adnan

Abstract:

In the food industry, high-quality production is required within a limited time to meet society's demand. In this research work, we developed a model that can replace the human operator, whose production output is low and whose decisions are slow and subject to individual differences when separating defective from healthy bananas. The model reproduces the visual judgment of human operators in deciding whether a banana is defective or healthy for food production. The work is divided into two phases. The first phase is image processing, in which several techniques such as colour conversion, edge detection, thresholding and morphological operations were employed to extract features for training and testing the network in the second phase. The features extracted in the first phase were used in the second, classification phase, where a multilayer perceptron trained with the backpropagation neural network algorithm was employed. After the network had learned and converged, it was tested in feedforward mode to determine its performance. From this experiment, a recognition rate of 97% was obtained within a limited processing time, which makes the system suitable for use in the food industry.
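The two-phase pipeline described above can be sketched with NumPy and scikit-learn: simple colour and blemish features are extracted from each banana image, and an MLP trained with backpropagation classifies the fruit as healthy or defective. The synthetic images and the particular features below are illustrative assumptions standing in for the paper's colour-conversion, edge-detection, thresholding and morphology steps.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def make_banana(defective):
    """Synthetic 32x32 RGB banana patch: yellow, with a dark blotch if defective."""
    img = np.full((32, 32, 3), [200, 180, 40], dtype=float) + rng.normal(0, 8, (32, 32, 3))
    if defective:
        r, c = rng.integers(4, 28, 2)
        img[r - 3:r + 3, c - 3:c + 3] = rng.normal(60, 10, (6, 6, 3))   # bruised area
    return np.clip(img, 0, 255)

def extract_features(img):
    """Phase 1: colour statistics plus the dark-pixel fraction from a simple threshold."""
    gray = img.mean(axis=2)                     # colour conversion to intensity
    dark_fraction = (gray < 100).mean()         # thresholding for blemishes
    return [img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean(),
            gray.std(), dark_fraction]

labels = rng.integers(0, 2, 300)                # 1 = defective, 0 = healthy
X = np.array([extract_features(make_banana(y)) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# Phase 2: multilayer perceptron trained with backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("recognition rate:", round(clf.score(X_test, y_test), 2))
```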

Keywords: banana, food processing, identification system, neural network

Procedia PDF Downloads 474
24826 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems

Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas

Abstract:

This research aims to develop an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as are the recognition and management of context information. Measuring many parameters during the transportation period and properly controlling driver work has become a problem. The number of vehicles passing a given point per time unit can be evaluated in some situations. The collected data are mainly used to establish new trips. The flow of the data is more complex in urban areas, where the movement of freight is reported in detail, including information at street level. When traffic density is extremely high in congestion cases and the traffic speed is very low, data transmission reaches its peak. Different data sets are generated depending on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks and mode-based delivery networks; the last includes different modes, in particular railways and other networks. When freight delivery is switched from one of the above network types to another, more data can be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services for drivers that includes the assessment of the multi-component infrastructure needed for the delivery of freight according to the network type. The construction of such a methodology is required to evaluate data flow conditions and overloads, and to minimize the time gaps in data reporting. The results obtained show the potential of the proposed methodological approach to support management and decision-making processes by incorporating network specifics and helping to minimize overloads in data reporting.

Keywords: transportation networks, freight delivery, data flow, monitoring, e-services

Procedia PDF Downloads 132
24825 Regulating Issues concerning Data Protection in Cloud Computing: Developing a Saudi Approach

Authors: Jumana Majdi Qutub

Abstract:

Rationale: Cloud computing has developed rapidly over the past few years. Because of the importance of protecting personal data used in cloud computing, the role of data protection in promoting trust and confidence in users' data has become an important policy priority. This research examines the key regulatory challenges raised by the growing use and importance of cloud computing, focusing on the protection of individuals' personal data. Methodology: The study describes and analyzes the governance challenges facing policymakers and industry in Saudi Arabia, with an account of anticipated governance responses. The aim of the research is to describe and define the regulatory challenges of cloud computing for policy making in Saudi Arabia and to compare them with potential compliance issues raised in respect of data transferred to EU member states. In addition, it discusses information privacy issues. Finally, the research proposes policy recommendations that would resolve the concerns surrounding the privacy and effectiveness of cloud computing frameworks for data protection. Results: There is still no clear regulation in Saudi Arabia specifically governing cloud computing, nor specialized regulation on transferring data internationally and locally. Decision makers need to review the applicable law in Saudi Arabia that protects information in cloud computing, from both an international and a local perspective, in order to identify all requirements surrounding this area. It is important to educate cloud computing users about the value of their information and their rights before putting it in the cloud, in order to avoid further legal complications, for example through an educational program that discourages giving personal information to a bank employee. Therefore, with the many kinds of cloud computing services available, it is important to have them covered by the law in all aspects.

Keywords: cloud computing, cyber crime, data protection, privacy

Procedia PDF Downloads 263
24824 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran

Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh

Abstract:

Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use existing data and the results of comparing the units under investigation to obtain an estimate of their performance. The Malmquist Productivity Index (MPI) is an important index in the evaluation of overall productivity, as it considers technological development and technical efficiency at the same time. This article proposes a model based on the multistage MPI that accounts for limited data (Grey's theory). The model can evaluate the performance of units using limited and uncertain data in a multistage process. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which contains uncertain data, to evaluate the performance of its actors. The results of solving the model showed an improvement in the accuracy of the estimated future performance of the units under investigation when Grey's system theory was used. The model can be used in any case study in which MPI is applied and the data are limited or uncertain.
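For reference, the standard two-period Malmquist Productivity Index underlying the multistage model is the geometric mean of the index evaluated with the period-t and period-(t+1) technologies, where D denotes the DEA distance functions and (x, y) the input and output vectors; this is the textbook form, not the multistage Grey extension of the paper:

```latex
MPI_{t}^{t+1}
  = \left[
      \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
      \cdot
      \frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
    \right]^{1/2}
```

A value greater than one indicates productivity growth between the two periods, and the index can be decomposed further into an efficiency-change term and a technical-change term.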

Keywords: Malmquist Index, Grey's Theory, CCR Model, network data envelopment analysis, Iran electricity power chain

Procedia PDF Downloads 170
24823 Data Mining Approach: Classification Model Evaluation

Authors: Lubabatu Sada Sodangi

Abstract:

The rapid growth in the exchange and accessibility of information via the internet leads many organisations to acquire data on their own operations. The aim of data mining is to analyse the different behaviours within a dataset through observation. However, the subset of the dataset being analysed may not display all the behaviours and relationships of the entire data and, therefore, may not represent other parts of the dataset. There is a range of techniques used in data mining to determine the hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared on the Adult dataset to find the percentage of adults who earn > 50k per year and those who earn <= 50k. The two algorithms were studied and compared using IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education: those who are married (husband), hold a Bachelor's, Master's, Doctorate or professional-school qualification, and are aged between 41 and 57 earn > 50k. The multilayer perceptron identifies marital status and capital gain as the most important predictors of income. It also shows that individuals whose capital gain is less than 6,849 and who are single, separated or widowed earn <= 50k, whereas individuals whose capital gain is > 6,849, who work > 35 hrs/week and who are older than 27 years earn > 50k. Comparing the two algorithms, both are reliable, but CHAID is the more reliable and clearly shows that relationship and education contribute to the prediction, as displayed in the data visualisation.
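The comparison described above was done in SPSS, but an equivalent sketch in Python is shown below: a decision tree stands in for CHAID (scikit-learn has no CHAID implementation) and an MLPClassifier plays the role of the multilayer perceptron, both predicting the >50K / <=50K income class from a few Adult-style attributes. The synthetic records and the target rule are illustrative, not the actual Adult dataset.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(11)
n = 1000

# Illustrative Adult-style records (not the real Adult dataset).
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "relationship": rng.choice(["Husband", "Not-in-family", "Own-child"], n),
    "education": rng.choice(["HS-grad", "Bachelors", "Masters", "Doctorate"], n),
    "capital_gain": rng.choice([0, 0, 0, 7000, 15000], n),
    "hours_per_week": rng.integers(20, 60, n),
})
# Target mimicking the reported findings: high capital gain, or married with higher education.
df["income_gt_50k"] = ((df["capital_gain"] > 6849) |
                       ((df["relationship"] == "Husband") &
                        df["education"].isin(["Bachelors", "Masters", "Doctorate"]))).astype(int)

X = pd.get_dummies(df.drop(columns="income_gt_50k"))          # one-hot encode categoricals
y = df["income_gt_50k"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Decision tree stands in for CHAID; a scaled MLP plays the multilayer perceptron role.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                  random_state=0)).fit(X_train, y_train)
print("tree (CHAID stand-in) accuracy:", round(tree.score(X_test, y_test), 2))
print("MLP accuracy:", round(mlp.score(X_test, y_test), 2))
```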

Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset

Procedia PDF Downloads 382