Search results for: Efficient resources use
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3077


197 Internal Accounting Controls

Authors: Alireza Azimi Sani, Shahram Chaharmahalie

Abstract:

Internal controls of accounting are an essential business function for a growth-oriented organization, and include the elements of risk assessment, information communications and even employees' roles and responsibilities. Internal controls of accounting systems are designed to protect a company from fraud, abuse and inaccurate data recording and help organizations keep track of essential financial activities. Internal controls of accounting provide a streamlined solution for organizing all accounting procedures and ensuring that the accounting cycle is completed consistently and successfully. Implementing a formal Accounting Procedures Manual for the organization allows the financial department to facilitate several processes and maintain rigorous standards. Internal controls also allow organizations to keep detailed records, manage and organize important financial transactions and set a high standard for the organization's financial management structure and protocols. A well-implemented system also reduces the risk of accounting errors and abuse. A well-implemented controls system allows a company's financial managers to regulate and streamline all functions of the accounting department. Internal controls of accounting can be set up for every area to track deposits, monitor check handling, keep track of creditor accounts, and even assess budgets and financial statements on an ongoing basis. Setting up an effective accounting system to monitor accounting reports, analyze records and protect sensitive financial information also can help a company set clear goals and make accurate projections. Creating efficient accounting processes allows an organization to set specific policies and protocols on accounting procedures, and reach its financial objectives on a regular basis. Internal accounting controls can help keep track of such areas as cash-receipt recording, payroll management, appropriate recording of grants and gifts, cash disbursements by authorized personnel, and the recording of assets. These systems also can take into account any government regulations and requirements for financial reporting.

Keywords: Internal controls, risk assessment, financial management.

196 Cluster Based Energy Efficient and Fault Tolerant n-Coverage in Wireless Sensor Network

Authors: D. Satish Kumar, N. Nagarajan

Abstract:

Coverage conservation and extending the network lifetime are the primary issues in wireless sensor networks. Due to the large variety of applications, coverage is subject to a wide range of interpretations. Some applications require that each point in the area be observed by only one sensor, while other applications may require that each point be covered by at least n sensors (n>1) to achieve fault tolerance. Sensor scheduling activities in existing Transparent and non-Transparent relay mode (T-NT) Mobile Multi-Hop relay networks fail to guarantee area coverage with minimal energy consumption and fault tolerance. To overcome these issues, a Cluster-based Energy Competent n-coverage scheme (CEC n-coverage scheme) is proposed to ensure the full coverage of a monitored area while saving energy. The CEC n-coverage scheme uses a novel sensor scheduling scheme based on the n-density and the remaining energy of each sensor to determine the state of all the deployed sensors, either active or asleep, as well as the state durations. Hence, it is attractive to trigger a minimum number of sensors that are able to ensure area coverage and to turn off redundant sensors to save energy and therefore extend the network lifetime. In addition, a minimum number of active sensors is determined based on the required degree of coverage and its level. A variety of numerical parameters are computed using the ns2 simulator on existing (T-NT) Mobile Multi-Hop relay networks and the CEC n-coverage scheme. Simulation results showed that the CEC n-coverage scheme in a wireless sensor network provides better performance in terms of energy efficiency, a 6.61% reduction in fault-tolerance time (in seconds), and the percentage of active sensors needed to guarantee area coverage, compared to the existing algorithm.

Keywords: Wireless sensor network, Mobile Multi-Hop relay networks, n-coverage, Cluster-based Energy Competent, Transparent and non-Transparent relay modes, fault tolerance, sensor scheduling.

195 Personnel Selection Based on Step-Wise Weight Assessment Ratio Analysis and Multi-Objective Optimization on the Basis of Ratio Analysis Methods

Authors: Emre Ipekci Cetin, Ebru Tarcan Icigen

Abstract:

The personnel selection process is considered one of the most important and most difficult issues in human resources management. At the personnel selection stage, applicants are assessed according to certain criteria and efforts are made to select the most appropriate candidate. However, this process can be complicated for the managers who carry out the staff selection. Candidates should be evaluated according to different criteria such as work experience, education, foreign language level, etc. It is crucial that a rational selection process is carried out by considering all the criteria in an integrated structure. In this study, the problem of choosing the front office manager of a five-star accommodation enterprise operating in Antalya is addressed by using multi-criteria decision-making methods. In this context, the SWARA (Step-wise Weight Assessment Ratio Analysis) and MOORA (Multi-Objective Optimization on the basis of Ratio Analysis) methods, which have relatively few applications compared with other methods, have been used together. First, the SWARA method was used to calculate the weights of the criteria and sub-criteria determined by the business. After the weights of the criteria were obtained, the MOORA method was used to rank the candidates using the ratio system and the reference point approach. Recruitment processes differ from sector to sector and from operation to operation. There are a number of criteria that must be taken into consideration by businesses in accordance with the structure of each sector. It is of utmost importance that these criteria are carefully selected and that all candidates are evaluated objectively within their framework when choosing suitable candidates for employment. In this study, the staff selection process was handled by using the SWARA and MOORA methods together.
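
As an illustration of how the two methods fit together (not taken from the paper, whose criteria and candidate data are not reproduced here), the following Python sketch computes SWARA weights from assumed comparative importance values and then ranks hypothetical candidates with the MOORA ratio system.

```python
import numpy as np

def swara_weights(s):
    """SWARA: criteria are pre-sorted by importance; s[j] is the comparative
    importance of criterion j relative to criterion j-1 (s[0] is ignored)."""
    k = np.asarray(s, dtype=float) + 1.0          # k_j = s_j + 1
    q = np.ones_like(k)
    for j in range(1, len(k)):
        q[j] = q[j - 1] / k[j]                    # recalculated weight q_j
    return q / q.sum()                            # final relative weights w_j

def moora_ratio_rank(X, w, benefit):
    """MOORA ratio system: X is candidates x criteria, w are SWARA weights,
    benefit[j] is True for benefit criteria and False for cost criteria."""
    norm = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalization per criterion
    signed = np.where(benefit, 1.0, -1.0)
    y = (norm * w * signed).sum(axis=1)           # overall assessment value
    return np.argsort(-y), y                      # best candidate first

# Hypothetical example: 4 candidates, 3 criteria (experience, education, language)
s = [0.0, 0.30, 0.20]                             # assumed comparative importances
w = swara_weights(s)
X = np.array([[7, 8, 6],
              [9, 6, 7],
              [6, 9, 8],
              [8, 7, 9]], dtype=float)
order, scores = moora_ratio_rank(X, w, benefit=np.array([True, True, True]))
print("weights:", w.round(3), "ranking (best first):", order)
```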

Keywords: Accommodation establishments, human resource management, MOORA, multi criteria decision making, SWARA.

194 Spatial Planning and Tourism Development with Sustainability Model of the Territorial Tourist with Land Use Approach

Authors: Mehrangiz Rezaee, Zabih Charrahi

Abstract:

In the last decade, with increasing tourism destinations and tourism growth, we are witnessing the widespread impacts of tourism on the economy, environment and society. Tourism and its related economy are now undergoing a transformation and, as one of the key pillars of business economics, play a vital role in the world economy. Activities related to tourism, and the provision of appropriate services in an area, like many economic sectors, require the necessary context at their origin. Given the importance of the tourism industry and the tourism potential of Yazd province in Iran, it is necessary to use a proper procedure for prioritizing different areas for proper and efficient planning. One of the most important goals of planning is foresight and the creation of balanced development in different geographical areas. This process requires an accurate study of the areas and their potential and actual capacities, as well as evaluation and understanding of the relationship between the indicators affecting the development of the region. At the global and regional level, the development of tourist resorts and the proper distribution of tourism destinations are needed to counter environmental impacts and risks. The main objective of this study is the sustainable development of suitable tourism areas. Given that tourism activities in different territorial areas require operational zoning, this study deals with the evaluation of territorial tourism using concepts such as land use, suitability and sustainable development. It is essential to understand the structure of tourism development and the spatial development of tourism using land use patterns, spatial planning and sustainable development. Tourism spatial planning implements different approaches. However, the development of tourism as well as the spatial development of tourism is complex, since tourist activities can be carried out in different areas with different purposes. Multipurpose areas are of great importance for tourism because they determine the flow of tourists. Therefore, in this paper, by studying the development and determination of tourism suitability as related to spatial development, it is possible to plan tourism spatial development by developing a model that describes the characteristics of tourism. The results of this research determine the suitability of multi-functional territorial tourism development in line with the spatial planning of tourism.

Keywords: Land use change, spatial planning, sustainability, territorial tourist, Yazd.

193 Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking

Authors: Esmeralda Hysenbelliu

Abstract:

The aim of this paper is to present an enhanced QoE (Quality of Experience) IPTV SDN-based media streaming server architecture for configuring, controlling, managing and provisioning the improved delivery of the IPTV service application with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, highly flexible and efficient service load-balancing systems are enabled, based on the load-balancing module and on the GeoIP service. These two load-balancing systems greatly improve the IPTV end-users' Quality of Experience (QoE) with optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach produced several important features and opportunities for overcoming the critical QoE metrics for the IPTV service, such as achieving a very fast zapping time (channel switching time) of less than 0.1 seconds. This approach enabled an easy and powerful transcoding system via the FFmpeg encoder, which has the ability to customize streaming dimensions, bitrates, latency management and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing stream failure time (channel failure time < 0.1 seconds) and improving the quality of streaming services.
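
The abstract does not give the FFmpeg settings used; purely as a hedged illustration of the kind of transcoding step described (custom resolution, a bitrate ceiling and a streaming container), a wrapper along these lines could be used, where the file name, bitrates and UDP target are all hypothetical.

```python
import subprocess

def transcode_for_iptv(src, dst="udp://239.0.0.1:1234",
                       width=1280, height=720,
                       video_bitrate="2500k", max_rate="2500k",
                       buf_size="5000k", audio_bitrate="128k"):
    """Illustrative FFmpeg invocation: scale the video, cap the bitrate and
    push an MPEG-TS stream, roughly the knobs mentioned in the abstract."""
    cmd = [
        "ffmpeg", "-re", "-i", src,
        "-c:v", "libx264", "-preset", "veryfast",
        "-b:v", video_bitrate, "-maxrate", max_rate, "-bufsize", buf_size,
        "-vf", f"scale={width}:{height}",
        "-c:a", "aac", "-b:a", audio_bitrate,
        "-f", "mpegts", dst,
    ]
    subprocess.run(cmd, check=True)

# transcode_for_iptv("channel_source.mp4")  # hypothetical source file
```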

Keywords: Improved QoE, OpenFlow SDN controller, IPTV service application, softwarization.

192 Isolation and Probiotic Characterization of Arsenic-Resistant Lactic Acid Bacteria for Uptaking Arsenic

Authors: Jatindra N. Bhakta, Kouhei Ohnishi, Yukihiro Munekage, Kozo Iwasaki

Abstract:

The growing hazardous health impact of arsenic (As) contamination in the environment is the impetus of the present investigation. The application of lactic acid bacteria (LAB) for the removal of toxic and heavy metals from water has been reported. This study was performed in order to isolate and characterize As-resistant LAB from mud and sludge samples for use as an efficient As-uptaking probiotic. Isolation of As-resistant LAB colonies was performed by the spread plate technique using bromocresol purple-impregnated MRS (BP-MRS) agar media provided with As at 50 μg/ml. Isolated LAB were subjected to the probiotic characterization process: acid and bile tolerance, lactic acid production, antibacterial activity and antibiotic tolerance assays. After As-resistance and removal characterizations, the LAB were identified using 16S rDNA sequencing. A total of 103 isolates were identified as As-resistant strains of LAB. The survival of 6 strains (As99-1, As100-2, As101-3, As102-4, As105-7, and As112-9) was found after passing through the sequential probiotic characterizations. The resistance pattern showed pronounced hollow zones at As concentrations >2000 μg/ml in the As99-1, As100-2, and As101-3 LAB strains, whereas it was found at ~1000 μg/ml in the remaining 3 strains. Among the 6 strains, the As uptake efficiency of As102-4 (0.006 μg/h/mg wet weight of cell) was higher (17–209%) compared to the remaining LAB. 16S rDNA sequencing data of 3 (As99-1, As100-2, and As101-3) and 3 (As102-4, As105-7, and As112-9) LAB strains clearly showed 97 to 99% (340 bp) homology to Pediococcus dextrinicus and Pediococcus acidilactici, respectively. Although no correlation was found between the metal resistance and removal efficiency of the LAB examined, the identified LAB with elevated As removal would probably be potential As-uptaking probiotic agents. Since the present experiment was concerned only with As removal from pure water, As removal and the removal mechanism under the natural conditions of the intestinal milieu should be assessed in future studies.

Keywords: Lactic acid bacteria, As-resistant, characterization, Pediococcus sp., As removal probiotic.

191 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: A. El Harraj, N. Raissouni

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination-change invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
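
A minimal Python/OpenCV sketch of the pipeline described (CLAHE for illumination invariance, a Gaussian mixture background model, then morphological cleanup) is given below; the parameter values and input file are assumptions rather than the paper's, and OpenCV's MOG2 subtractor stands in for the per-channel K=5 GMM, applied here to each CLAHE-enhanced frame for brevity.

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # assumed settings
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                               detectShadows=False)
bg_model.setNMixtures(5)                                       # K = 5 Gaussians
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

cap = cv2.VideoCapture("scene.avi")                            # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = clahe.apply(gray)                                   # illumination-equalized frame
    mask = bg_model.apply(gray)                                # foreground mask from the GMM
    mask = cv2.erode(mask, kernel)                             # remove small noise blobs
    mask = cv2.dilate(mask, kernel)                            # restore object size
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                                   # Esc to quit
        break
cap.release()
```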

Keywords: Video surveillance, background subtraction, Contrast Limited Adaptive Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.

190 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have been the major security threat. Hence, it is important to impede such intrusions. The hindrance of such intrusions entirely relies on their detection, which is the primary concern of any security tool like an Intrusion Detection System (IDS). Therefore, it is imperative to accurately detect network attacks. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. The existing intrusion detection techniques have the limitation of using the raw dataset for classification. The classifier may get confused due to redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select the features based on their sensitivity. Eigenvalues can be used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is used to perform classification. For classification purposes, Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms the existing approaches and has the capability to minimize the number of features and maximize the detection rates.
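
As a hedged sketch of the reduce-then-classify idea (PCA projecting raw features onto a principal feature space, followed by an SVM), the following scikit-learn snippet shows the flow on placeholder data; the LDA/LBP transforms, the PSO/greedy feature selection and the actual KDD'99 preprocessing are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder data standing in for preprocessed KDD'99 records:
# one row per connection record, y is 0 (normal) / 1 (attack).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 41))          # 41 raw features, as in KDD'99
y = (X[:, :5].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),                    # PCA is sensitive to feature scale
    PCA(n_components=10),                # project onto a principal feature space
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```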

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP).

189 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for the coding of their sign. It is generally assumed that there is no compression gain to be obtained from the coding of the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information on whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information on whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard images in grey scale: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet transform filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective quality (visual quality). It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature. It is shown that the proposed method is very successful and shows its performance in terms of PSNR.
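
To make the separate sign coding idea concrete, the sketch below (using PyWavelets, whose 'bior4.4' filters implement the common 9/7 biorthogonal bank) decomposes an image over five scales and splits each detail sub-band into a sign map and a magnitude map, from which an empirical sign probability can be estimated; the quantization and entropy coding stages are not shown, and the test image is synthetic.

```python
import numpy as np
import pywt

def sign_magnitude_maps(image, levels=5, wavelet="bior4.4"):
    """Five-scale decomposition, then separate sign and magnitude maps for every
    detail sub-band (a sketch of the SSC front end, not the full codec)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    maps = []
    for detail in coeffs[1:]:                      # skip the approximation band
        for band in detail:                        # horizontal, vertical, diagonal
            signs = np.sign(band)                  # -1, 0, +1
            mags = np.abs(band)
            nz = signs[mags > 0]
            p_plus = (nz > 0).mean() if nz.size else 0.5   # empirical sign probability
            maps.append((signs, mags, p_plus))
    return maps

# Synthetic 256x256 grey-scale test image standing in for Lena/Barbara/Cameraman
image = np.random.randint(0, 256, (256, 256))
for k, (_, _, p) in enumerate(sign_magnitude_maps(image)):
    print(f"sub-band {k}: P(sign = +) = {p:.3f}")
```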

Keywords: Image compression, wavelet transform, sign coding, magnitude coding.

188 DNA of Hibiscus sabdariffa Damaged by Radiation from 900 MHz GSM Antenna

Authors: A. O. Oluwajobi, O. A. Falusi, N. A. Zubbair, T. Owoeye, F. Ladejobi, M. C. Dangana, A. Abubakar

Abstract:

The technology of mobile telephony has positively enhanced human life, yet reports on the biosafety of the radiation from its antennae have been contradictory, leading to serious litigation and violent protests by residents in several parts of the world. The need for more information, as requested by the WHO in order to resolve this issue, formed the basis for this study on the effect of the radiation from a 900 MHz GSM antenna on the DNA of Hibiscus sabdariffa. Seeds of H. sabdariffa were raised in pots placed in three replicates at 100, 200, 300 and 400 metres from the GSM antennae in three selected test locations and a control where there was no GSM signal. The temperature (˚C) and the relative humidity (%) of the study sites were measured for the period of study (24 weeks). Fresh young leaves were harvested from each plant at two, eight and twenty-four weeks after sowing, and the DNA extracts were subjected to RAPD-PCR analyses. There were no significant differences between the weather conditions (temperature and relative humidity) in the study locations. However, significant differences were observed in the intensities of radiation between the control (less than 0.02 V/m) and the test (0.40-1.01 V/m) locations. Data obtained showed that DNA of samples exposed to rays from the GSM antenna had various levels of distortions, estimated at 91.67%. Distortions occurred in 58.33% of the samples between 2-8 weeks of exposure, while 33.33% of the samples were distorted between 8-24 weeks of exposure. Approximately 8.33% of the samples did not show distortions in DNA, while 33.33% of the samples had their DNA damaged twice, both at 8 and at 24 weeks of exposure. The study showed that radiation from the 900 MHz GSM antenna is potent enough to cause distortions to the DNA of H. sabdariffa even within 2-8 weeks of exposure. DNA damage was also independent of the distance from the antenna. These observations would qualify emissions from GSM masts as an environmental hazard to the existence of plant biodiversity and all life forms in general. These results should trigger efforts to prevent further erosion of plant genetic resources, which would threaten food security, and to reduce the risks posed to living organisms, thereby keeping our environment safe for our existence while we continue to enjoy the benefits of GSM technology.

Keywords: Damage, DNA, GSM antenna, radiation.

187 Towards Innovation Performance among University Staff

Authors: C. S. Quah, S. P. L. Sim

Abstract:

This study examined how individuals in their respective teams contributed to innovation performance, besides defining the term innovation in their own respective views. This study also identified factors that motivated university staff to contribute to innovation products. In addition, it examined whether there is a significant relationship between professional training level and length of service among university staff towards innovation, and to what extent the two variables contributed towards innovative products. The significance of this study is that it revealed the strengths and weaknesses of the university staff when contributing to innovation performance. Stratified random sampling was employed to determine the samples representing the population of lecturers in the study, involving 123 lecturers in one of the local universities in Malaysia. The methods employed to analyze the data were categorizing the open-ended questions into themes and using descriptive and inferential statistics for the quantitative data. This study revealed that two types of definition for the term "innovation" exist among the university staff, namely, the creation of a new product or a new approach to doing things, as well as a value-added creative way to upgrade or improve an existing process and service to be more efficient. This study found that the most prominent factor that propels them towards innovation is improving the product in order to benefit users, followed by self-satisfaction and recognition. This implies that the staff in the organization viewed the creation of innovative products as a process of growth to fulfill the needs of others and also to realize their personal potential. This study also found that there was a significant relationship between the professional training level and the length of service only for the group with 4-6 years of service among the university staff. The rest of the groups based on length of service showed no significant relationship between professional training level and innovation. Moreover, results of the study on directional measures depicted that the relationship between the length of service of 4-6 years and professional training level among the university staff is quite weak. This implies that good organization management lies on the shoulders of the key leaders who enlighten the path to be followed by the staff.

Keywords: Innovation, length of service, performance, professional training level, motivation.

186 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years from mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to the limited storage mass and the minimum interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end of life disposal kit for LEO satellites. This kit aims to deploy a taped tether that can be activated at the spacecraft end of life to perform autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. Besides, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of length, width, and thickness of the tether will be evaluated for various scenarios (i.e., different initial orbital parameters). The results will be compared to other end-of-life disposal methods with similar allocated resources. The analysis of the more innovative system’s performance with the tape coated with a thermionic material, which has a low work-function (LWT), for which no active component for the cathode is required, will also be briefly discussed. The results show that the electrodynamic tether option can be a competitive and performant solution for satellite disposal compared to other deorbit technologies.

Keywords: Deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers.

185 Similitude for Thermal Scale-up of a Multiphase Thermolysis Reactor in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

The thermochemical copper-chlorine (Cu-Cl) cycle is considered a sustainable and efficient technology for hydrogen production when linked with clean-energy systems such as nuclear reactors or solar thermal plants. In the Cu-Cl cycle, water is decomposed thermally into hydrogen and oxygen through a series of intermediate reactions. This paper investigates the thermal scale-up analysis of the three-phase oxygen production reactor in the Cu-Cl cycle, where the reaction is endothermic and the temperature is about 530 °C. The paper focuses on examining the size and number of oxygen reactors required to provide enough heat input for different rates of hydrogen production. The type of multiphase reactor used in this paper is the continuous stirred tank reactor (CSTR), heated by a half-pipe jacket. The thermal resistance of each section in the jacketed reactor system is studied to examine its effect on the heat balance of the reactor. It is found that the dominant contribution to the system thermal resistance is from the reactor wall. In the analysis, the Cu-Cl cycle is assumed to be driven by a nuclear reactor, and two types of nuclear reactors are examined as the heat source to the oxygen reactor: the CANDU Super Critical Water Reactor (CANDU-SCWR) and the High Temperature Gas Reactor (HTGR). It is concluded that the heat transfer rate that has to be provided for the CANDU-SCWR is 3-4 times that of the HTGR. The effect of the reactor aspect ratio is also examined in this paper, and it is found that increasing the aspect ratio decreases the number of reactors, and that the rate of decrease in the number of reactors diminishes as the aspect ratio increases. Finally, a comparison between the results of the heat balance and existing results of the mass balance is performed, and it is found that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
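
As a hedged illustration of the heat balance reasoning (series thermal resistances of the jacket-side film, the reactor wall and the process-side film around a jacketed CSTR), the short sketch below uses the standard cylindrical wall formulas with purely hypothetical dimensions, conductivity and heat transfer coefficients; it is not the paper's model.

```python
import math

def jacketed_wall_resistances(r_in, r_out, height, k_wall, h_in, h_out):
    """Series thermal resistances (K/W) of a cylindrical jacketed reactor wall:
    jacket-side convective film, wall conduction, process-side convective film."""
    A_in = 2 * math.pi * r_in * height                  # inside heat-transfer area
    A_out = 2 * math.pi * r_out * height                # outside (jacket) area
    R_out = 1.0 / (h_out * A_out)                       # jacket-side convection
    R_wall = math.log(r_out / r_in) / (2 * math.pi * k_wall * height)  # wall conduction
    R_in = 1.0 / (h_in * A_in)                          # process-side convection
    return R_out, R_wall, R_in

# Hypothetical numbers: 1 m inner radius, 10 mm stainless-steel wall, 2 m height
R_out, R_wall, R_in = jacketed_wall_resistances(
    r_in=1.0, r_out=1.01, height=2.0, k_wall=16.0, h_in=5000.0, h_out=8000.0)
R_total = R_out + R_wall + R_in
Q = 20.0 / R_total            # heat duty (W) for an assumed 20 K driving force
print(f"wall share of total resistance: {R_wall / R_total:.0%}, Q = {Q / 1e3:.0f} kW")
```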

Keywords: Clean energy, Cu-Cl cycle, heat transfer, sustainable energy.

184 Spatial Variation of WRF Model Rainfall Prediction over Uganda

Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Triphonia Ngailo

Abstract:

Rainfall is a major climatic parameter affecting many sectors such as health, agriculture and water resources. Its quantitative prediction remains a challenge to weather forecasters, although numerical weather prediction models are increasingly being used for rainfall prediction. The performance of six convective parameterization schemes, namely the Kain-Fritsch scheme, the Betts-Miller-Janjic scheme, the Grell-Devenyi scheme, the Grell-3D scheme, the Grell-Freitas scheme, and the New Tiedtke scheme of the Weather Research and Forecasting (WRF) model regarding quantitative rainfall prediction over Uganda is investigated using the root mean square error for the March-May (MAM) 2013 season. The MAM 2013 seasonal rainfall amount ranged from 200 mm to 900 mm over Uganda, with the northern region receiving a comparatively lower rainfall amount (200–500 mm); western Uganda (270–550 mm); eastern Uganda (400–900 mm) and the Lake Victoria basin (400–650 mm). A spatial variation in the rainfall amount simulated by the different convective parameterization schemes was noted, with the Kain-Fritsch scheme overestimating the rainfall amount over northern Uganda (300–750 mm) but presenting comparable rainfall amounts over eastern Uganda (400–900 mm). The Betts-Miller-Janjic, Grell-Devenyi and Grell-3D schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (300–600 mm). The Grell-Freitas scheme captured the rainfall amount over the northern region (250–450 mm) but underestimated rainfall over the Lake Victoria basin (150–300 mm), while the New Tiedtke scheme generally underestimated the rainfall amount over many areas of Uganda. For deterministic rainfall prediction, the Grell-Freitas scheme is recommended for rainfall prediction over northern Uganda, while the Kain-Fritsch scheme is recommended over the eastern region.
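
The root mean square error used to score each scheme is simply the square root of the mean squared difference between simulated and observed rainfall; a minimal sketch with made-up station totals follows.

```python
import numpy as np

def rmse(simulated, observed):
    """Root mean square error between simulated and observed rainfall (mm)."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

# Hypothetical MAM seasonal totals (mm) at four stations
observed = [420.0, 510.0, 760.0, 300.0]
kain_fritsch = [610.0, 540.0, 820.0, 450.0]     # made-up simulated totals
print(f"Kain-Fritsch RMSE: {rmse(kain_fritsch, observed):.1f} mm")
```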

Keywords: Convective parameterization schemes, March-May 2013 rainfall season, spatial variation of parameterization schemes over Uganda, WRF model.

183 Assessment of Groundwater Chemistry and Quality Characteristics in an Alluvial Aquifer and a Single Plane Fractured-Rock Aquifer in Bloemfontein, South Africa

Authors: Modreck Gomo

Abstract:

The evolution of groundwater chemistry and its quality is largely controlled by hydrogeochemical processes, and their understanding is therefore important for groundwater quality assessments and the protection of water resources. A study was conducted in the town of Bloemfontein, South Africa, to assess and compare the groundwater chemistry and quality characteristics in an alluvial aquifer and a single-plane fractured-rock aquifer. Nine groundwater samples were collected from monitoring boreholes drilled into the two aquifer systems during a once-off sampling exercise. Samples were collected through the low-flow purging technique and analysed for major ions and trace elements. In order to describe the hydrochemical facies and identify dominant hydrogeochemical processes, the groundwater chemistry data are interpreted using Stiff diagrams and principal component analysis (PCA) as complementary tools. The fitness of the groundwater quality for domestic and irrigation uses is also assessed. Results show that the alluvial aquifer is characterised by a Na-HCO3 hydrochemical facies while the fractured-rock aquifer has a Ca-HCO3 facies. The groundwater in both aquifers originally evolved from the dissolution of calcite rocks that are common in land surface environments. However, the groundwater in the alluvial aquifer goes through a further evolution driven by a cation exchange process, in which Na in the sediments exchanges with Ca2+ in the Ca-HCO3 hydrochemical type to result in the Na-HCO3 hydrochemical type. Despite the difference in the hydrogeochemical processes between the alluvial aquifer and the single-plane fractured-rock aquifer, this did not influence the groundwater quality. The groundwater in the two aquifers is very hard, as influenced by the elevated magnesium and calcium ions that evolve from the dissolution of carbonate minerals, which typically occurs in surface environments. Based on total dissolved solids levels (600-900 mg/L), the groundwater quality of the two aquifer systems is classified as being of fair quality. The negative potential impacts of the groundwater quality for domestic uses are highlighted.

Keywords: Alluvial aquifer, fractured-rock aquifer, groundwater quality, hydrogeochemical processes.

182 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation, due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then utilized to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method without applying a correction factor and the method that utilizes the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which uses an additive term (offset) after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of the refraction correction method depends on various factors such as the locations, image acquisition, and GPS measurement conditions. The most effective method can be selected by using statistical selection (e.g. leave-one-out cross-validation).
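
A hedged sketch of the correction step as described (an empirical factor fitted between measured true depth and SfM-MVS apparent depth, then applied to convert apparent depth to corrected depth) might look like the following; the depth values are invented for illustration.

```python
import numpy as np

# Training data: apparent (SfM-MVS) depth vs. measured true depth, in metres.
apparent = np.array([0.18, 0.25, 0.33, 0.41, 0.52, 0.60])   # hypothetical values
true = np.array([0.24, 0.34, 0.45, 0.55, 0.70, 0.81])

# Empirical correction factor as the slope of a zero-intercept least-squares fit
# (true depth = factor * apparent depth).
factor = float(np.sum(apparent * true) / np.sum(apparent ** 2))
print(f"empirical correction factor = {factor:.3f}")         # cf. 1.34 for pure refraction

# Apply the factor to convert an apparent-depth raster into corrected depth.
apparent_raster = np.array([[0.20, 0.35], [0.48, 0.57]])
corrected_raster = factor * apparent_raster
```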

Keywords: Bottom elevation, multi-view stereo, river, structure-from-motion.

181 Impact of Ownership Structure on Provision of Staff and Infrastructure for Implementing Computer Aided Design Curriculum in Universities in South-East Nigeria

Authors: Kelechi E. Ezeji

Abstract:

Instruction towards acquiring skills in the use of Computer Aided Design technologies has become a vital part of the architectural education curriculum in the digital era. Its implementation, however, requires the deployment of extra resources to build new infrastructure, the acquisition and maintenance of new equipment, the retraining of staff and the recruitment of new staff who are knowledgeable in this area. This study sought to examine the impact that the ownership structure of Nigerian universities had on the provision of staff and infrastructure for implementing a computer aided design curriculum, with a view to developing a framework for the evaluation of appropriate implementation by the institutions. A survey research design was employed. The focus was on departments of architecture in universities in south-east Nigeria accredited by the National Universities Commission. Data were obtained in the areas of infrastructure and personnel for CAD implementation. A multi-stage stratified random sampling method was adopted. The first stage of stratification involved the accredited departments. Random sampling by balloting was then carried out. At the second stage, a sample size formula was applied to obtain the number of respondents. For data analysis, the analysis of variance (ANOVA) tool for testing differences of means was used. With p < 0.05, the study found that there was a significant difference between private-funded, state-funded and federal-funded departments of architecture in the provision of personnel and infrastructure. The implication of these findings is that for successful implementation leading to the attainment of CAD proficiency to occur in every institution regardless of ownership structure, minimum evaluation guidelines need to be set. A regular comparison of implementation across institutions was recommended as a means of rating performance. This will inform better interaction with those who consistently show weakness, to challenge them towards improvement.
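
A minimal sketch of the one-way analysis of variance used for such a comparison, with invented provision scores for the three ownership groups, is given below; the real survey data are not reproduced here.

```python
from scipy import stats

# Hypothetical infrastructure-provision scores per ownership group
federal = [72, 68, 75, 70, 66]
state = [61, 58, 64, 60, 57]
private = [55, 52, 59, 50, 54]

f_stat, p_value = stats.f_oneway(federal, state, private)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("significant difference in provision between ownership structures")
```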

Keywords: Computer-aided design, curriculum, funding, infrastructure.

180 Using 3-Glycidoxypropyltrimethoxysilane Functionalized SiO2 Nanoparticles to Improve Flexural Properties of Glass Fibers/Epoxy Grid-Stiffened Composite Panels

Authors: Reza Eslami-Farsani, Hamed Khosravi, Saba Fayazzadeh

Abstract:

Lightweight and efficient structures aim to enhance the efficiency of components in various industries. Toward this end, composites are one of the most widely used materials because of their durability, high strength and modulus, and low weight. One type of advanced composite is the grid-stiffened composite (GSC) structure, which has been extensively considered in the aerospace, automotive, and aircraft industries. They are one of the top candidates for replacing some of the traditional components used in these fields. Although there are a good number of published surveys on the design aspects and fabrication of GSC structures, to our knowledge little systematic work has been reported on their material modification to improve their properties. Matrix modification using nanoparticles is an effective method to enhance the flexural properties of fibrous composites. In the present study, a silane coupling agent (3-glycidoxypropyltrimethoxysilane/3-GPTS) was introduced onto the silica (SiO2) nanoparticle surface, and its effects on the three-point flexural response of isogrid E-glass/epoxy composites were assessed. Based on the Fourier Transform Infrared (FTIR) spectra, it was inferred that the 3-GPTS coupling agent was successfully grafted onto the surface of the SiO2 nanoparticles after modification. The flexural test revealed an improvement of 16%, 14%, and 36% in stiffness, maximum load and energy absorption of the isogrid specimen filled with 3 wt.% 3-GPTS/SiO2 compared to the neat one. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure related to the load peak. In addition, the 3-GPTS functionalization had a positive effect on the flexural behavior of the multiscale isogrid composites. In conclusion, this study suggests that the addition of modified silica nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.

Keywords: Isogrid-stiffened composite panels, silica nanoparticles, surface modification, flexural properties.

179 Similarity Solutions of Nonlinear Stretched Biomagnetic Flow and Heat Transfer with Signum Function and Temperature Power Law Geometries

Authors: M. G. Murtaza, E. E. Tzirtzilakis, M. Ferdows

Abstract:

Biomagnetic fluid dynamics is an interdisciplinary field comprising engineering, medicine, and biology. Biofluid dynamics is directed towards finding and developing solutions to some human-body-related diseases and disorders. This article describes the flow and heat transfer of a two-dimensional, steady, laminar, viscous and incompressible biomagnetic fluid over a non-linear stretching sheet in the presence of a magnetic dipole. Our model is consistent with blood as a biomagnetic fluid, in the framework of biomagnetic fluid dynamics (BFD). This model is based on the principles of ferrohydrodynamics (FHD). The temperature at the stretching surface is assumed to follow a power-law variation, and the stretching velocity is assumed to have a nonlinear form with a signum (sign) function. The governing boundary layer equations with boundary conditions are simplified to coupled higher-order equations using the usual transformations. Numerical solutions of the governing momentum and energy equations are obtained by efficient numerical techniques based on the common finite difference method with central differencing, a tridiagonal matrix manipulation and an iterative procedure. Computations are performed for a wide range of the governing parameters, such as the magnetic field parameter and the power-law exponent temperature parameter, among others, and the effect of these parameters on the velocity and temperature fields is presented. It is observed that for different values of the magnetic parameter, the velocity distribution decreases while the temperature distribution increases. Besides, the finite difference results for the skin-friction coefficient and the rate of heat transfer are discussed. This study will have an important bearing on targeting applications: for a high targeting efficiency, a high magnetic field is required in the targeted body compartment.
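
The numerical core mentioned (central differences leading to a tridiagonal system solved iteratively) is typically handled with the Thomas algorithm; a generic sketch of that solver is given below, not tied to the paper's specific equations.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (len n-1), b = diagonal (len n),
    c = super-diagonal (len n-1), d = right-hand side (len n)."""
    n = len(b)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                          # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                 # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small check against NumPy's dense solver
a = np.array([1.0, 1.0, 1.0]); b = np.array([4.0, 4.0, 4.0, 4.0]); c = np.array([1.0, 1.0, 1.0])
d = np.array([5.0, 6.0, 6.0, 5.0])
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(thomas_solve(a, b, c, d), np.linalg.solve(A, d))
```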

Keywords: Biomagnetic fluid, FHD, nonlinear stretching sheet, slip parameter.

178 Data Projects for “Social Good”: Challenges and Opportunities

Authors: Mikel Niño, Roberto V. Zicari, Todor Ivanov, Kim Hee, Naveed Mushtaq, Marten Rosselli, Concha Sánchez-Ocaña, Karsten Tolle, José Miguel Blanco, Arantza Illarramendi, Jörg Besier, Harry Underwood

Abstract:

One of the application fields for data analysis techniques and technologies gaining momentum is the area of social good or “common good”, covering cases related to humanitarian crises, global health care, or ecology and environmental issues, among others. The promotion of data-driven projects in this field aims at increasing the efficacy and efficiency of social initiatives, improving the way these actions help humanity in general and people in need in particular. This application field, however, poses its own barriers and challenges when developing data-driven projects, lagging behind in comparison with other scenarios. These challenges derive from aspects such as the scope and scale of the social issue to solve, cultural and political barriers, the skills of main stakeholders and the technological resources available, the motivation to be engaged in such projects, or the ethical and legal issues related to sensitive data. This paper analyzes the application of data projects in the field of social good, reviewing its current state and noteworthy initiatives, and presenting a framework covering the key aspects to analyze in such projects. The goal is to provide guidelines to understand the main challenges and opportunities for this type of data project, as well as identifying the main differential issues compared to “classical” data projects in general. A case study is presented on the initial steps and stakeholder analysis of a data project for the inclusion of refugees in the city of Frankfurt, Germany, in order to empirically confront the framework with a real example.

Keywords: Data-Driven projects, humanitarian operations, personal and sensitive data, social good, stakeholders analysis.

177 A Qualitative Study into the Success and Challenges in Embedding Evidence-Based Research Methods in Operational Policing Interventions

Authors: Ahmed Kadry, Gwyn Dodd

Abstract:

There has been a growing call globally for police forces to embed evidence-based policing research methods into police interventions in order to better understand and evaluate their impact. This research study highlights the success and challenges that police forces may encounter when trying to embed evidence-based research methods within their organisation. Ten in-depth qualitative interviews were conducted with police officers and staff at Greater Manchester Police (GMP) who were tasked with integrating evidence-based research methods into their operational interventions. The findings of the study indicate that with adequate resources and individual expertise, evidence-based research methods can be applied to operational work, including the testing of initiatives with strict controls in order to fully evaluate the impact of an intervention. However, the findings also indicate that this may only be possible where an operational intervention is heavily resourced with police officers and staff who have a strong understanding of evidence-based policing research methods, attained for example through their own graduate studies. In addition, the findings reveal that ample planning time was needed to trial operational interventions that would require strict parameters for what would be tested and how it would be evaluated. In contrast, interviewees underscored that operational interventions with the need for a speedy implementation were less likely to have evidence-based research methods applied. The study contributes to the wider literature on evidence-based policing by providing considerations for police forces globally wishing to apply evidence-based research methods to more of their operational work in order to understand their impact. The study also provides considerations for academics who work closely with police forces in assisting them to embed evidence-based policing. This includes how academics can provide their expertise to police decision makers wanting to underpin their work through evidence-based research methods, such as providing guidance on how to evaluate the impact of their work with varying research methods that they may otherwise be unaware of.

Keywords: Evidence-based policing, evidence-based practice, operational policing, organisational change.

176 Wind Energy Resources Assessment and Micrositting on Different Areas of Libya: The Case Study in Darnah

Authors: F. Ahwide, Y. Bouker, K. Hatem

Abstract:

This paper presents a long-term wind data analysis in terms of annual and diurnal variations at different areas of Libya. Wind speed and direction data, recorded every ten minutes over a period of at least two years, are used in the analysis. The 'WindPRO' software and an Excel workbook were used for the wind statistics and energy calculations. As for Darnah, the average speeds at 10 m, 20 m and 40 m are 6.57 m/s, 7.18 m/s, and 8.09 m/s, respectively. The highest wind speeds are observed at SSW, followed by the S, WNW and NW sectors. The lowest wind speeds are observed between the N and E sectors. The most frequent wind directions are NW and NNW. Hence, wind turbines can be installed against these directions. The most powerful sector is NW (31.3% of total expected wind energy), followed by 17.9% SSW, 11.5% NNW and 8.2% WNW.

In the Excel workbook, an estimation of the annual energy yield at the positions of the Derna, Al-Maqrun, Tarhuna and Al-Asaaba meteorological masts has been done, considering a generic wind turbine of 1.65 MW (mtORRES, TWT 82-1.65 MW) at the position of each meteorological mast. Three other turbines have been tested, with a reduction of 18% over the net AEP. At 80 m, the estimated energy yield for Derna, Al-Maqrun, Tarhuna and Al-Asaaba is 6.78 GWh or 3390 equivalent hours, 5.80 GWh or 2900 equivalent hours, 4.91 GWh or 2454 equivalent hours and 5.08 GWh or 2541 equivalent hours, respectively. This seems a fair value in the context of a possible development of a wind energy project in these areas, considering a value of 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. Furthermore, an estimation of the annual energy yield at the positions of the Misalatha, Azizyah and Goterria meteorological masts has been done, considering a generic wind turbine of 2 MW. We found that, at 80 m, the estimated energy yield is 3.12 GWh or 1557 equivalent hours, 4.47 GWh or 2235 equivalent hours and 4.07 GWh or 2033 equivalent hours, respectively.

This seems a very poor value in the context of a possible development of a wind energy project in these areas, considering a value of 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. In any case, more data and a detailed wind farm study would be necessary to draw conclusions.
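
For reference, the "equivalent hours" quoted above are simply the annual energy yield divided by the turbine's rated power; the small sketch below shows the arithmetic and the approximate 2400-hour profitability screen used in the text, with the yield and rating treated as hypothetical inputs.

```python
def equivalent_hours(aep_gwh, rated_power_mw):
    """Full-load (equivalent) hours = annual energy yield / rated power."""
    return aep_gwh * 1000.0 / rated_power_mw        # GWh -> MWh, divided by MW

def economically_viable(aep_gwh, rated_power_mw, threshold_h=2400.0):
    """Approximate profitability screen used in the text (~2400 equivalent hours)."""
    return equivalent_hours(aep_gwh, rated_power_mw) >= threshold_h

# Illustrative call with a hypothetical 2 MW machine and 4.8 GWh annual yield
print(equivalent_hours(4.8, 2.0), economically_viable(4.8, 2.0))
```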

Keywords: Wind turbines, wind data, energy yield, micrositting.

175 Complex Network Approach to International Trade of Fossil Fuel

Authors: Semanur Soyyiğit Kaya, Ercan Eren

Abstract:

Energy has a prominent role in the development of nations. Countries which have energy resources also have strategic power in the international trade of energy, since energy is essential for all stages of production in the economy. Thus, it is important for countries to analyze the weaknesses and strengths of the system. On the other hand, international trade is one of the fields that can be analyzed as a complex network via network analysis. Complex network analysis is one of the tools used to analyze complex systems with heterogeneous agents and interactions between them. A complex network consists of nodes and the interactions between these nodes. In complex systems, the overall properties which emerge as a result of these interactions are distinct from the sum of the (more or less) small parts. Thus, standard approaches to international trade are too superficial to analyze these systems. Network analysis provides a new approach to analyze international trade as a network. In this network, countries constitute the nodes and trade relations (export or import) constitute the edges. It becomes possible to analyze the international trade network in terms of high-degree indicators which are specific to complex networks, such as connectivity, clustering, assortativity/disassortativity, centrality, etc. In this analysis, the international trade of crude oil and coal, which are types of fossil fuel, has been analyzed from 2005 to 2014 via network analysis. First, it has been analyzed in terms of some topological parameters such as density, transitivity, clustering, etc. Afterwards, the fit to a Pareto distribution has been analyzed via the Kolmogorov-Smirnov test. Finally, the weighted HITS algorithm has been applied to the data as a centrality measure to determine the real prominence of countries in these trade networks. The weighted HITS algorithm is a strong tool to analyze the network by ranking countries with regard to the prominence of their trade partners. We have calculated both an export centrality and an import centrality by applying the w-HITS algorithm to the data. As a result, the impacts of the trading countries have been presented in terms of high-degree indicators.
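
A compact sketch of the weighted HITS idea on a toy trade network is given below: edges point from exporter to importer and are weighted by trade value, so hub scores reflect export prominence and authority scores import prominence. The countries and flows are hypothetical, and the power iteration shown is a generic implementation rather than the study's exact procedure.

```python
import numpy as np

def weighted_hits(W, iters=100, tol=1e-10):
    """Power iteration for weighted HITS on adjacency matrix W,
    where W[i, j] is the trade flow from exporter i to importer j."""
    n = W.shape[0]
    hubs = np.ones(n) / n
    auths = np.ones(n) / n
    for _ in range(iters):
        new_auths = W.T @ hubs          # authority: weighted in-links from good hubs
        new_auths /= new_auths.sum()
        new_hubs = W @ new_auths        # hub: weighted out-links to good authorities
        new_hubs /= new_hubs.sum()
        if np.abs(new_hubs - hubs).sum() + np.abs(new_auths - auths).sum() < tol:
            hubs, auths = new_hubs, new_auths
            break
        hubs, auths = new_hubs, new_auths
    return hubs, auths                  # export centrality, import centrality

countries = ["A", "B", "C", "D"]        # hypothetical exporters/importers
W = np.array([[0, 30, 10, 5],
              [20, 0, 15, 0],
              [0, 5, 0, 25],
              [10, 0, 5, 0]], dtype=float)
hubs, auths = weighted_hits(W)
for c, h, a in zip(countries, hubs, auths):
    print(f"{c}: export centrality {h:.3f}, import centrality {a:.3f}")
```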

Keywords: Complex network approach, fossil fuel, international trade, network theory.

174 The Reason of Principles of Construction Engineering and Management Being Necessary for Contracting Firms and Their Projects Managers

Authors: Mamoon Mousa Atout

Abstract:

The construction industry is in continuous growth, not only in the Middle East region but almost all over the world. Over the last fifteen years, a big expansion and increase in different types of projects has been observed. Many infrastructure projects have been developed: high-rise buildings, big shopping malls, power sub-stations, roads, bridges, schools, universities, and many new cities with full and complete facilities. The growth and enlargement of these projects has been accomplished through many international and local contracting organizations. The senior management of these organizations depends on qualified and experienced teams who are aware of the implications of project management, construction management, engineering management and resource management from tendering until final completion of the project. This research aims to find out why principles of construction engineering and management are necessary for contracting firms and their managers. Principles of construction management help contracting organizations to accomplish and deliver projects without delay. This can be maintained by establishing detailed guidelines for updating the adopted construction management system through qualified and experienced project managers. The research focuses on the benefits of other essential skills of project planning, monitoring and control. Defining the roles and responsibilities of the contractor's project managers during tendering and execution is part of the investigated factors that will be analyzed. Other skills, like optimizing and utilizing the obtainable project resources to deliver the project within time, cost and quality, will also be investigated to find out how these factors affect the performance of contracting firms, project managers and projects. The conclusion of the research will inform the senior management team and the contractors' project managers about the benefits of implementing a construction management system and its effect upon their performance, their knowledge of contract values, and the optimal profit margin of the firm.

Keywords: Construction management, contracting firms, project managers, planning processes, roles and responsibilities.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1709
173 The Study of Tourists’ Behavior in Water Usage in Hotel Business: Case Study of Phuket Province, Thailand

Authors: A. Pensiri, K. Nantaporn, P. Parichut

Abstract:

Tourism is very important to the economy of many countries due to its large contribution to employment and income generation. However, rapidly growing tourism is also one of the major water users and can therefore have a significant, detrimental impact on the environment. Understanding guest behavior in water usage can help hotels manage water for sustainable water resources management. This research presents a study of hotel guest water usage behavior at two hotels, namely Hotel A (located in Kathu district) and Hotel B (located in Muang district) in Phuket Province, Thailand, as case studies. Primary and secondary data were collected from the hotel managers through interviews and questionnaires. The water flow rate was measured in-situ from each water supply device in the standard room type at each hotel, including hand washing faucets, bathroom faucets, the shower and the toilet flush. Among the questionnaire respondents (n = 204 for Hotel A and n = 244 for Hotel B), the majority were aged between 21 and 30 years (53% for Hotel A and 65% for Hotel B), most were foreign guests (78% in Hotel A and 92% in Hotel B) from the United States, France and Austria, and most were travelling for tourism (63% in Hotel A and 55% in Hotel B). The data showed that water consumption ranged from 188 to 507 litres per overnight guest in Hotel A and from 383 to 415 litres per overnight guest in Hotel B. These figures exceed the water efficiency benchmark set for tropical regions by the International Tourism Partnership (ITP). It is therefore recommended that guest water-saving initiatives be implemented at hotels. Moreover, the results showed that guests were highly satisfied with the hotels: front office service received the highest average scores (4.35 in Hotel A and 4.20 in Hotel B), while luxury decoration and room cleanliness received the second-highest scores in Hotel A and Hotel B, respectively. These findings can be used to improve customer service satisfaction and to focus management attention on water use for better hotel management.
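A minimal sketch of the per-guest consumption comparison described above, assuming hypothetical metered totals and guest-night counts; the benchmark value below is a placeholder, not the published ITP figure:

```python
# Hypothetical figures; replace with metered data and the published ITP
# benchmark for tropical regions (placeholder value used below).
ITP_BENCHMARK_LITRES_PER_GUEST_NIGHT = 300.0  # assumed placeholder, not the official figure

hotels = {
    "Hotel A": {"total_litres": 101_400, "guest_nights": 400},
    "Hotel B": {"total_litres": 99_600, "guest_nights": 250},
}

for name, data in hotels.items():
    per_guest = data["total_litres"] / data["guest_nights"]
    status = "exceeds" if per_guest > ITP_BENCHMARK_LITRES_PER_GUEST_NIGHT else "within"
    print(f"{name}: {per_guest:.0f} litres per guest-night ({status} the assumed benchmark)")
```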

Keywords: Hotel, tourism, Phuket, water usage.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2262
172 Optimizing Organizational Performance: The Critical Role of Headcount Budgeting in Strategic Alignment and Financial Stability

Authors: Shobhit Mittal

Abstract:

Headcount budgeting stands as a pivotal element in organizational financial management, extending beyond traditional budgeting to encompass strategic resource allocation for workforce-related expenses. This process is integral to maintaining financial stability and fostering a productive workforce, requiring a comprehensive analysis of factors such as market trends, business growth projections, and evolving workforce skill requirements. It demands a collaborative approach, primarily involving Human Resources (HR) and finance departments, to align workforce planning with an organization's financial capabilities and strategic objectives. The dynamic nature of headcount budgeting necessitates continuous monitoring and adjustment in response to economic fluctuations, business strategy shifts, technological advancements, and market dynamics. Its significance in talent management is also highlighted, aligning financial planning with talent acquisition and retention strategies to ensure a competitive edge in the market. The consequences of incorrect headcount budgeting are explored, showing how it can lead to financial strain, operational inefficiencies, and hindered strategic objectives. Case studies such as IBM's strategic workforce rebalancing and Microsoft's shift for long-term success underscore the importance of aligning headcount budgeting with organizational goals. These examples illustrate that effective headcount budgeting transcends its role as a financial tool, emerging as a strategic element crucial for an organization's success. This necessitates continuous refinement and adaptation to align with evolving business goals and market conditions, highlighting its role as a key driver in organizational success and sustainability.

Keywords: Strategic planning, fiscal budget, headcount planning, resource allocation, financial management, decision-making, operational efficiency, risk management, headcount budget.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 54
171 Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks

Authors: Maria E. Manioudaki, Panayiota Poirazi

Abstract:

Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion and protein-protein interactions, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and for different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) at least 6 novel regulatory interactions are identified that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
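A minimal sketch of the kind of mapping described above, assuming synthetic expression data (the GRAM-derived modules, actual TF lists, and network sizes used in the study are not reproduced here); a small multilayer perceptron maps upstream TF-regulator expression to the expression of genes in one module:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: rows are conditions/time points, columns are expression values.
n_conditions, n_tf_regulators, n_module_genes = 120, 8, 5
X = rng.normal(size=(n_conditions, n_tf_regulators))   # expression of upstream TF-regulators
true_w = rng.normal(size=(n_tf_regulators, n_module_genes))
Y = np.tanh(X @ true_w) + 0.1 * rng.normal(size=(n_conditions, n_module_genes))  # module gene expression

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=0)

# One hidden layer, loosely analogous to the intermediate TF layer of the cascade.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, Y_train)

print("R^2 on held-out conditions:", model.score(X_test, Y_test))
```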

Keywords: Gene modules, artificial neural networks, yeast, stress.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1435
170 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. As information technology becomes increasingly pervasive, we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form a part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overheads, key size and bandwidth. In particular, the RSA algorithm is used in many applications to provide security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most smart-card chips cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials, which allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on the factorization or discrete logarithm problems. This means that, given sufficient computational resources and time, an adversary should still not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a present-day demand for applications that need security enforcement techniques and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) and enterprise grids are proven approaches for developing high-end computing systems, and by utilizing them the performance of NTRU can be improved through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
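A minimal sketch of the core arithmetic NTRU relies on, namely multiplication in the truncated polynomial ring Z[x]/(x^N - 1) with coefficients reduced mod q; this illustrates only the ring operation, not the GridNtru/Alchemi implementation, and the small parameters below are illustrative rather than a real NTRU parameter set:

```python
import numpy as np

def ring_multiply(f, g, N, q):
    """Cyclic convolution of two degree-(N-1) polynomials in Z[x]/(x^N - 1), coefficients mod q."""
    h = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            h[(i + j) % N] += f[i] * g[j]   # x^(i+j) wraps around because x^N = 1
    return h % q

# Illustrative small parameters; real NTRU parameter sets use much larger N and q.
N, q = 11, 32
f = np.array([1, -1, 0, 1, 0, 0, -1, 0, 1, 1, -1])   # small coefficients, as in NTRU keys
g = np.array([0, 1, 1, 0, -1, 0, 1, 0, 0, -1, 1])
print(ring_multiply(f, g, N, q))
```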

Keywords: Alchemi, GridNtru, Ntru, PKCS.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1656
169 Effect of Urea Deep Placement Technology Adoption on the Production Frontier: Evidence from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, William Adzawla

Abstract:

Rice is an important staple crop, with current demand higher than the domestic supply in Ghana. This has led to a high and unfavourable import bill. Therefore, recent policies and interventions in the agricultural sub-sector aim at promoting various improved agricultural technologies in order to improve domestic production and reduce the importation of rice. In this study, we examined the effect of the adoption of Urea Deep Placement (UDP) technology by rice farmers on the position of the production frontier. The study involved 200 farmers selected through a multi-stage sampling technique in the Northern region of Ghana. A Cobb-Douglas stochastic frontier model was fitted. The results showed that the adoption of UDP technology shifts the output frontier outward and also moves the farmers closer to the frontier. Farmers were also operating under diminishing returns to scale, which calls for redress. Other factors that significantly influenced rice production were farm size, labour, the use of certified seeds and NPK fertilizer. Although there was an opportunity for improvement, the farmers were highly efficient (92%) compared to previous studies. Farmers' efficiency was improved through increased education, household size, experience, access to credit, and lack of extension service provision by MoFA. The study recommends the revision of Ghana's agricultural policy to include the UDP technology. Agricultural Extension officers of the Ministry of Food and Agriculture (MoFA) should be trained on the UDP technology to support IFDC's drive to improve adoption by rice farmers. Rice farmers are also encouraged to expand their farm lands, improve plant population, and increase fertilizer usage to improve yields. Mechanisms through which credit can be made easily accessible and effectively utilised should be identified and promoted.
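A minimal sketch of the Cobb-Douglas production relationship underlying the analysis above, using synthetic farm-level data; a simple log-linear least-squares fit is shown as an approximation, whereas a full stochastic frontier model decomposes the error into noise and inefficiency components and is normally estimated with specialized packages:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # number of farms (synthetic)

# Synthetic inputs: farm size (ha), labour (person-days), fertilizer (kg).
farm_size = rng.uniform(0.5, 5.0, n)
labour = rng.uniform(20, 200, n)
fertilizer = rng.uniform(10, 150, n)

# Synthetic log-output generated from a Cobb-Douglas form with a noise term and a
# half-normal inefficiency term, mimicking the stochastic frontier setup.
inefficiency = np.abs(rng.normal(0, 0.15, n))
noise = rng.normal(0, 0.05, n)
log_output = (1.0 + 0.4 * np.log(farm_size) + 0.3 * np.log(labour)
              + 0.2 * np.log(fertilizer) + noise - inefficiency)

# Log-linear least-squares fit of the Cobb-Douglas form: ln(y) = b0 + sum(b_i * ln(x_i)).
X = np.column_stack([np.ones(n), np.log(farm_size), np.log(labour), np.log(fertilizer)])
coeffs, *_ = np.linalg.lstsq(X, log_output, rcond=None)
print("Estimated elasticities (farm size, labour, fertilizer):", coeffs[1:])
print("Returns to scale (sum of elasticities):", coeffs[1:].sum())
```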

Keywords: Efficiency, rice farmers, stochastic frontier, UDP technology.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 918
168 Person Identification using Gait by Combined Features of Width and Shape of the Binary Silhouette

Authors: M.K. Bhuyan, Aragala Jagan.

Abstract:

Current image-based individual human recognition methods, such as fingerprint, face, or iris biometric modalities, generally require a cooperative subject, views from certain aspects, and physical contact or close proximity. These methods cannot reliably recognize non-cooperating individuals at a distance in the real world under changing environmental conditions. Gait, which concerns recognizing individuals by the way they walk, is a relatively new biometric without these disadvantages. The inherent gait characteristic of an individual makes it irreplaceable and useful in visual surveillance. In this paper, an efficient gait recognition system for human identification is proposed, based on two features: the width vector of the binary silhouette and MPEG-7 region-based shape descriptors. In the proposed method, foreground objects (i.e., humans and other moving objects) are extracted by estimating the background with a Gaussian Mixture Model (GMM), and a median filtering operation is subsequently performed to remove noise from the background-subtracted image. A moving target classification algorithm is used to separate human beings (i.e., pedestrians) from other foreground objects (e.g., vehicles); shape and boundary information is used in this classification algorithm. Subsequently, the width vector of the outer contour of the binary silhouette and the MPEG-7 Angular Radial Transform coefficients are taken as the feature vector. Next, Principal Component Analysis (PCA) is applied to the selected feature vector to reduce its dimensionality. These extracted feature vectors are used to train a Hidden Markov Model (HMM) for identifying individuals. The proposed system is evaluated using gait sequences, and the experimental results show the efficacy of the proposed algorithm.
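A minimal sketch of the front end of such a pipeline, assuming a video file and OpenCV/scikit-learn; the input file name, thresholds, and PCA dimensionality are illustrative, and the MPEG-7 ART descriptor and HMM training stages of the paper are not reproduced here:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

# GMM-based background subtraction (MOG2), corresponding to the GMM step of the pipeline.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)

def silhouette_width_vector(mask, n_rows=64):
    """Width of the binary silhouette at each of n_rows horizontal slices."""
    mask = cv2.resize(mask, (64, n_rows), interpolation=cv2.INTER_NEAREST)
    return (mask > 0).sum(axis=1).astype(float)

width_vectors = []
cap = cv2.VideoCapture("walking_sequence.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)
    fg = cv2.medianBlur(fg, 5)                  # remove noise from the background-subtracted image
    _, silhouette = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)
    width_vectors.append(silhouette_width_vector(silhouette))
cap.release()

# Reduce the dimensionality of the width-vector features with PCA before HMM training.
if width_vectors:
    features = np.array(width_vectors)
    pca = PCA(n_components=min(8, features.shape[0], features.shape[1]))
    reduced = pca.fit_transform(features)
    print("Reduced feature shape:", reduced.shape)
```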

Keywords: Gait recognition, Gaussian Mixture Model, Principal Component Analysis, MPEG-7 Angular Radial Transform.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1881