Search results for: valid length
1287 Genetic Characteristics of Chicken Anemia Virus Circulating in Northern Vietnam
Authors: Hieu Van Dong, Giang Thi Huong Tran, Giap Van Nguyen, Tung Duy Dao, Vuong Nghia Bui, Le Thi My Huynh, Yohei Takeda, Haruko Ogawa, Kunitoshi Imai
Abstract:
Chicken anemia virus (CAV) has a ubiquitous, worldwide distribution in chicken production. Our group previously reported a high seroprevalence of CAV in chickens in northern Vietnam. In the present study, 330 tissue samples collected from commercial and breeder chicken farms in eleven provinces in northern Vietnam were tested for CAV infection. We found that 157 out of 330 (47.58%) chickens were positive for CAV genes by real-time PCR. Nine CAV strains obtained from different locations and times were subjected to full-length sequencing of the CAV VP1 gene. Phylogenetic analysis of the Vietnamese CAV vp1 gene indicated that the CAVs circulating in northern Vietnam were divided into three distinct genotypes, II, III, and V, and did not cluster with the vaccine strains. Among the three genotypes, genotype III was the major genotype widely spread in Vietnam and included three sub-genotypes, IIIa, IIIb, and IIIc. The Vietnamese CAV strains were closely related to the Chinese, Taiwanese, and USA strains. All the CAV isolates had glutamine at amino acid position 394 in the VP1 gene, suggesting that they might be highly pathogenic strains. One strain was assigned to genotype V, which had not previously been reported for Vietnamese CAVs. Additional studies are required to further evaluate the pathogenicity of CAV strains circulating in Vietnam.
Keywords: chicken anemia virus, genotype, genetic characteristics, Vietnam
Procedia PDF Downloads 167
1286 Measuring the Biomechanical Effects of Worker Skill Level and Joystick Crane Speed on Forestry Harvesting Performance Using a Simulator
Authors: Victoria L. Chester, Usha Kuruganti
Abstract:
The forest industry is a major economic sector of Canada and also one of the most dangerous industries for workers. The use of mechanized mobile forestry harvesting machines has successfully reduced the incidence of injuries in forest workers related to manual labor. However, these machines have also created additional concerns, including a steep machine operation learning curve, an increased length of the workday, repetitive strain injury, cognitive load, physical and mental fatigue, and increased postural loads due to sitting in a confined space. It is critical to obtain objective performance data for employers to develop appropriate work practices for this industry; however, ergonomic field studies of this industry are lacking, mainly due to the difficulties in obtaining comprehensive data while operators are cutting trees in the woods. The purpose of this study was to establish a measurement and experimental protocol to examine the effects of worker skill level and movement training speed (joystick crane speed) on harvesting performance using a forestry simulator. A custom wrist angle measurement device was developed as part of the study to monitor Euler angles during operation of the simulator. The device consisted of two accelerometers, a Bluetooth module, three 3 V coin cells, a microcontroller, a voltage regulator, and application software. Harvesting performance and crane data were provided by the simulator software and included tree-to-frame collisions, crane-to-tree collisions, boom tip distance, number of trees cut, etc. A pilot study of three operators with various skill levels was conducted to identify factors that distinguish highly skilled operators from novice or intermediate operators. Dependent variables such as reaction time, math skill, past work experience, training movement speed (e.g., joystick control speeds), harvesting experience level, muscle activity, and wrist biomechanics were measured and analyzed. A 10-channel wireless surface EMG system was used to monitor the amplitude and mean frequency of 10 upper extremity muscles pre- and post-performance on the forestry harvesting simulator. The results of the pilot study showed inconsistent changes in median frequency pre- and post-operation, but there was an increase in the activity of the flexor carpi radialis, anterior deltoid, and upper trapezius of both arms. The wrist sensor results indicated that wrist supination and pronation occurred more than flexion and extension, with radial-ulnar rotation demonstrating the least movement. Overall, wrist angular motion increased as the crane speed increased from slow to fast. Further data collection is needed and will help industry partners determine the factors that separate operator skill levels, identify optimal training speeds, and determine the length of training required to bring new operators to an efficient skill level. In addition to effective employment training programs, results of this work will be used for selective employee recruitment strategies to improve employee retention after training. Further, improved training procedures and knowledge of the physical and mental demands on workers will lead to highly trained and efficient personnel, reduced risk of injury, and optimal work protocols.
Keywords: EMG, forestry, human factors, wrist biomechanics
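The custom wrist device described above derives Euler angles from accelerometer readings; a minimal sketch of how static roll and pitch can be estimated from a single tri-axial accelerometer is given below. The axis convention, function name, and sample values are illustrative assumptions, not the authors' firmware.

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate static roll and pitch (degrees) from one tri-axial accelerometer
    reading, assuming gravity is the only acceleration acting on the sensor."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    return roll, pitch

# Example reading in units of g (illustrative values only)
print(roll_pitch_from_accel(0.10, -0.05, 0.99))
```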
Procedia PDF Downloads 145
1285 Fluorescence in situ Hybridization (FISH) Detection of Bacteria and Archaea in Fecal Samples
Authors: Maria Nejjari, Michel Cloutier, Guylaine Talbot, Martin Lanthier
Abstract:
The fluorescence in situ hybridization (FISH) technique is a staining method that allows the identification, detection, and quantification of microorganisms without prior cultivation by means of epifluorescence and confocal laser scanning microscopy (CLSM). Oligonucleotide probes have been used to detect bacteria and archaea that colonize the cattle and swine digestive systems. These bacterial strains were obtained from fecal samples derived from cattle manure and swine slurry. The samples were collected at three different pit levels, A, B, and C, of the same height. Two collection depths were considered: one just under the pit's surface and the second at the bottom of the pit. Cells were fixed, and FISH was performed using oligonucleotides of 15 to 25 nucleotides in length labeled with a fluorescent molecule, Cy3 or Cy5. Double hybridization using a Cy3 probe targeting bacteria (Cy3-EUB338-I) along with a Cy5 probe targeting archaea (Cy5-ARCH915) gave a better signal. The CLSM images show that there are more bacteria than archaea in swine slurry. However, the choice of fluorescent probes is critical for achieving double hybridization and a unique signature for each microorganism. The FISH technique is an easy way to detect pathogens like E. coli O157, Listeria, and Salmonella that easily contaminate water streams, agricultural soils and, consequently, food products and endanger human health.
Keywords: archaea, bacteria, detection, FISH, fluorescence
Procedia PDF Downloads 387
1284 Effects of Different Fiber Orientations on the Shear Strength Performance of Composite Adhesive Joints
Authors: Ferhat Kadioglu, Hasan Puskul
Abstract:
A composite material with carbon fiber and a polymer matrix was used as the adherend for manufacturing adhesive joints. In order to evaluate the effect of different fiber orientations on joint performance, adherends with 0°, ±15°, ±30°, and ±45° fiber orientations were used in a single lap joint configuration. The joints, with an overlap length of 25 mm, were prepared according to the ASTM 1002 specifications and subjected to tensile loading. The structural adhesive used was a two-part epoxy cured at 70°C for an hour. First, the mechanical behavior of the adherends was measured using a three-point bending test, in which stress at failure and elastic modulus were considered. The results were compared with theoretical values obtained using the rule of mixtures. Then, the joints were manufactured in a specially prepared jig after a proper surface preparation. Experimental results showed that the fiber orientations of the adherends affected the joint performance considerably; the joints with ±45° adherends experienced the worst shear strength, half that of those with 0° adherends, and in general, there was a strong relationship between fiber orientation and failure mechanism. Delamination problems were observed for many joints, which were thought to be due to peel effects at the ends of the overlap. It was confirmed that the surface preparation applied to the adherend surfaces was adequate. For further explanation of the results, a numerical study should be carried out, possibly using non-linear analysis.
Keywords: composite materials, adhesive bonding, bonding strength, lap joint, tensile strength
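The abstract compares measured adherend properties with theoretical values from the rule of mixtures; a minimal sketch of the longitudinal modulus estimate is shown below. The fibre and matrix moduli and the fibre volume fraction are placeholder values, not those of the tested laminate.

```python
def rule_of_mixtures_modulus(E_fibre, E_matrix, V_fibre):
    """Longitudinal modulus of a unidirectional ply by the rule of mixtures:
    E1 = Vf*Ef + (1 - Vf)*Em."""
    return V_fibre * E_fibre + (1.0 - V_fibre) * E_matrix

# Placeholder values (GPa) for a carbon/epoxy ply with 60% fibre volume fraction
print(rule_of_mixtures_modulus(E_fibre=230.0, E_matrix=3.5, V_fibre=0.6))
```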
Procedia PDF Downloads 370
1283 Effects of Continuous and Periodic Aerobic Exercises on C Reactive Protein in Overweight Women
Authors: Maesoomeh Khorshidi Mehr, Mohammad Sajadian, Shadi Alipour
Abstract:
The purpose of the present study was to compare the effects of eight weeks of continuous and periodic aerobic exercise on serum levels of CRP in overweight women. 36 women aged between 20 and 35 years from the city of Ahwaz were randomly selected as the sample of the study. This sample was further divided into three groups (n = 12): continuous aerobic exercise, periodic aerobic exercise, and control. Subjects in the continuous and periodic aerobic exercise groups participated in 8 weeks of specialized exercise, while the control group subjects did not take part in any regular physical activity program. Blood samples were collected from subjects 24 hours prior to and 48 hours after the intervention period. Afterwards, the serum level of CRP was measured for each blood sample. Results showed that BMI and the serum level of CRP were both significantly reduced as a result of aerobic exercise. However, no statistically significant difference was recorded between the effects of the two aerobic exercise types. Eight weeks of aerobic exercise will probably result in reduced inflammation and cardiovascular disease risk in overweight women. The reason for the lack of difference between the effects of continuous and periodic aerobic exercise may lie in the similarity of the average intensity and length of the administered physical activities.
Keywords: heart diseases, aerobic exercise, inflammation, CRP, overweight
Procedia PDF Downloads 201
1282 Trade-Offs between Verb Frequency and Syntactic Complexity in Children with Developmental Language Disorder
Authors: Pui I. Chao, Shanju Lin
Abstract:
Purpose: Children with developmental language disorder (DLD) have persistent language difficulties and often face great challenges when demands are high. The aim of this study was to investigate whether verb frequency would trade off with syntactic complexity when they talk. Method: Forty-five children with DLD, 45 chronological-age matches with typical development (AGE), and 45 MLU matches with typical development (MLU), all Mandarin speakers, were selected from a previous study. Language samples were collected in three contexts: conversation about the children's family and school, story retelling, and free play. MLU, verb density, utterance length difference, verb density difference, and average verb frequency were calculated and further analyzed by ANOVAs. Results: Children with DLD and their MLU matches produced shorter utterances and used fewer verbs in expressions than the AGE matches. Compared to their AGE matches, the DLD group used more verbs and verbs with lower frequency in shorter utterances but used fewer verbs and verbs with higher frequency in longer utterances. Conclusion: Mandarin-speaking children with DLD showed difficulties in verb usage and were more vulnerable to trade-offs than their age-matched peers in utterances with high demand. As a result, task demand should be taken into account as speech-language pathologists assess whether children with DLD have adequate abilities in verb usage.
Keywords: developmental language disorder, syntactic complexity, trade-offs, verb frequency
Procedia PDF Downloads 154
1281 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate has been about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment of <4 points, an ICU stay of less than 24 hours, or no CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, was regarded as an event, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint, and sedative and hypnotic drugs. After feature data cleaning, processing, and supplementation by the KNN interpolation method, a total of 54,595 events were available for machine learning analysis. Events from May 1 to November 30, 2022, were used as the model training data, of which 80% formed the training set and 20% the internal validation set; ICU events from December 1 to December 31, 2022, were used as the external validation set. Finally, model inference and performance evaluation were performed, and the model was then retrained by adjusting the model parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The average accuracy of internal validation was highest for Random Forest (AUC = 0.86); the average accuracy of external validation was highest for Random Forest and XGBoost (AUC = 0.86); and the average accuracy of cross-validation was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist in the real-time assessment of ICU patients, so more objective and continuous monitoring data are not available to help clinical staff accurately identify and predict the occurrence of delirium. It is hoped that the development of predictive models through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, together with PADIS delirium care measures, provide individualized non-drug interventions to maintain patient safety and improve the quality of care.
Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
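The modelling pipeline described above (KNN imputation of the selected features, an 80/20 split for training and internal validation, tree-based classifiers, and AUC-based evaluation) can be sketched as follows. The file name, feature and label column names, and parameter choices are illustrative assumptions, not the authors' actual code.

```python
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score

# Hypothetical event-level table: one row per 8-hour CAM-ICU window
df = pd.read_csv("icu_events.csv")          # assumed file name
features = ["age", "sex", "icu_stay_hours", "visual_abnormality",
            "auditory_abnormality", "rass", "apache2",
            "invasive_catheters", "restraint", "sedative_hypnotic"]
X = KNNImputer(n_neighbors=5).fit_transform(df[features])
y = df["delirium_next_8h"]                  # assumed label column

# 80/20 split for training and internal validation
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                             stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("internal AUC:", roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))
print("cross-val ACC:", cross_val_score(clf, X, y, cv=5).mean())
```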
Procedia PDF Downloads 75
1280 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects
Authors: Rabea Sefrin, Christian Glock, Juergen Schnell
Abstract:
The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In the case of existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level are usually changed in the course of the structural measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of the existing structure's concrete and the derived mechanical parameters are of central importance for the recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and is thus underestimated. The reasons for this are the small number and large scatter of the material properties of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, the load is usually transferred over the area with higher stiffness and consequently with higher compressive strength. Therefore, existing strength variations within a component play only a subordinate role due to rearrangement effects. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength is analyzed in experimental small-scale tests. The coefficients of variation commonly found in practice are reproduced by dividing the test specimens into several layers consisting of different concretes, which are monolithically connected to each other. From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the test results. In a subsequent parameter study, a large number of combinations that had not yet been investigated experimentally is considered. Thus, the influence of individual parameters on the size and characteristics of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a higher number of existing structures can be maintained without structural measures. The preservation of existing structures is not only decisive from an economic, sustainability, and resource-saving point of view but also represents an added value for cultural and social aspects.
Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation
Procedia PDF Downloads 118
1279 The Learning Loops in the Public Realm Project in South Verona: Air Quality and Noise Pollution Participatory Data Collection towards Co-Design, Planning and Construction of Mitigation Measures in Urban Areas
Authors: Massimiliano Condotta, Giovanni Borga, Chiara Scanagatta
Abstract:
Urban systems are places where the various actors involved interact and come into conflict, particularly over topics such as traffic congestion and security. Air and noise pollution are also frequent topics of discussion, and often of conflict, because of their strong complexity. For air pollution, the complexity stems from the fact that atmospheric pollution is due to many factors, but above all, observing and measuring the amount of pollution of a transparent, mobile, and ethereal element like air is very difficult. The condition perceived by the inhabitants often does not coincide with the real conditions, because it is influenced, sometimes positively and sometimes negatively, by many other factors such as the presence or absence of natural elements like trees or rivers. The same problems are seen with noise pollution, which receives even less consideration as an issue although it is just as problematic as air quality. Starting from these opposing positions, it is difficult to identify and implement valid, and at the same time shared, mitigation solutions for the problem of urban pollution (air and noise pollution). The LOOPER (Learning Loops in the Public Realm) project, described in this paper, aims to build and test a methodology and a platform for a participatory co-design, planning, and construction process within a learning loop. This approach has several novelties, of which the three most relevant are the following. The first is that citizen participation starts from the identification of problems and the air quality analysis through participatory data collection, and continues through all the process steps (design and construction). The second is that the methodology is characterized by a learning loop process. This means that after the first cycle of (1) problem identification, (2) planning and definition of design solutions, and (3) construction and implementation of mitigation measures, the effectiveness of the implemented solutions is measured and verified through a new participatory data collection campaign. In this way, it is possible to understand whether the policies and design solutions had a positive impact on the territory. As a result of the learning process produced by the first loop, it will be possible to improve the design of the mitigation measures and start the second loop with new and more effective measures. The third relevant aspect is that citizen participation is carried out via Urban Living Labs that involve all stakeholders of the city (citizens, public administrators, associations of all urban stakeholders, …) and that the Urban Living Labs last for the entire cycle of the design, planning, and construction process. The paper describes in detail the LOOPER methodology and the technical solutions adopted for the participatory data collection and the design and construction phases.
Keywords: air quality, co-design, learning loops, noise pollution, urban living labs
Procedia PDF Downloads 365
1278 Long Waves Inundating through and around an Array of Circular Cylinders
Authors: Christian Klettner, Ian Eames, Tristan Robinson
Abstract:
Tsunamis are characterised by very long time periods and can have devastating consequences when they inundate built-up coastal regions, as in the 2004 Indian Ocean and 2011 Tohoku tsunamis. This work aims to investigate the effect of these long waves on the flow through and around a group of buildings, which are abstracted as circular cylinders. The research approach used in this study combined experiments and numerical simulations. Large-scale experiments were carried out at HR Wallingford. The novelty of these experiments lies in (I) the number of bodies present (up to 64), (II) the long period of the input waves (80 seconds), and (III) the width of the tank (4 m), which gives the unique opportunity to investigate three length scales, namely the diameter of the building, the diameter of the array, and the width of the tank. To complement the experiments, dam break flow past the same arrays is investigated using three-dimensional numerical simulations in OpenFOAM. Dam break flow was chosen because it is often used as a surrogate for a tsunami in previous research, and because well-defined initial conditions and high-quality previous experimental data for the case of a single cylinder are available. The focus of this work is to better understand the effect of the solid void fraction on the force and flow through and around the array. New qualitative and quantitative diagnostics are developed and tested to analyse the complex coupled interaction between the cylinders.
Keywords: computational fluid dynamics, tsunami, forces, complex geometry
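One simple way to quantify the solid void fraction mentioned above, taken here as the fraction of the array footprint occupied by the cylinders, is sketched below; the cylinder count and diameters are placeholder values used only to illustrate the ratio of the cylinder and array length scales, and the paper's exact definition may differ.

```python
import math

def solid_fraction(n_cylinders, d_cyl, d_array):
    """Fraction of a circular array footprint occupied by circular cylinders."""
    area_cylinders = n_cylinders * math.pi * d_cyl**2 / 4.0
    area_array = math.pi * d_array**2 / 4.0
    return area_cylinders / area_array

# Placeholder values: 64 cylinders of 0.05 m diameter inside a 1.0 m array
print(solid_fraction(64, 0.05, 1.0))   # ~0.16
```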
Procedia PDF Downloads 195
1277 Accurate Position Electromagnetic Sensor Using Data Acquisition System
Authors: Z. Ezzouine, A. Nakheli
Abstract:
This paper presents a high position electromagnetic sensor system (HPESS) that is applicable to moving object detection. The authors have developed a high-performance position sensor prototype dedicated to a students' laboratory. The challenge was to obtain a highly accurate, real-time sensor able to calculate position, length, or displacement. An electromagnetic solution based on a two-coil induction principle was adopted. The HPESS converts mechanical motion to electric energy with direct contact, and the output signal can then be fed to an electronic circuit. The voltage output change from the sensor is captured by a data acquisition system using LabVIEW software, and the displacement of the moving object is determined. The measured data are transmitted to a PC in real time via a DAQ (NI USB-6281). This paper also describes the data acquisition analysis and the conditioning card developed specially for sensor signal monitoring. The data are then recorded and viewed using a user interface written in National Instruments LabVIEW software. On-line displays of the time and voltage of the sensor signal provide a user-friendly data acquisition interface. The sensor provides an uncomplicated, accurate, reliable, and inexpensive transducer for highly sophisticated control systems.
Keywords: electromagnetic sensor, accuracy, data acquisition, position measurement
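The displacement is obtained from the acquired voltage; a minimal sketch of one way to perform the voltage-to-displacement conversion offline (a polynomial calibration fitted to reference points, then applied to new readings) is shown below. The calibration data and polynomial order are assumptions for illustration; the actual system performs this step in LabVIEW.

```python
import numpy as np

# Hypothetical calibration points: sensor voltage (V) at known displacements (mm)
displacement_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
voltage_v       = np.array([0.02, 0.51, 1.03, 1.49, 2.02, 2.48])

# Fit displacement as a polynomial function of voltage (second order here)
cal = np.polyfit(voltage_v, displacement_mm, deg=2)

def voltage_to_displacement(v):
    """Convert an acquired voltage (V) to displacement (mm) via the calibration."""
    return np.polyval(cal, v)

print(voltage_to_displacement(1.25))   # displacement for a new reading
```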
Procedia PDF Downloads 285
1276 Comparison of Two Theories for the Critical Laser Radius in Thermal Quantum Plasma
Authors: Somaye Zare
Abstract:
The critical beam radius is a significant factor that predicts the behavior of a laser beam in a plasma: if the laser beam radius is sufficiently greater than the critical radius, the beam experiences stable focusing in the plasma; otherwise, the beam diverges after entering the plasma. In this work, considering the paraxial approximation and moment theories, the localization of a relativistic laser beam in a thermal quantum plasma is investigated. Using the dielectric function obtained in the quantum hydrodynamic model, the mathematical equation for the laser beam width parameter is obtained and solved numerically by the fourth-order Runge-Kutta method. The results demonstrate that a stronger focusing effect occurs in the moment theory than in the paraxial approximation. Besides, in both theories, with increasing Fermi temperature, plasma density, and laser intensity, the oscillation rate of the beam width parameter grows and the focusing length decreases, which means an improved focusing effect. Furthermore, the behavior of the critical laser radius differs between the two theories: in the paraxial approximation, the critical radius, after reaching a minimum value, increases with increasing laser intensity, whereas in the moment theory, the critical radius decreases with increasing laser intensity until it becomes independent of the laser intensity.
Keywords: laser localization, quantum plasma, paraxial approximation, moment theory, quantum hydrodynamic model
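The beam width parameter equation is solved with a fourth-order Runge-Kutta scheme; a generic RK4 sketch for a second-order ODE of the form f''(ξ) = F(ξ, f, f'), as obtained for the beam width parameter, is given below. The right-hand side used here is a placeholder, not the paper's equation, since the actual equation depends on the quantum-hydrodynamic dielectric function derived in the study.

```python
import numpy as np

def rk4_second_order(F, f0, df0, xi_max, n_steps):
    """Integrate f'' = F(xi, f, f') with classical RK4 on the system (f, f')."""
    h = xi_max / n_steps
    y = np.array([f0, df0], dtype=float)
    xi = 0.0
    out = [(xi, y[0])]
    def rhs(xi, y):
        return np.array([y[1], F(xi, y[0], y[1])])
    for _ in range(n_steps):
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, y + h / 2 * k1)
        k3 = rhs(xi + h / 2, y + h / 2 * k2)
        k4 = rhs(xi + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        xi += h
        out.append((xi, y[0]))
    return out

# Placeholder right-hand side (NOT the paper's equation): oscillatory focusing term
F = lambda xi, f, df: 1.0 / f**3 - 0.5 * f
trace = rk4_second_order(F, f0=1.0, df0=0.0, xi_max=10.0, n_steps=1000)
print(trace[-1])
```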
Procedia PDF Downloads 72
1275 The Impact of Breast Cancer Polymorphism on Breast Cancer
Authors: Roudabeh Vakil Monfared, Farhad Mashayekhi
Abstract:
Breast cancer is the most common malignancy among women, with about 1 million new cases each year. The immune system plays an important role in breast cancer development. OX40L (also known as TNFSF4), a membrane protein and member of the tumor necrosis factor superfamily, binds to its receptor OX40, and this co-stimulation has a crucial role in T-cell proliferation, survival, and cytokine release. Due to the importance of T-cells in the anti-tumor activities of OX40L, we studied the association of the rs3850641 (T→C) polymorphism of the OX40L gene with breast cancer. The study included 123 women with breast cancer and 126 healthy volunteers with no signs of cancer. Genomic DNA was extracted from blood leucocytes. Genotype and allele frequencies were determined in patients and controls by the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method, and the analysis was performed with MedCalc. The genotype frequencies of TT, CT, and CC were 60.9%, 30.08%, and 8.9% in patients with breast cancer and 74.6%, 18.25%, and 7.14% in healthy volunteers, while the T and C allele frequencies were 76.01% and 23.98% in patients and 83.73% and 16.26% in healthy controls, respectively. Statistical analysis showed no significant difference in the genotype comparison (P = 0.06). According to these results, the rs3850641 SNP has no association with susceptibility to breast cancer in a population in northern Iran. However, further studies in larger populations, including other genetic and environmental factors, are required to reach a conclusion.
Keywords: OX40L, gene, polymorphism, breast cancer
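A sketch of the genotype-frequency comparison is given below, using approximate counts reconstructed from the percentages reported above (123 patients, 126 controls). A chi-square test on the 2x3 contingency table is one common way to compare genotype distributions, although the abstract states that the analysis was performed with MedCalc.

```python
from scipy.stats import chi2_contingency

# Approximate genotype counts reconstructed from the reported percentages
#            TT   CT   CC
patients = [ 75,  37,  11]   # n = 123
controls = [ 94,  23,   9]   # n = 126

chi2, p, dof, expected = chi2_contingency([patients, controls])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```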
Procedia PDF Downloads 535
1274 Conceptualising Queercide: A Quantitative Desktop Exploration of the Technical Frames Used in Online Reports of Lesbian Killings in South Africa
Authors: Marchant Van Der Schyff
Abstract:
South Africa remains one of the most dangerous places for women, lesbians in particular, to live freely and safely, where a culture of patriarchy and a lack of socio-economic opportunity are ubiquitous throughout its communities. While the Internet has provided a wider platform for insights into the issues plaguing lesbians, very little information exists regarding the elements used in the construction of these online reports. This is due not only to the lack of language required to contextualise lesbian issues, but also to persistent institutional and societal homophobia. This article describes the technical frames used in the online news reporting of four case studies of 'queercide'. Using a thematic coding sheet, data were collected from 70 online articles purposively selected based on a priori population characteristics. The study identified technical elements in the coded online articles, such as the length of the online reports, the credible sources used, and 'code-driven' and 'user-driven' elements. From the conclusions, some clear trends emerged, enabling the construction of a Venn-type diagram which presents insights into how the murder of lesbians (referred to as 'queercide' in the article) is being reported by online news media compared to contemporary theoretical discussions of how these cases should be reported.
Keywords: journalism, lesbian murder, queercide, technical frames, reporting, online
Procedia PDF Downloads 72
1273 Determinants of Budget Performance in an Oil-Based Economy
Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi
Abstract:
Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation report. A critical review of these documents shows significant variations in the five macroeconomic variables that are inputs to each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%), and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of non-oil and oil revenue aggregates. This paper first evaluates the existing variance between budgeted and actual figures, then the relationship and causality between the determinants of the Federal fiscal budget assumptions, and finally the determinants of the FGN's gross oil revenue. The paper employed descriptive statistics, the autoregressive distributed lag (ARDL) model, and a profit oil probabilistic model to achieve these objectives. The ARDL model permits both static and dynamic effects of the independent variables on the dependent variable, unlike a static model that accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, oil production quantity, and foreign exchange rate. There is a short-run relationship between oil revenue and its determinants: oil price, oil production quantity, and foreign exchange rate. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show that there is unidirectional causality between oil revenue and its determinants. The Federal budget assumptions only explain 68% of oil revenue and 62% of non-oil revenue. There is unidirectional causality between non-oil revenue and its determinants. The profit oil model identifies production sharing contracts, joint ventures, and modified carry arrangements as the greatest contributors to the FGN's gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in the Federal budget design and performance evaluation. The research recommends that other variables, debt and money supply, be included in the Federal budget design to further explain the Federal budget revenue performance.
Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue
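A sketch of the time-series steps described above, an ARDL regression of oil revenue on its determinants followed by a Granger causality check, is shown below using statsmodels. The data file, column names, and lag orders are illustrative assumptions rather than the study's actual specification.

```python
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical dataset containing oil revenue and its determinants
df = pd.read_csv("nigeria_fiscal.csv")          # assumed file name
y = df["oil_revenue"]
X = df[["oil_price", "oil_quantity", "exchange_rate"]]

# ARDL with 2 lags of the dependent variable and 1 lag of each regressor (illustrative)
res = ARDL(y, lags=2, exog=X, order=1).fit()
print(res.summary())

# Granger causality: does oil price help predict oil revenue?
grangercausalitytests(df[["oil_revenue", "oil_price"]], maxlag=2)
```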
Procedia PDF Downloads 172
1272 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the structure of the universe at large scales. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal a uniform temperature map across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allowed a uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a "neutral state", with an energy level referred to as "base energy" that is capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, "A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing", where it is discussed in detail. Therefore, the concept proposed in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 89
1271 Brand Resonance Strategy For Long-term Market Survival: Does The Brand Resonance Matter For SMEs? An Investigation In SMEs Digital Branding (Facebook, Twitter, Instagram And Blog) Activities And Strong Brand Development
Authors: Noor Hasmini Abd Ghani
Abstract:
Brand resonance is among the newer strategies that are nowadays receiving more attention from larger companies for their long-term market survival. Brand resonance emphasizes two main characteristics, intensity and activity, which are able to generate a psychological bond and an enduring relationship between a brand and a consumer. This strong attachment relationship links brand resonance with the concept of the consumer brand relationship (CBR), which provides a competitive advantage for long-term market survival. The brand resonance approach is relevant not only in the context of larger companies but can also be adapted to Small and Medium Enterprises (SMEs). SMEs are recognized as a vital pillar of the world economy in both developed and emerging countries due to their contributions to economic growth, such as employment opportunities, wealth creation, and poverty reduction. In particular, SMEs in Malaysia are pivotal to the well-being of the Malaysian economy and society, providing jobs to 66% of the workforce and contributing 40% of GDP. Among the various sectors, the SME service category that covers the Food & Beverage (F&B) sector is one of the high-potential industries in Malaysia. For these reasons, developing a strong SME brand, or brand equity, is vital for long-term market survival. However, there are still few appropriate strategies for developing SME brand equity. The difficulties were never so evident until Covid-19 swept across the globe from 2020. Since the pandemic began, more than 150,000 SMEs in Malaysia have shut down, leaving more than 1.2 million people jobless. As SMEs are the pillar of any economy in the world, and given the negative effect of COVID-19 on their economic growth, their protection has become more important than ever. Therefore, focusing on a strategy able to develop a strong SME brand is essential, and this is where the strategy of brand resonance is introduced in this study. Mainly, this study aims to investigate the impact of CBR as a predictor and mediator in the context of social media marketing (SMM) activities on SME e-brand equity (or strong brand) building. The study employed a quantitative research design based on an electronic survey method, with a valid response set of 300 respondents. Interestingly, the results revealed the important role of CBR both as a predictor and as a mediator in the context of SME SMM as well as brand equity development. Further, the study provides several theoretical and practical implications that can benefit SMEs in enhancing their strategic marketing decisions.
Keywords: SME brand equity, SME social media marketing, SME consumer brand relationship, SME brand resonance
Procedia PDF Downloads 60
1270 Bulk Amounts of Linear and Cyclic Polypeptides on Our Hand within a Short Time
Abstract:
Polypeptides with defined peptide sequences have remarkable applications in drug delivery, tissue engineering, sensing, and catalysis. For cyclic polypeptides in particular, the distinctive topological architecture imparts many characteristic properties compared to linear polypeptides. Here, a facile and highly efficient strategy for the synthesis of linear and cyclic polypeptides is reported using N-heterocyclic carbene (NHC)-mediated ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCAs) in the presence or absence of a primary amine initiator. The polymerization proceeds rapidly in a quasi-living manner, allowing access to linear and cyclic polypeptides of well-defined chain length and narrow polydispersity, as evidenced by nuclear magnetic resonance (1H NMR and 13C NMR) spectra and size exclusion chromatography (SEC) analysis. The cyclic architecture of the polypeptides was further verified by matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) and electrospray ionization (ESI) mass spectrometry, as well as viscosity studies. This approach also simplifies workup procedures and makes bulk-scale synthesis possible, thereby opening avenues for practical uses in diverse areas and for a new generation of polypeptide synthesis.
Keywords: α-amino acid N-carboxyanhydrides, living polymerization, polypeptides, N-heterocyclic carbenes, ring-opening polymerization
Procedia PDF Downloads 167
1269 The Relationship between the Competence Perception of Student and Graduate Nurses and Their Autonomy and Critical Thinking Disposition
Authors: Zülfiye Bıkmaz, Aytolan Yıldırım
Abstract:
This study was planned as a descriptive regression study in order to determine the relationship between the competency levels of working nurses, the competency levels expected by nursing students, the critical thinking disposition of nurses, their perceived autonomy levels, and certain sociodemographic characteristics. It is also a methodological study with regard to the intercultural adaptation of the Nursing Competence Scale (NCS) in both working and student samples. The nurse sample consisted of 443 nurses who had been working at a university hospital for at least 6 months and who filled out the questionnaires. The student group consisted of 543 third- and fourth-year nursing students from four public universities. Data collection tools consisted of a questionnaire prepared in order to define the sociodemographic, economic, and personal characteristics of the participants, the 'Nursing Competence Scale', the 'Autonomy Subscale of the Sociotropy-Autonomy Scale', and the 'California Critical Thinking Disposition Inventory'. In the data evaluation, descriptive statistics, nonparametric tests, Rasch analysis, and correlation and regression tests were used. The language validity of the NCS was established by translation and back translation, and the content validity of the scale was established with expert views. The scale, in its final form, was applied in a pilot study to a group consisting of graduate and student nurses. The temporal stability of the test was assessed by the test-retest method. To reduce time-related problems, the split-half reliability method was used. The Cronbach's alpha coefficient of the scale was found to be 0.980 for the nurse group and 0.986 for the student group. Statistically significant relationships were found between competence and critical thinking and variables such as age, gender, marital status, family structure, having had critical thinking training, education level, class of the students, service worked in, employment style and position, and employment duration. Statistically significant relationships were also found between autonomy and certain variables of the student group, such as year, employment status, decision-making style regarding self, total duration of employment, employment style, and education status. As a result, it was determined that the interculturally adapted NCS was a valid and reliable measurement tool and was found to be associated with autonomy and critical thinking.
Keywords: nurse, nursing student, competence, autonomy, critical thinking, Rasch analysis
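The internal consistency figures quoted above (Cronbach's alpha of 0.980 and 0.986) can be reproduced from item-level data with the standard formula; a minimal sketch is given below, with a small made-up item matrix standing in for the actual questionnaire responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
    `items` is an (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses (5 respondents x 4 items) purely for illustration
demo = [[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
print(round(cronbach_alpha(demo), 3))
```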
Procedia PDF Downloads 393
1268 Behaviour of Hollow Tubes Filled with Sand Slag Concrete
Authors: Meriem Senani, Noureedine Ferhoune
Abstract:
This paper presents the axial bearing capacity of thin welded rectangular steel stubs filled with sand concrete. A series of tests was conducted to study the behavior of short composite columns under axial compressive load; the cross-section dimensions were 100 x 70 x 2 mm. A total of 16 stubs were tested: 4 filled with ordinary concrete, designated BO columns; 6 filled with concrete in which the natural sand was completely substituted by a crystallized sand slag, designated in this paper BSI; and 6 filled with concrete in which the natural sand was partially replaced by a crystallized sand slag, designated BSII. The main objective of these tests was to clarify the performance of the steel specimens filled with sand concrete compared to those filled with ordinary concrete. The main parameters studied are the height of the specimen (300-500 mm), the eccentricity of the load, and the type of filling concrete. Based on the test results obtained, it is confirmed that the length of the tubes has a considerable effect on the bearing capacity and the failure mode. In all tested tubes, failure occurred by convex warping of the largest face, followed by the smallest, due to the outward thrust of the concrete. It was observed that the sand concrete improves the bearing capacity of the composite tubes compared to those filled with ordinary concrete.
Keywords: concrete sand, crystallized slag, failure mode, buckling
Procedia PDF Downloads 414
1267 Anti-Fire Group 'Peduli Api': Case Study of Mitigating the Fire Hazard Impact and Climate Policy Enhancement on Riau Province Indonesia
Authors: Bayu Rizky Pratama, Hardiansyah Nur Sahaya
Abstract:
Riau Province is the worst emitter from forest burning, which causes large-scale externalities such as the decline of forest habitat, health problems, and climate change impacts. The Indonesian forum of budget transparency for Riau Province (FITRA) reported that the extent of forest burning reached about 186,069 hectares, which is 7.13% of the total national forest burning disaster, consisting of 107,000 hectares of peatland and 79,069 hectares of mineral land. An anti-fire group, a voluntary group based next to the forest, has been established to help protect against forest burning and heavy residual smoke, but unfortunately the implementation is still far from expectations. This research emphasizes (1) how the anti-fire group contributes to tackling fire hazards; (2) a SWOT analysis to enhance the group's benefit; and (3) the government policy implications of maximizing the role of the anti-fire group and reducing cases of forest burning as well as heavy smoke, which can increase climate change impacts. The observations identified several weaknesses through the SWOT analysis, such as (1) lack of education and training; (2) facilities for extinguishing fires; (3) laws for economic incentives; (4) communication and field experience; and (5) reporting of fire cases.
Keywords: anti-fire group, forest burning impact, SWOT, climate change mitigation
Procedia PDF Downloads 388
1266 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning
Abstract:
Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, and that this learning is related to factors of frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning has tended to focus on clause-level constructions such as verb argument constructions, and few attempts have been made to study morpheme-level constructions such as the progressive construction, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially for those whose L1 does not have an equivalent construction, such as German and Chinese. To trace the developmental trajectory of Chinese EFL learners' use of the progressive with respect to verb frequency, verb-progressive contingency, and verbal prototypicality and generality, a learner corpus consisting of three sub-corpora representing three different English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native speakers' corpus extracted from the Louvain Corpus of Native English Essays was also established. All the texts were annotated with the C7 tagset by part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check. Frequency-related data showed that, from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, approaching the 36.4% observed in the native speakers' corpus, indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, working towards the target score of 0.886 in the native speakers' corpus, revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; and (3) activity verbs (i.e., verbs with prototypical progressive meanings like running and singing) dropped from 59% to 34%, while non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Apart from the raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. Results were in line with the raw frequency findings, showing that the contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency developed. These findings altogether suggest that beginning Chinese EFL learners are less productive in using the progressive construction: they are constrained by a small set of verbs which have concrete and typical progressive meanings (e.g., the activity verbs). As English proficiency increases, however, their use of the progressive begins to spread to marginal members such as the light verbs.
Keywords: Construction learning, Corpus-based, Progressives, Prototype
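The two frequency diagnostics reported above, the type-token ratio and the normalized entropy of the verb distribution in the progressive, can be computed as in the sketch below; the toy list of verb tokens is a placeholder for the actual progressive hits retrieved from the corpus.

```python
import math
from collections import Counter

def type_token_ratio(tokens):
    """Number of distinct types divided by the number of tokens."""
    return len(set(tokens)) / len(tokens)

def normalized_entropy(tokens):
    """Shannon entropy of the type distribution divided by its maximum
    (log of the number of types), so 1.0 means a perfectly even distribution."""
    counts = Counter(tokens)
    n = sum(counts.values())
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))

# Placeholder tokens standing in for verbs retrieved in the progressive
verbs = ["run", "sing", "run", "go", "do", "run", "make", "go", "sing", "play"]
print(type_token_ratio(verbs), round(normalized_entropy(verbs), 3))
```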
Procedia PDF Downloads 128
1265 Utilization of Sugar Factory Waste as an Organic Fertilizer on Growth and Production of Baby Corn
Authors: Marliana S. Palad
Abstract:
The purpose of this research is to determine the influence of blotong application on the growth and production of baby corn. The research was arranged as a factorial experiment in a randomized block design (RBD) with three replications. The first factor was fertilizer type: blotong (B1), blotong+EM4 (B2), and bokashi blotong (B3), while the blotong dose was assigned as the second factor: 5 ton ha-1 (D1), 10 ton ha-1 (D2), and 15 ton ha-1 (D3). The results indicated that bokashi blotong gave the best influence compared to blotong+EM4 for all parameters. The interaction between bokashi blotong and the 10 ton ha-1 dose gave the best influence on baby corn production, 4.41 ton ha-1, and bokashi blotong also gave the best influence on baby corn vegetative growth, that is, a plant height of 113.00 cm, a leaf number of 8 (eight), and a stem diameter of 6.02 cm. The results of the analysis of variance showed that the application of bokashi blotong (B3) had a better effect on the growth and production of baby corn and was highly significant for plant height at 60 days after planting, leaf number at 60 days after planting, cob length with and without cornhusk, stem and cob diameter, cob weight with and without cornhusk, and production converted to ton ha-1. This is because bokashi blotong has organic contents of C, N, P, and K totalling more than the blotong (B1) and blotong+EM4 (B2) treatments. Based on the research results, it can be summarised that the sugar factory waste called blotong can be used to make bokashi as an organic fertilizer, so that baby corn growth and production are improved.
Keywords: blotong, bokashi, organic fertilizer, sugar factory waste
Procedia PDF Downloads 394
1264 Aerodynamic Study of an Open Window Moving Bus with Passengers
Authors: Pawan Kumar Pant, Bhanu Gupta, S. R. Kale, S. V. Veeravalli
Abstract:
In many countries, buses are the principal means of transport, and a majority of them are naturally ventilated with open windows. The design of this ventilation has little scientific basis, and to address this problem, a study has been undertaken involving both experiments and numerical simulations. The flow pattern inside and around an open-window bus with passengers has been investigated in detail. A full-scale three-dimensional numerical simulation has been used for a) a bus with closed windows and b) a bus with open windows. In either simulation, the bus had 58 seated passengers. The bus dimensions used were 2500 mm wide × 2500 mm high (exterior) × 10500 mm long, and its speed was set at 40 km/h. In both cases, the flow separates at the top front edge, forming a vortex, and reattaches close to the mid-length. This attached flow separates once more as it leaves the bus. However, the strength and shape of the vortices at the top front and in the wake region are different for the two cases. The streamline pattern around the bus is also different for the two cases. For the bus with open windows, the dominant airflow inside the bus is from the rear to the front of the bus, and the air velocity at the face level of the passengers was found to be 1/10th of the free-stream velocity. These findings are in good agreement with flow visualization experiments performed in a water channel at 10 m/s, and with smoke/tuft visualizations in a wind tunnel with a free-stream velocity of approximately 40 km/h on a 1:25 scale Perspex model.
Keywords: air flow, moving bus, open windows, vortex, wind tunnel
Procedia PDF Downloads 233
1263 Osteometry of the Long Bones of Adult Chinkara (Gazella bennettii): A Remarkable Example of Sexual Dimorphism
Authors: Salahud Din, Saima Masood, Hafsa Zaneb, Saima Ashraf, Imad Khan
Abstract:
The objectives of this study were 1) to measure osteometric parameters of the long bones of the adult Chinkara to obtain baseline data, 2) to study sexual dimorphism in the adult Chinkara through osteometry, and 3) to estimate body weight from the measurements of the greatest length and shaft of the long bones. For this purpose, body measurements of adult Chinkara were taken after death, and the carcasses of adult Chinkara of known sex and age were buried at the Manglot Wildlife Park and Ungulate Breeding Centre, Nizampur, Pakistan; after a specific period of time, the bones were unearthed. Various osteometric parameters of the humerus, radius, metacarpus, femur, tibia, and metatarsus were measured with a digital calliper. Statistically significant (P < 0.05) differences in some of the osteometric parameters between male and female adult Chinkara were observed. Sexual dimorphism exists between the long bones of male and female adult Chinkara. In both male and female Chinkara, the body weights estimated from humeral, metacarpal, and metatarsal measurements were close to the actual body weight of the adult Chinkara. In conclusion, the present study provides preliminary data on long bone osteometry and suggests that male and female adult Chinkara differ morphometrically from each other.
Keywords: body mass measurements, Chinkara, long bones, morphometric, sexual dimorphism
Procedia PDF Downloads 131
1262 A New Approach for Solving Fractional Coupled PDEs
Authors: Prashant Pandey
Abstract:
In the present article, an effective Laguerre collocation method is used to obtain the approximate solution of a system of coupled fractional-order non-linear reaction-advection-diffusion equations with prescribed initial and boundary conditions. In the proposed scheme, Laguerre polynomials are used together with an operational matrix and a collocation method to approximate the solution of the coupled system, so that the proposed model is converted into a system of algebraic equations that can be solved by the Newton method. The solution profiles of the coupled system are presented graphically for several particular cases. The salient features of the present article are the stability analysis of the proposed method and the demonstration that solute concentrations vary less with respect to column length in the fractional-order system than in the integer-order system. To show the efficiency, reliability, and accuracy of the proposed scheme, a comparison between the numerical results for the coupled Burgers' system and its existing analytical solution is reported; the approximate solution agrees with the exact solution to a high order of accuracy. The error analysis for each case, presented through tables and graphs, confirms the super-linear convergence rate of the proposed method. Keywords: fractional coupled PDE, stability and convergence analysis, diffusion equation, Laguerre polynomials, spectral method
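The following toy Python sketch illustrates the general collocation-plus-Newton workflow (a Laguerre series, collocation points, and a Newton-type solve of the resulting algebraic system) on a single non-linear ODE with a known exact solution; it is not the fractional coupled reaction-advection-diffusion system or the operational-matrix formulation of the paper.

# Toy Laguerre-collocation example for u'(x) + u(x)^2 = f(x) on (0, 1], u(0) = 1,
# chosen so the exact solution is u(x) = exp(-x). The unknowns are the Laguerre
# series coefficients; the algebraic system is solved with a Newton-type method.
import numpy as np
from numpy.polynomial import laguerre as L
from scipy.optimize import fsolve

N = 8                                   # number of Laguerre coefficients
x_col = np.linspace(0.05, 1.0, N - 1)   # interior collocation points
f = lambda x: -np.exp(-x) + np.exp(-2.0 * x)

def residual(c):
    du = L.lagder(c)                           # coefficients of u'
    eqs = L.lagval(x_col, du) + L.lagval(x_col, c) ** 2 - f(x_col)
    bc = L.lagval(0.0, c) - 1.0                # boundary condition u(0) = 1
    return np.append(eqs, bc)

c0 = np.zeros(N)
c0[0] = 1.0                                    # start from the constant guess u(x) = 1
coeffs = fsolve(residual, c0)                  # Newton-type solve of the algebraic system

x_test = np.array([0.25, 0.5, 1.0])
print(L.lagval(x_test, coeffs))                # should be close to exp(-x_test)
print(np.exp(-x_test))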
Procedia PDF Downloads 1451261 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media
Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh
Abstract:
In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model consists of a micro-beam with a micro-plate at its free end. An AC voltage is applied to a pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded by two fixed plates on its upper and lower surfaces, and the micro-gap between the structure and the fixed plates is filled with fluid. Because fluids behave differently at the micro-scale than at the macro-scale, the fluid field in the gap has been modeled using micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Because the boundary conditions are non-homogeneous, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced-order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, such as the characteristic length scale, coupling parameter, and surface parameter, on the response of the micro-beam have been studied. Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid
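As a simplified illustration of the Galerkin-based reduced-order modelling mentioned above, the following Python sketch assembles mass and stiffness matrices for the longitudinal vibration of a clamped-free elastic rod and extracts its natural frequencies; the piezoelectric actuation and the micro-polar fluid coupling of the paper are omitted, and the material and geometric values are illustrative assumptions.

# Galerkin reduced-order model for longitudinal vibration of a clamped-free rod.
# The assumed modes are the exact clamped-free mode shapes, so M and K are diagonal.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

E, rho = 169e9, 2330.0          # silicon-like Young's modulus (Pa) and density (kg/m^3), assumed
L_b, A = 200e-6, 10e-6 * 2e-6   # beam length (m) and cross-section area (m^2), assumed
N = 4                           # number of assumed modes

phi = lambda n, x: np.sin((2 * n - 1) * np.pi * x / (2 * L_b))
dphi = lambda n, x: (2 * n - 1) * np.pi / (2 * L_b) * np.cos((2 * n - 1) * np.pi * x / (2 * L_b))

M = np.zeros((N, N))
K = np.zeros((N, N))
for i in range(1, N + 1):
    for j in range(1, N + 1):
        M[i - 1, j - 1] = rho * A * quad(lambda x: phi(i, x) * phi(j, x), 0, L_b)[0]
        K[i - 1, j - 1] = E * A * quad(lambda x: dphi(i, x) * dphi(j, x), 0, L_b)[0]

omega2, _ = eigh(K, M)                  # generalized eigenvalue problem K v = omega^2 M v
print(np.sqrt(omega2) / (2 * np.pi))    # natural frequencies in Hz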
Procedia PDF Downloads 1841260 Studies on Interaction between Anionic Polymer Sodium Carboxymethylcellulose with Cationic Gemini Surfactants
Authors: M. Kamil, Rahber Husain Khan
Abstract:
In the present study, the interaction of the anionic polymer sodium carboxymethylcellulose (NaCMC) with the cationic gemini surfactants 2,2[(oxybis(ethane-1,2-diyl))bis(oxy)]bis(N-hexadecyl1-N,N-[di(E2)/tri(E3)]methyl1-2-oxoethanaminium)chloride (16-E2-16 and 16-E3-16) and the conventional surfactant CTAC in aqueous solution has been studied by surface tension measurements of binary mixtures (0.0-0.5 wt% NaCMC with 1 mM gemini surfactant or 10 mM CTAC solution). Surface tension measurements were used to determine the critical aggregation concentration (CAC) and the critical micelle concentration (CMC). The maximum surface excess concentration (Γmax) at the air-water interface was evaluated using the Gibbs adsorption equation, and the minimum area per surfactant molecule was calculated, which indicates the surfactant-polymer interaction in the mixed system. The effect of surfactant chain length on the CAC and CMC values of the mixed polymer-surfactant systems was examined. The results show that the gemini surfactants interact more strongly with NaCMC than their monomeric counterpart CTAC, with electrostatic interactions predominating in these systems. The lowering of surface tension with increasing surfactant concentration is almost 10-15 times greater for the gemini surfactants. The measurements indicated that the interaction between NaCMC and CTAC resulted in complex formation; the volume of the coacervate increases with increasing CTAC concentration, but above 0.1 wt% the coacervate vanishes. Keywords: anionic polymer, gemini surfactants, tensiometer, CMC, interaction
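A minimal Python sketch of the Gibbs adsorption calculation mentioned above, giving the maximum surface excess concentration from the slope of surface tension versus the logarithm of concentration and the corresponding minimum area per molecule; the slope and the prefactor n are illustrative assumptions, not the study's measured values.

# Gibbs adsorption: Gamma_max = -(1 / (2.303 n R T)) * d(gamma)/d(log10 c),
# A_min = 1 / (N_A * Gamma_max). Slope and n below are assumed for illustration
# (n = 2 is the usual choice for a 1:1 ionic surfactant; gemini surfactants are
# often treated with n = 3).
R = 8.314          # J mol^-1 K^-1
N_A = 6.022e23     # Avogadro's number, mol^-1
T = 298.15         # K
n = 2              # effective number of adsorbing species (assumed)

dgamma_dlogc = -15.0e-3   # hypothetical slope of gamma (N/m) per decade of concentration

gamma_max = -dgamma_dlogc / (2.303 * n * R * T)   # mol m^-2
a_min_nm2 = 1.0 / (N_A * gamma_max) * 1e18        # nm^2 per molecule

print(f"Gamma_max = {gamma_max:.3e} mol m^-2")
print(f"A_min     = {a_min_nm2:.2f} nm^2 per molecule")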
Procedia PDF Downloads 891259 Analysis of Bed Load Sediment Transport Mataram-Babarsari Irrigation Canal
Authors: Agatha Padma Laksitaningtyas, Sumiyati Gunawan
Abstract:
The Mataram Irrigation Canal, 31.2 km long, is the main irrigation canal in the Special Region of Yogyakarta Province, connecting the Progo River on the west side and the Opak River on the east side. It plays an important role as the main water distribution carrier for various purposes such as agriculture, fishery, and plantation, and should be kept free from sediment. Bed load is the sediment transported along the channel bed and drives the sedimentation process in the irrigation canal. Sedimentation deposits material on the bed of the canal and changes the water surface elevation, which affects the availability of water for irrigation. Two methods were used to predict the amount of bed load sediment in the irrigation canal: the Meyer-Peter and Muller method, which is an energy approach, and the Einstein method, which is a probabilistic approach. Flow velocity was measured using the floating method and current meters, the channel geometry was measured directly in the field, and the bed sediment was sampled at three different points. The results show that the Meyer-Peter and Muller formula gives 60.75799 kg/s, whereas Einstein's method gives 13.06461 kg/s. These results may serve as a reference for dredging the sediment in the channel so as not to disrupt the flow of water in the irrigation canal. Keywords: bed load, sediment, irrigation, Mataram canal
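For orientation, the following Python sketch evaluates the Meyer-Peter and Muller bed-load formula in its standard dimensionless (Shields-parameter) form; the depth, slope, grain size, and canal width used are illustrative assumptions, not the measured values for the Mataram-Babarsari canal, so the output will not reproduce the 60.75799 kg/s reported above.

# Meyer-Peter and Muller bed-load estimate in dimensionless form.
# Channel geometry, slope, and grain size are illustrative assumptions.
import numpy as np

g, rho, rho_s = 9.81, 1000.0, 2650.0    # gravity, water and sediment density (SI)
d50 = 0.8e-3                            # median grain diameter, m (assumed)
h, S, width = 1.2, 0.0005, 6.0          # flow depth (m), energy slope, canal width (m), assumed

tau = rho * g * h * S                             # bed shear stress, N/m^2
theta = tau / ((rho_s - rho) * g * d50)           # Shields parameter
theta_c = 0.047                                   # critical Shields parameter (MPM)

phi = 8.0 * max(theta - theta_c, 0.0) ** 1.5      # dimensionless transport rate
qb_vol = phi * np.sqrt((rho_s / rho - 1.0) * g * d50 ** 3)   # m^2/s per unit width
qb_mass = qb_vol * rho_s * width                  # kg/s over the full canal width

print(f"Shields parameter: {theta:.3f}")
print(f"Bed load transport: {qb_mass:.4f} kg/s")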
Procedia PDF Downloads 2281258 An Approach to Integrated Water Resources Management, a Plan for Action to Climate Change in India
Authors: H. K. Ramaraju
Abstract:
The world is in deep trouble and deeper denial. Worse, the denial now lies entirely on the side of action. It is well accepted that climate change is a reality. Scientists say we need to cap temperature increases at 2°C to avoid catastrophe, which means capping emissions at 450 ppm. We know global average temperatures have already increased by 0.8°C and that there is enough greenhouse gas in the atmosphere to lead to another 0.8°C increase. There is still a window of opportunity, a tiny one, to tackle the crisis. But where is the action? In the 1990s, when the world did not even understand, let alone accept, the crisis, it was more willing to move to tackle climate change. Today we are in reverse gear. The rich world has realized it is easy to talk big but tough to take steps that actually reduce emissions. The agreement was that these countries would reduce their emissions so that the developing world could increase its own. Instead, between 1990 and 2006, their carbon dioxide emissions increased by a whopping 14.5 percent, and even the green countries of Europe have been unable to match words with action, preferring to stop deforestation abroad and take a 20 percent advantage in their carbon balance sheets without doing anything at home, through REDD (reducing emissions from deforestation and forest degradation), and to push for carbon capture and storage (CCS) technologies. There are warning signs elsewhere, and they need to be read correctly and acted upon; otherwise, events such as floods will follow, whether acts of nature or man-made disasters. The full-length paper is oriented toward a proper understanding of the issues and the identification of the most appropriate course of action. Keywords: catastrophe, deforestation, emissions, waste water
Procedia PDF Downloads 287