Search results for: initial input
4557 Energy Use and Econometric Models of Soybean Production in Mazandaran Province of Iran
Authors: Majid AghaAlikhani, Mostafa Hojati, Saeid Satari-Yuzbashkandi
Abstract:
This paper studies energy use patterns and the relationship between energy input and yield for soybean (Glycine max (L.) Merrill) in Mazandaran province of Iran. In this study, data were collected by administering a questionnaire in face-to-face interviews. Results revealed that the highest share of energy consumption belongs to chemical fertilizers (29.29%), followed by diesel (23.42%) and electricity (22.80%). Our investigations showed that a total energy input of 23404.1 MJ ha-1 was consumed for soybean production. The energy productivity, specific energy, and net energy values were estimated as 0.12 kg MJ-1, 8.03 MJ kg-1, and 49412.71 MJ ha-1, respectively. The ratio of energy outputs to energy inputs was 3.11. Obtained results indicated that direct, indirect, renewable, and non-renewable energies accounted for 56.83%, 43.17%, 15.78%, and 84.22%, respectively. Three econometric models were also developed to estimate the impact of energy inputs on yield. The results of the econometric models revealed that the impacts of chemical fertilizer and water energies on yield were significant at the 1% probability level. Also, the impacts of direct and non-renewable energies were found to be rather high. Cost analysis revealed that the total cost of soybean production per ha was around $518.43. Accordingly, the benefit-cost ratio was estimated as 2.58. The energy use efficiency in soybean production was found to be 3.11, which indicates that the inputs used in soybean production are used efficiently. However, due to the high rate of nitrogen fertilizer consumption, sustainable agriculture practices should be extended, and extension staff could propose substituting chemical fertilizers with biological fertilizers or green manure.
Keywords: Cobb-Douglas function, economic analysis, energy efficiency, energy use patterns, soybean
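The energy indicators reported above follow from simple arithmetic on the total input energy, output energy, and yield. The sketch below reproduces them; the yield value (~2914 kg/ha) is an assumption back-calculated from the reported specific energy and is not stated in the abstract, while the other inputs are taken from it.

```python
# Energy-indicator arithmetic, with the grain yield (~2914 kg/ha) assumed
# (back-calculated from the reported specific energy of 8.03 MJ/kg).
def energy_indicators(energy_input, energy_output, grain_yield):
    """All energies in MJ/ha, yield in kg/ha."""
    return {
        "energy_ratio": energy_output / energy_input,        # output/input
        "energy_productivity": grain_yield / energy_input,   # kg/MJ
        "specific_energy": energy_input / grain_yield,       # MJ/kg
        "net_energy": energy_output - energy_input,          # MJ/ha
    }

inp = 23404.1            # total input energy, MJ/ha (from the abstract)
net = 49412.71           # net energy, MJ/ha (from the abstract)
out = inp + net          # implied output energy
ind = energy_indicators(inp, out, grain_yield=2914.0)  # assumed yield
```

With these inputs the function returns the abstract's values (ratio 3.11, productivity 0.12 kg/MJ, specific energy 8.03 MJ/kg) after rounding.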
Procedia PDF Downloads 334
4556 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings
Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch
Abstract:
It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research works focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress.
As a result, shear forces can only be transmitted to a limited extent by friction once cracking begins. The tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of the PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry
Procedia PDF Downloads 168
4555 Evaluation of the Safety Status of Beef Meat During Processing at Slaughterhouse in Bouira, Algeria
Authors: A. Ameur Ameur, H. Boukherrouba
Abstract:
In red meat slaughterhouses, a significant number of organs and carcasses are seized because of the presence of lesions of various origins. The objective of this study is to characterize and evaluate the frequency of these lesions in the slaughterhouse of the Wilaya of Bouira. Of the 2,646 cattle slaughtered and inspected, 72% of carcasses underwent no seizure, against 28% that underwent at least one seizure. The main organs seized were 325 lungs (44%), 164 livers (22%), and 149 hearts (21%); 38 kidneys (5%), 33 udders (4%), and 16 whole carcasses (2%) were seized less often. The main reasons for seizure were hydatid cyst for most seized organs, such as the lungs (64.5%), livers (51.8%), and hearts (23.2%), hydronephrosis for the kidneys (39.4%), and chronic mastitis (54%) for the udders. In second place, we recorded pneumonia (16%) for the lungs and chronic fascioliasis (25%) for the livers. A significant difference (p < 0.0001) was observed by sex, breed, origin, and age among all seized cattle, and specific seizure patterns and pathologies were recorded by breed. The local breed presented 75.2% of hydatid cysts, 95% of chronic fascioliasis, and 60% of pyelonephritis, whereas the improved breed presented all the respiratory lesions, including pneumonia (64%), chronic tuberculosis (64%), and mastitis (76%). These results are an important step in the implementation of the concept of risk assessment as the scientific basis of food legislation, through the identification and characterization of macroscopic lesions leading to withdrawals of meat, and in establishing the level of inclusion of these injuries within recommended risk assessment systems (HACCP).
Keywords: slaughterhouses, meat safety, seizure patterns, HACCP
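The organ-seizure percentages above are shares of the total number of seized items. The sketch below reproduces that calculation from the reported counts; small differences from the reported figures are expected, since the abstract's rounding convention is not stated.

```python
# Share of each seized organ among all seizures, from the counts reported
# in the abstract (325 lungs, 164 livers, 149 hearts, 38 kidneys,
# 33 udders, 16 whole carcasses).
seizures = {"lungs": 325, "livers": 164, "hearts": 149,
            "kidneys": 38, "udders": 33, "whole carcasses": 16}
total = sum(seizures.values())   # total seized items
shares = {organ: 100 * n / total for organ, n in seizures.items()}
```

The shares sum to 100% by construction, and the lungs account for roughly 44.8% of seizures, matching the reported 44% to rounding.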
Procedia PDF Downloads 465
4554 Chatbots as Language Teaching Tools for L2 English Learners
Authors: Feiying Wu
Abstract:
Chatbots are computer programs that attempt to engage a human in dialogue; they originated in the 1960s with MIT's Eliza. However, they have become widespread more recently as advances in language technology have produced chatbots of increasing linguistic quality and sophistication, giving them the potential to serve as tools for Computer-Assisted Language Learning (CALL). The aim of this article is to assess the feasibility of using two chatbots, Mitsuku and CleverBot, as pedagogical tools for learning English as a second language by simulating L2 learners with distinct English proficiencies. The input of the simulated learners is screened with AntWordProfiler to match the expected vocabulary proficiency of each user. In total, there are four chat sessions, as each chatbot converses with both a beginner and an advanced learner. The evaluation focuses on the chatbots' responses from a linguistic standpoint, encompassing the vocabulary and sentence levels. The vocabulary level is determined by the vocabulary range and the reaction to misspelled words. Grammatical accuracy and responsiveness to poorly formed sentences are assessed at the sentence level. In addition, the assessment sets 25% of the input to be lexically or grammatically incorrect in order to determine the chatbots' corrective ability toward different linguistic forms. Based on statistical evidence and illustrative examples, and despite the small sample size, neither Mitsuku nor CleverBot proves ideal as an educational tool, judging by their performance in word range, grammatical accuracy, topic range, and corrective feedback for incorrect words and sentences; they serve rather as conversational tools for beginners of L2 English.
Keywords: chatbots, CALL, L2, corrective feedback
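The 25% incorrect-input condition described above can be generated programmatically. The helper below is a hypothetical sketch of one way to do it: misspell roughly one word in four by swapping two adjacent letters. The abstract only states the 25% proportion; the corruption strategy itself is our assumption.

```python
import random

# Hypothetical generator for the 25% lexically incorrect input condition:
# misspell a fixed fraction of words by swapping two middle letters.
# The swap strategy is an illustrative assumption, not the paper's method.
def corrupt(sentence, rate=0.25, seed=0):
    rng = random.Random(seed)        # seeded for reproducible sessions
    words = sentence.split()
    n = max(1, int(len(words) * rate))
    for i in rng.sample(range(len(words)), n):
        w = words[i]
        if len(w) > 3:               # swap two middle letters
            j = len(w) // 2
            w = w[:j - 1] + w[j] + w[j - 1] + w[j + 1:]
        words[i] = w
    return " ".join(words)
```

Seeding the generator makes each simulated learner session reproducible, so both chatbots can be probed with identical corrupted prompts.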
Procedia PDF Downloads 78
4553 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem and apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and we choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme, which introduces adjustable weight coefficients into the PINN loss, has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation
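The weighted loss function described above combines the PDE residual at collocation points with the initial/boundary data misfit, each scaled by an adjustable weight. A minimal sketch of that combination is below; the weight values are illustrative assumptions, not the paper's tuned coefficients.

```python
# Minimal sketch of a weighted PINN loss: total loss is a weighted sum of
# the mean-squared PDE residual and the mean-squared initial/boundary data
# misfit. The weights w_res and w_data are the adjustable coefficients.
def mse(errors):
    return sum(e * e for e in errors) / len(errors)

def weighted_pinn_loss(residuals, data_misfits, w_res=1.0, w_data=10.0):
    return w_res * mse(residuals) + w_data * mse(data_misfits)
```

Raising w_data relative to w_res forces the optimizer to honor the initial-boundary data more strongly, which is the lever the adjustable-weight scheme exploits.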
Procedia PDF Downloads 70
4552 A Kinetic Study on Recovery of High-Purity Rutile TiO₂ Nanoparticles from Titanium Slag Using Sulfuric Acid under Sonochemical Procedure
Authors: Alireza Bahramian
Abstract:
High-purity TiO₂ nanoparticles (NPs) with sizes ranging between 50 nm and 100 nm are synthesized from titanium slag through the sulphate route under a sonochemical procedure. The effects of dissolution parameters such as the sulfuric acid/slag weight ratio, caustic soda concentration, digestion temperature and time, and initial particle size of the dried slag on the extraction efficiency of TiO₂ and the removal of iron are examined. By optimizing the digestion conditions, a rutile TiO₂ powder with a surface area of 42 m²/g and a mean pore diameter of 22.4 nm was prepared. A thermo-kinetic analysis showed that the digestion temperature has an important effect, while the acid/slag weight ratio and the initial size of the slag have a moderate effect on the dissolution rate. The shrinking-core model, including both chemical surface reaction and surface diffusion, is used to describe the leaching process. The low value of the activation energy, 38.12 kJ/mol, indicates that the surface chemical reaction is the rate-controlling step. The kinetic analysis suggested a first-order reaction mechanism with respect to the acid concentration.
Keywords: TiO₂ nanoparticles, titanium slag, dissolution rate, sonochemical method, thermo-kinetic study
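For surface-reaction control, the shrinking-core model gives 1 - (1 - x)^(1/3) = k t, with the rate constant k following Arrhenius behaviour at the reported activation energy of 38.12 kJ/mol. The sketch below illustrates both relations; the temperatures and rate constant used are assumptions for illustration, not values from the study.

```python
import math

# Shrinking-core kinetics under surface-reaction control plus an Arrhenius
# temperature correction. Ea = 38.12 kJ/mol is from the abstract; the
# example temperatures and k value are illustrative assumptions.
R = 8.314  # gas constant, J/(mol K)

def conversion(k, t):
    """Leached fraction x at time t from 1 - (1 - x)**(1/3) = k*t."""
    g = min(k * t, 1.0)              # g(x), capped at full conversion
    return 1.0 - (1.0 - g) ** 3

def arrhenius_ratio(Ea, T1, T2):
    """Rate-constant ratio k(T2)/k(T1) for activation energy Ea (J/mol)."""
    return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))
```

With Ea = 38.12 kJ/mol, warming from 60 °C to 90 °C roughly triples the rate constant, consistent with temperature being the dominant dissolution parameter.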
Procedia PDF Downloads 254
4551 Influence of P-Y Curves on Buckling Capacity of Pile Foundation
Authors: Praveen Huded, Suresh Dash
Abstract:
Pile foundations are one of the most preferred deep foundation systems for high-rise or heavily loaded structures. In many instances, failure of pile-founded structures in liquefiable soils has been observed even in recent earthquakes. Recent centrifuge and shake table experiments on two-layered soil systems have credibly shown that failure of a pile foundation can occur because of buckling, as the pile behaves as an unsupported slender structural element once the surrounding soil liquefies. The buckling capacity, however, depends largely on the depth of the liquefied soil and its residual strength. Hence it is essential to check the pile against possible buckling failure. Beam on Non-linear Winkler Foundation is one of the efficient methods to model pile-soil behavior in liquefiable soil. The pile-soil interaction is modelled through p-y springs, and different authors have proposed different types of p-y curves for liquefiable soil. In the present paper, the influence of two such p-y curves on the buckling capacity of a pile foundation is studied, considering the initial geometric imperfection and the non-linear behavior of the pile. The proposed method is validated against experimental results. A significant difference in buckling capacity is observed for the two p-y curves used in the analysis. A parametric study is conducted to understand the influence of pile diameter, pile flexural rigidity, different initial geometric imperfections, and different soil relative densities on the buckling capacity of the pile foundation.
Keywords: pile foundation, liquefaction, buckling load, non-linear p-y curve, OpenSees
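The buckling check that motivates the study treats the pile, once the surrounding soil liquefies, as an unsupported column with Euler critical load P_cr = pi^2 E I / L_eff^2. The sketch below illustrates the check; the pile dimensions, modulus, and end conditions are assumptions for illustration only.

```python
import math

# Euler buckling load of a pile treated as an unsupported column over the
# liquefied depth. The 0.5 m diameter, 30 GPa concrete modulus, 10 m
# unsupported length, and pinned-pinned ends are illustrative assumptions.
def euler_buckling_load(E, I, L_eff):
    """Critical axial load (N) for effective unsupported length L_eff (m)."""
    return math.pi ** 2 * E * I / L_eff ** 2

d = 0.5                                 # pile diameter, m (assumed)
E = 30e9                                # concrete modulus, Pa (assumed)
I = math.pi * d ** 4 / 64               # second moment of area of a circle
P_cr = euler_buckling_load(E, I, L_eff=10.0)   # ~9 MN for these numbers
```

Because P_cr scales with 1/L_eff^2, doubling the liquefied depth cuts the buckling capacity to a quarter, which is why the depth of liquefaction dominates the check.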
Procedia PDF Downloads 164
4550 OptiBaha: Design of a Web Based Analytical Tool for Enhancing Quality of Education at AlBaha University
Authors: Nadeem Hassan, Farooq Ahmad
Abstract:
The quality of education has a direct impact on the individual, family, society, the economy in general, and mankind as a whole. Because of that, thousands of research papers and articles are written on the quality of education, and billions of dollars are spent, and continuously being spent, on researching and enhancing it. Accreditation agencies for academic programs define the various criteria of quality of education; academic institutions obtain accreditation from these agencies to ensure that the degree programs offered at their institution meet international standards. This R&D aims to build a web-based analytical tool (OptiBaha) that finds the gaps in the AlBaha University education system by taking input from stakeholders, including students, faculty, staff, and management. The input/online data collected by this tool will be analyzed on the core areas of education proposed by accreditation agencies, CAC of ABET and NCAAA of KSA, including student background, language, culture, motivation, curriculum, teaching methodology, assessment and evaluation, performance and progress, facilities, availability of teaching materials, faculty qualification, monitoring, policies and procedures, and more. Based on different analytical reports, gaps will be highlighted and remedial actions proposed. If the tool is implemented and made available through a continuous process, the quality of education at AlBaha University can be enhanced; it will also help in fulfilling the criteria of accreditation agencies. The tool will be generic in nature and ultimately can be used by any academic institution.
Keywords: academic quality, accreditation agencies, higher education, policies and procedures
Procedia PDF Downloads 301
4549 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is growing. Therefore, we need more accurate predictions of rainfall and runoff. However, gauge rainfall has limited accuracy in space. Radar rainfall is better than gauge rainfall for explaining the spatial variability of rainfall, but it is mostly underestimated, with uncertainty involved. Therefore, an ensemble of radar rainfall was simulated using an error structure, with gauge rainfall as a reference, to overcome this uncertainty. The simulated ensemble was used as the input data of rainfall-runoff models for obtaining an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis with different models in the same basin, the models can produce different results because of the uncertainty involved in the models themselves. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han river basin, and we obtained one integrated runoff hydrograph, an optimum runoff hydrograph, using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE). From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using the ensemble scenario and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
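Two of the blending methods named above can be sketched directly: the Simple Model Average (SMA) and an inverse-MSE weighted blend (one common reading of the MSE-based method; the exact weighting used in the paper is not given, so this formulation is an assumption). Hydrographs are lists of discharge values at common time steps.

```python
# Blending of ensemble runoff hydrographs: SMA averages the members
# time-step by time-step; the inverse-MSE blend (an assumed formulation)
# weights each member by 1/MSE against an observed hydrograph.
def sma(hydrographs):
    return [sum(vals) / len(vals) for vals in zip(*hydrographs)]

def mse(model, observed):
    return sum((m - o) ** 2 for m, o in zip(model, observed)) / len(observed)

def inverse_mse_blend(hydrographs, observed):
    weights = [1.0 / mse(h, observed) for h in hydrographs]
    total = sum(weights)
    return [sum(w * v for w, v in zip(weights, vals)) / total
            for vals in zip(*hydrographs)]
```

When all members fit the observations equally well, the inverse-MSE blend reduces to the SMA; otherwise it pulls the integrated hydrograph toward the better-performing model.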
Procedia PDF Downloads 280
4548 Toward Green Infrastructure Development: Dispute Prevention Mechanisms along the Belt and Road and Beyond
Authors: Shahla Ali
Abstract:
In the context of promoting green infrastructure development, new opportunities are emerging to re-examine sustainable development practices. This paper presents an initial exploration of the development of community-investor dispute prevention and facilitation mechanisms in the context of the Belt and Road Initiative (BRI) spanning Asia, Africa, and Europe. Given the wide-scale impact of China's multi-jurisdictional development initiative, learning how to coordinate with local communities is vital to realizing inclusive and sustainable growth. In the 20 years since the development of the first multilateral community-investor dispute resolution mechanism by the International Finance Corporation/World Bank, much has been learned about public facilitation, community engagement, and dispute prevention during the early stages of major infrastructure development programs. This paper explores initial findings as they relate to initiatives underway along the BRI within the Asian Infrastructure Investment Bank and the Asian Development Bank. Given the borderless nature of sustainability concerns, insights from diverse regions are critical to deepening insight into best practices. Drawing on a case-based methodology, this paper explores the achievements, challenges, and lessons learned in community-investor dispute prevention and resolution for major infrastructure projects in the greater China region.
Keywords: law and development, dispute prevention, sustainable development, mitigation
Procedia PDF Downloads 106
4547 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil
Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes
Abstract:
Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they end up becoming disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models work from the combination of slope stability models and hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and associates the hydrological model, which is based on the Richards equation, with the stability model based on the limit equilibrium principle. Therefore, the aims of this work are to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the south-east of Sao Paulo State in the Mantiqueira Mountains and has a historic landslide register. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model.
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivity. The analyzed period is from March 6th to March 8th, 2017. As a result, the TRIGRS model calculated the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall event, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, proving its efficiency in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which accumulated rainfall higher than 80 mm/m² in 72 hours may trigger landslides on urban and natural slopes. The geotechnical and geophysical methods proved very useful for identifying the soil properties and providing the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with the modeling of landslide-susceptible areas using TRIGRS is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey
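The stability half of TRIGRS evaluates an infinite-slope Factor of Safety at each grid cell and depth, FS = tan(phi)/tan(delta) + (c - psi * gamma_w * tan(phi)) / (gamma_s * z * sin(delta) * cos(delta)), where psi is the transient pressure head supplied by the infiltration model. The sketch below illustrates that expression; the soil parameter values are illustrative assumptions, not values from the Campos do Jordao study.

```python
import math

# Infinite-slope Factor of Safety of the TRIGRS form: the transient
# pressure head psi (from rainfall infiltration) reduces the effective
# cohesive contribution. Parameter values below are assumptions.
def factor_of_safety(c, phi_deg, delta_deg, z, psi,
                     gamma_s=18e3, gamma_w=9.81e3):
    """c in Pa, angles in degrees, depth z and head psi in m,
    unit weights in N/m^3."""
    phi, delta = math.radians(phi_deg), math.radians(delta_deg)
    return (math.tan(phi) / math.tan(delta)
            + (c - psi * gamma_w * math.tan(phi))
            / (gamma_s * z * math.sin(delta) * math.cos(delta)))

fs_dry = factor_of_safety(c=5e3, phi_deg=30, delta_deg=35, z=2.0, psi=0.0)
fs_wet = factor_of_safety(c=5e3, phi_deg=30, delta_deg=35, z=2.0, psi=1.0)
```

Rising pore pressure during a storm increases psi and drives FS down, which is the declining-FS behaviour the study observed after each rainfall event.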
Procedia PDF Downloads 173
4546 Study on Compressive Strength and Setting Time of Fly Ash Concrete after Slump Recovery Using Superplasticizer
Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert
Abstract:
Fresh concrete that is bound to be rejected due to belated use, whether from delays in the construction process or traffic congestion that delays concrete delivery, can recover its slump and be used once again by introducing a second dose of superplasticizer (naphthalene-based, Type F) into the system. Adding superplasticizer as a solution for recovering otherwise unusable slump-loss concrete, however, may affect other concrete properties. Therefore, this paper observed the setting time and compressive strength of concrete after re-dosing with chemical admixture Type F (superplasticizer, naphthalene-based) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement of 0%, 30%, and 50%, respectively. The concrete mix designed for the test specimens was prepared with a paste content (ratio of volume of cement to volume of void in the aggregate) of 1.2 and 1.3, a water-to-binder ratio (w/b) range of 0.3 to 0.58, and an initial dose of superplasticizer (SP) ranging from 0.5 to 1.6%. The setting time of the concrete was tested both before and after re-dosing with different amounts of the second dose and different times of dosing. The research concluded that the addition of a second dose of superplasticizer increases both initial and final setting times in proportion to the dosage added. As for fly ash concrete, the prolongation effect was higher as the fly ash replacement increased, and can reach up to a maximum of about 4 hours. In the case of compressive strength, the re-dosed concrete showed strength fluctuation within the acceptable range of ±10%.
Keywords: compressive strength, fly ash concrete, second dose of superplasticizer, setting times
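The ±10% acceptance band used in the strength conclusion above amounts to a simple relative-deviation check against the reference mix. A minimal sketch, with illustrative (assumed) strength values in MPa:

```python
# Acceptance check implied by the abstract's conclusion: re-dosed concrete
# passes if its compressive strength is within +/-10% of the reference mix.
# The strength values in the assertions are illustrative assumptions.
def within_tolerance(strength, reference, tol=0.10):
    return abs(strength - reference) / reference <= tol

assert within_tolerance(38.5, 40.0)      # -3.75%: inside the band
assert not within_tolerance(35.0, 40.0)  # -12.5%: outside the band
```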
Procedia PDF Downloads 281
4545 Technological Innovation and Efficiency of Production of the Greek Aquaculture Industry
Authors: C. Nathanailides, S. Anastasiou, A. Dimitroglou, P. Logothetis, G. Kanlis
Abstract:
In the present work, we reviewed historical data of the Greek marine aquaculture industry, including the adoption of new methods and technological innovation. The results indicate that the industry exhibited a rapid rise in production efficiency, employment, and adoption of new technologies, which reduced outbreaks of diseases, production risk, and the price of farmed fish. The improvements from total quality practices and technological input in the Greek aquaculture industry include improved survival, growth, and body shape of farmed fish, which resulted from the development of new aquaculture feeds and the genetic selection of the broodstock. Improvements in the quality of the final product were also achieved via technological input in the methods and technology applied during harvesting, packaging, and transportation-preservation of farmed fish, ensuring high quality of the product from the fish farm to the plate of the consumer. These parameters (health management, nutrition, genetics, harvesting and post-harvesting methods and technology) changed significantly over the last twenty years, and the results of these improvements are reflected in the production efficiency of the aquaculture industry and the quality of the final product. It is concluded that the Greek aquaculture industry exhibited rapid growth and adoption of technologies, and supply stabilized after the global financial crisis; nevertheless, the development of the Greek aquaculture industry is currently limited by international trade sanctions, the credit crunch, and increased taxation, and not by limited technology or resources.
Keywords: innovation, aquaculture, total quality, management
Procedia PDF Downloads 372
4544 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to uncertainties in the initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to figure out the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components, which have eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models which show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% each). Then, the other 80% of models are classified by the trained SVM. We select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a geological trend similar to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results; it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by figuring out the proper channel trend. Furthermore, it gives a dependable prediction of future performance with reduced uncertainties.
We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a channel trend similar to the reference in the lowered-dimension space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
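Two of the steps above are simple to sketch: ranking ensemble members by WOPR error to pick the most similar and dissimilar members as SVM training data, and averaging retained low-error members into a cell-wise probability map. The PCA/MDS projection and the SVM itself are omitted; fields here are flattened channel-indicator lists (1 = channel, 0 = background), which is an illustrative simplification.

```python
# Selection and probability-map steps from the workflow above, on
# illustrative flattened indicator fields. Dimensionality reduction and
# the SVM classifier are deliberately left out of this sketch.
def select_extremes(errors, frac=0.1):
    """Indices of the frac most similar and frac most dissimilar models,
    ranked by WOPR error (lower error = more similar to the reference)."""
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    n = max(1, int(len(errors) * frac))
    return order[:n], order[-n:]

def probability_map(fields):
    """Cell-wise channel probability over the selected ensemble members."""
    return [sum(cell) / len(fields) for cell in zip(*fields)]
```

Cells where most retained members agree on channel facies approach probability 1, and sampling new models from this map is what preserves the correct channel trend.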
Procedia PDF Downloads 160
4543 A Comparison between Bèi Passives and Yóu Passives in Mandarin Chinese
Authors: Rui-heng Ray Huang
Abstract:
This study compares the syntax and semantics of two kinds of passives in Mandarin Chinese: bèi passives and yóu passives. To express a Chinese equivalent for ‘The thief was taken away by the police,’ either bèi or yóu can be used, as in Xiǎotōu bèi/yóu jǐngchá dàizǒu le. It is shown in this study that bèi passives and yóu passives differ semantically and syntactically. The semantic observations are based on the theta theory, dealing with thematic roles. On the other hand, the syntactic analysis draws heavily upon the generative grammar, looking into thematic structures. The findings of this study are as follows. First, the core semantics of bèi passives is centered on the Patient NP in the subject position. This Patient NP is essentially an Affectee, undergoing the outcome or consequence brought up by the action represented by the predicate. This may explain why in the sentence Wǒde huà bèi/*yóu tā niǔqū le ‘My words have been twisted by him/her,’ only bèi is allowed. This is because the subject NP wǒde huà ‘my words’ suffers a negative consequence. Yóu passives, in contrast, place the semantic focus on the post-yóu NP, which is not an Affectee though. Instead, it plays a role which has to take certain responsibility without being affected in a way like an Affectee. For example, in the sentence Zhèbù diànyǐng yóu/*bèi tā dānrèn dǎoyǎn ‘This film is directed by him/her,’ only the use of yóu is possible because the post-yóu NP tā ‘s/he’ refers to someone in charge, who is not an Affectee, nor is the sentence-initial NP zhèbù diànyǐng ‘this film’. When it comes to the second finding, the syntactic structures of bèi passives and yóu passives differ in that the former involve a two-place predicate while the latter a three-place predicate. 
The passive morpheme bèi in a case like Xiǎotōu bèi jǐngchá dàizǒu le 'The thief was taken away by the police' has been argued by some Chinese syntacticians to be a two-place predicate which selects an Experiencer subject and an Event complement. Under this analysis, the initial NP xiǎotōu 'the thief' in the above example is a base-generated subject. This study, however, proposes that yóu passives fall into a three-place unergative structure. In the sentence Xiǎotōu yóu jǐngchá dàizǒu le 'The thief was taken away by the police,' the initial NP xiǎotōu 'the thief' is a topic which serves as a Patient taken by the verb dàizǒu 'take away.' The subject of the sentence is assumed to be an Agent, which is in a null form and may find its reference from the discourse or world knowledge. Regarding the post-yóu NP jǐngchá 'the police,' its status is dual. On the one hand, it is a Patient introduced by the light verb yóu; on the other, it is an Agent assigned by the verb dàizǒu 'take away.' It is concluded that the findings in this study contribute to better understanding of what makes the distinction between the two kinds of Chinese passives.
Keywords: affectee, passive, patient, unergative
Procedia PDF Downloads 273
4542 Calculation of the Normalized Difference Vegetation Index and the Spectral Signature of Coffee Crops: Benefits of Image Filtering on Mixed Crops
Authors: Catalina Albornoz, Giacomo Barbieri
Abstract:
Crop monitoring has been shown to reduce vulnerability to spreading plagues and pathologies in crops. Remote sensing with Unmanned Aerial Vehicles (UAVs) has made crop monitoring more precise, cost-efficient and accessible. Nowadays, remote monitoring involves calculating maps of vegetation indices using software tools that take either true-color (RGB) or multispectral images as input. These maps are then used to segment the crop into management zones. Finally, the spectral signature of a crop (the reflected radiation as a function of wavelength) can be used as an input for decision-making and crop characterization. The calculation of vegetation indices using software such as Pix4D has high precision for monoculture plantations. However, this paper shows that using such software on mixed crops may lead to errors resulting in an incorrect segmentation of the field. Within this work, the authors propose to filter out all elements different from the main crop before the calculation of vegetation indices and the spectral signature. A filter based on the Sobel method for border detection is used for filtering a coffee crop. Results show that segmentation into management zones changes with respect to the traditional situation in which no filter is applied. In particular, it is shown that the values of the spectral signature change by up to 17% per spectral band. Future work will quantify the benefits of filtering through the comparison between in situ measurements and the calculated vegetation indices obtained through remote sensing.
Keywords: coffee, filtering, mixed crop, precision agriculture, remote sensing, spectral signature
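The border-filtering step can be sketched as follows: a minimal NumPy illustration (not the authors' implementation) that computes a Sobel gradient magnitude on one band and keeps NDVI values only where the gradient is below a threshold. The band arrays and the threshold value are invented for the sketch.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator (valid interior only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

def masked_ndvi(nir, red, edge_threshold=1.0):
    """NDVI = (NIR - Red) / (NIR + Red), keeping only low-gradient
    (non-border) pixels of the valid interior region; borders become NaN."""
    ndvi = (nir - red) / (nir + red + 1e-12)
    mag = sobel_magnitude(red)          # borders detected on the red band
    interior = ndvi[1:-1, 1:-1]
    return np.where(mag < edge_threshold, interior, np.nan)
```

The masked NDVI map can then be segmented into management zones without the mixed-crop pixels that sit on field borders.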
Procedia PDF Downloads 388
4541 Natural Frequency Analysis of Spinning Functionally Graded Cylindrical Shells Subjected to Thermal Loads
Authors: Esmaeil Bahmyari
Abstract:
The natural frequency analysis of functionally graded (FG) rotating cylindrical shells subjected to thermal loads is studied based on the three-dimensional elasticity theory. The material properties are assumed to be temperature dependent and are graded in the thickness direction according to a simple power-law distribution. The governing equations and the appropriate boundary conditions, which include the effects of initial thermal stresses, are derived employing Hamilton’s principle. The initial thermo-mechanical stresses are obtained from the solution of the thermo-elastic equilibrium equations. As an efficient and accurate numerical tool, the differential quadrature method (DQM) is adopted to solve the thermo-elastic equilibrium and free vibration equations, and the natural frequencies are obtained. The high accuracy of the method is demonstrated by comparison studies with existing solutions in the literature. Ultimately, parametric studies are performed to demonstrate the effects of boundary conditions, temperature rise, material graded index, thickness-to-length ratio and aspect ratio on the natural frequency of the rotating cylindrical shells.
Keywords: free vibration, DQM, elasticity theory, FG shell, rotating cylindrical shell
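The simple power-law grading mentioned above is commonly written as E(z) = (Ec - Em)(z/h + 1/2)^p + Em over the thickness z in [-h/2, h/2]. A small sketch under that assumption follows; the material constants are placeholders, as the abstract gives no values.

```python
import numpy as np

def graded_modulus(z, h, E_ceramic, E_metal, p):
    """Simple power-law (P-FGM) distribution through the thickness:
    E(z) = (Ec - Em) * (z/h + 1/2)**p + Em,  z in [-h/2, h/2].
    At z = h/2 the property is fully ceramic; at z = -h/2 fully metallic."""
    V = (z / h + 0.5) ** p          # ceramic volume fraction
    return (E_ceramic - E_metal) * V + E_metal
```

The graded index p controls how quickly the property sweeps from the metallic to the ceramic value; p = 1 gives a linear variation.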
Procedia PDF Downloads 84
4540 Organizational Decision to Adopt Digital Forensics: An Empirical Investigation in the Case of Malaysian Law Enforcement Agencies
Authors: Siti N. I. Mat Kamal, Othman Ibrahim, Mehrbakhsh Nilashi, Jafalizan M. Jali
Abstract:
The use of digital forensics (DF) is nowadays essential for law enforcement agencies to identify, analyze and interpret digital information derived from digital sources. In Malaysia, the engagement of Malaysian Law Enforcement Agencies (MLEA) with this new technology is not evenly distributed. To investigate the factors influencing the adoption of DF in the operational environment of Malaysian law enforcement agencies, this study proposed an initial theoretical framework based on the integration of the technology-organization-environment (TOE) framework, institutional theory, and the human-organization-technology (HOT) fit model. A questionnaire survey was conducted on selected law enforcement agencies in Malaysia to verify the validity of the initial integrated framework. Relative advantage, compatibility, coercive pressure, normative pressure, vendor support and perceived technical competence of technical staff were found to be influential factors in digital forensics adoption. Agency size, the only moderator in this study, showed no significant moderating effect on the relationship between perceived technical competence and the decision to adopt digital forensics by Malaysian law enforcement agencies. Thus, these results indicated that the developed integrated framework provides an effective prediction of digital forensics adoption by Malaysian law enforcement agencies.
Keywords: digital forensics, digital forensics adoption, digital information, law enforcement agency
Procedia PDF Downloads 151
4539 The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry
Authors: Abilio Avila, Orestis Terzidis
Abstract:
A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that supports a researcher in identifying and focusing on critical areas of a research project while preventing the formation of concepts prejudiced by the current body of literature. The approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to form a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of action, a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.
Keywords: grounded theory, interview guide, qualitative research, secondary case studies, secondary data analysis
Procedia PDF Downloads 266
4538 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have been seeing an increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations for source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
Procedia PDF Downloads 158
4537 The Transformative Landscape of the University of the Western Cape’s Elearning Center: Institutionalizing ELearning
Authors: Paul Dankers, Juliet Stoltenkamp, Carolynne Kies
Abstract:
In May 2005, the University of the Western Cape (UWC) established an eLearning Division (ED) that, over the past 18 years, has grown into an efficient, institutionalized eLearning Centre. The initial objective of the ED was to align itself continually with the emerging technologies of digital transformation, which progressively impacted Higher Education Institutions (HEIs) globally. In this paper, we present how the UWC eLearning Division first evolved into the eLearning Development and Support Unit (EDUS), currently called the Centre for Innovative Education and Communication Technologies (CIECT). CIECT was strategically separated from the Department of Information and Communication Services (ICS) in 2009 and repositioned as an independent structure at UWC. Using a comparative research method, we highlight the transformative eLearning landscape at UWC through a detailed account of the shift in practices. Our research method traces the initial vision and outcomes of institutionalizing an eLearning division and compares its rate of growth over time. By comparing the progressive growth of the UWC eLearning division over the years, we document its successes and achievements precisely. The outcomes of this study will act as a reference for new research on formalizing eLearning. More research that delves into the effectiveness of having an eLearning division at HEIs in support of students' teaching and learning is needed.
Keywords: eLearning, institutionalization, teaching and learning, transformation
Procedia PDF Downloads 40
4536 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques
Authors: Songul Cinaroglu
Abstract:
Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, dividing the PHUs into two clusters in terms of similarities of input and output indicators. Numbers of beds, physicians and nurses were determined as input variables, and numbers of outpatients, inpatients and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs which have higher population, demand and service density than the others. The difference between the clusters was statistically significant in terms of all study variables (p ˂ 0.001). After clustering, DEA was performed for the sample as a whole and for the two clusters separately. It was found that 11% of PHUs were efficient overall; additionally, 21% and 17% of them were efficient for the first and second clusters, respectively. PHUs which represent urban parts of the country and have higher population and service density are thus more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor of PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
Keywords: public hospital unions, efficiency, data envelopment analysis, random forest
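The input-oriented CCR envelopment model, a standard DEA formulation, can be solved per hospital union with a small linear program. The sketch below uses SciPy and toy data, not the study's dataset, and assumes constant returns to scale.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of decision-making unit j0.
    X: (m, n) input matrix, Y: (s, n) output matrix for n units.
    Variables are [theta, lambda_1..lambda_n]:
      min theta  s.t.  X @ lam <= theta * x_j0,  Y @ lam >= y_j0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    # input constraints:  -theta * x_j0 + X @ lam <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # output constraints: -Y @ lam <= -y_j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.fun   # efficiency score theta in (0, 1]
```

A unit on the efficient frontier scores theta = 1; a unit using twice the inputs of a peer for the same outputs scores 0.5.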
Procedia PDF Downloads 126
4535 Grammar as a Logic of Labeling: A Computer Model
Authors: Jacques Lamarche, Juhani Dickinson
Abstract:
This paper introduces a computational model of a Grammar as Logic of Labeling (GLL), where the lexical primitives of morphosyntax are phonological matrixes, the form of words, understood as labels that apply to realities (or targets) assumed to be outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, this label in a complex (constituent) label is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring - every grammatical constituent has a head label that determines the target of the constituent, and a limiter label (the non-head) that restricts this target. The N or A values are restricted to limiter label, the two differing in terms of alignment with a head. Consider the head initial DP ‘the dog’: the label ‘dog’ gets an N value because it is a limiter that is evenly aligned with the head ‘the’, restricting application of the DP. Adapting a traditional analysis of ‘the’ to GLL – apply label to something familiar – the DP targets and identifies one reality familiar to participants by applying to it the label ‘dog’ (singular). Consider next the DP ‘the large dog’: ‘large dog’ is nominal by even alignment with ‘the’, as before, and since ‘dog’ is the head of (head final) ‘large dog’, it is also nominal. 
The label ‘large’, however, is adjectival by narrow alignment with the head ‘dog’: it doesn’t target the head but targets a property of what ‘dog’ applies to (a property or attribute value). In other words, the internal composition of constituents determines that a form targets a property or a reality: ‘large’ and ‘dog’ happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the 8 possible sequences of grammatical values with three labels after the determiner (the x y z): 1- D [ N [ N N ] ]; 2- D [ A [ N N ] ]; 3- D [ N [ A N ] ]; 4- D [ A [ A N ] ]; 5- D [ [ N N ] N ]; 6- D [ [ A N ] N ]; 7- D [ [ N A ] N ]; 8- D [ [ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge using speakers’ judgments about the validity of lexical meaning in grammatical patterns.
Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar
Procedia PDF Downloads 38
4534 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based super alloys, such as commercial alloy IN 738 LC, are widely used in manufacturing of industrial gas turbine blades. With carefully designed microstructure and the existence of alloying elements, the blades show improved mechanical properties at high operating temperatures and corrosive environment. The aim of this work is to model and estimate these mechanical properties of IN 738 LC alloy solely based on simulations for projected heat treatment conditions or service conditions. The microstructure (size, fraction and frequency of gamma prime- γ′ and carbide phases in gamma- γ matrix, and grain size) of IN 738 LC needs to be optimized to improve the high temperature mechanical properties by heat treatment process. This process can be performed at different soaking temperature, time and cooling rates. In this work, micro-structural evolution studies were performed experimentally at various heat treatment process conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate provided by experimental heat treatment procedures were used as micro-structural simulation input. The results of this simulation were compared with the size, fraction and frequency of γ′ and carbide phases, and grain size provided by SEM (EDS module and mapping), EPMA (WDS module) and optical microscope for before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to fit the real time and theoretical findings. Thereby, it was possible to estimate the final micro-structure without any necessity to carry out the heat treatment experiment. The output of this microstructure simulation based on heat treatment was used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of precipitation, solid solution and grain boundary strengthening contributors in microstructure. 
Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions that achieve the desired microstructural and mechanical properties was developed for IN 738 LC based completely on simulations.
Keywords: heat treatment, IN738LC, simulations, super-alloys
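The abstract describes yield stress as a sum of precipitation, solid-solution and grain-boundary strengthening contributions. A generic linear-superposition sketch follows; the constants, units and functional forms are placeholders, since the study's actual model is not given in the abstract.

```python
import math

def yield_stress(sigma_0, k_hp, grain_size_um, sigma_ss, sigma_ppt):
    """Linear superposition of strengthening terms (placeholder constants):
    lattice friction stress + Hall-Petch grain-boundary term (k_hp / sqrt(d),
    grain size d converted from micrometres to millimetres) + solid-solution
    + precipitation (gamma-prime) strengthening, all in MPa."""
    hall_petch = k_hp / math.sqrt(grain_size_um * 1e-3)   # d in mm
    return sigma_0 + hall_petch + sigma_ss + sigma_ppt
```

With a microstructure simulation providing grain size and precipitate data, each term can be re-evaluated for every candidate heat treatment without running the experiment.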
Procedia PDF Downloads 248
4533 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images
Authors: Sophia Shi
Abstract:
Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI patient mortality rate in the ICU is high, and onset is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine among others, two numeric models using the MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNNs) were used, VGG and ResNet for the numeric data and ResNet for the image data, and they were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after running through Softmax. The hybrid model successfully predicted AKI: the highest AUROC of the model was 0.953, achieving an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient’s admission, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG
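The fusion step, concatenating the feature maps of the numeric and image models before the final layers, can be sketched with plain NumPy. The shapes and the single dense layer below are illustrative, not the paper's architecture.

```python
import numpy as np

def concat_features(numeric_feats, image_feats):
    """Fuse per-patient feature vectors produced by the numeric-data CNN
    and the image CNN by concatenation; the fused vector becomes the new
    input for the final classification layers."""
    return np.concatenate([numeric_feats, image_feats], axis=-1)

def dense_sigmoid(x, W, b):
    """A single fully connected layer with a sigmoid output, standing in
    for the final binary AKI-risk head of the hybrid model."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))
```

In the real model the concatenated vector would pass through another CNN block and two fully connected layers before the binary output.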
Procedia PDF Downloads 131
4532 Adsorption of Basic Dyes Using Activated Carbon Prepared from Date Palm Fibre
Authors: Riham Hazzaa, Mohamed Hussien Abd El Megid
Abstract:
Dyes are toxic and cause severe problems to the aquatic environment. Agricultural solid wastes are considered low-cost and eco-friendly adsorbents for removing dyes from waste water. Date palm fibre, an abundant agricultural by-product in Egypt, was used to prepare activated carbon by a physical activation method. This study investigates the use of date palm fibre (DPF) and activated carbon (DPFAC) for the removal of a basic dye, methylene blue (MB), from simulated waste water. The effects of temperature, pH of solution, initial dye concentration, adsorbent dosage and contact time were studied. The experimental equilibrium adsorption data were analyzed by the Langmuir, Freundlich, Temkin, Dubinin-Radushkevich and Harkins-Jura isotherms. Adsorption kinetics data were modeled using the pseudo-first-order, pseudo-second-order and Elovich equations. The mechanism of the adsorption process was determined from the intraparticle diffusion model. The results revealed that as the initial dye concentration, amount of adsorbent and temperature increased, the percentage of dye removal increased. The optimum pH required for maximum removal was found to be 6. The adsorption of methylene blue dye was better described by the pseudo-second-order equation. Results indicated that DPFAC and DPF could be alternatives to more costly adsorbents used for dye removal.
Keywords: adsorption, basic dye, palm fiber, activated carbon
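The pseudo-second-order kinetic model that best described the data is usually fitted in its linearized form t/qt = 1/(k2·qe²) + t/qe, a straight line in t. A brief sketch of such a fit on synthetic data follows; the study's measurements are not reproduced here.

```python
import numpy as np

def fit_pseudo_second_order(t, qt):
    """Linearized pseudo-second-order model: t/q_t = 1/(k2*qe^2) + t/qe.
    A straight-line fit of t/q_t against t gives slope = 1/qe and
    intercept = 1/(k2*qe^2); returns (qe, k2)."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept     # 1/(intercept * qe^2)
    return qe, k2
```

A high linear correlation of t/qt versus t is the usual evidence that the pseudo-second-order equation describes the kinetics better than the pseudo-first-order one.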
Procedia PDF Downloads 331
4531 A Case Study on the Seismic Performance Assessment of the High-Rise Setback Tower Under Multiple Support Excitations on the Basis of TBI Guidelines
Authors: Kamyar Kildashti, Rasoul Mirghaderi
Abstract:
This paper describes the three-dimensional seismic performance assessment of a high-rise steel moment-frame setback tower, designed and detailed per the 2010 ASCE 7, under multiple support excitations. The vulnerability analyses are conducted based on nonlinear history analyses under a set of multi-directional strong ground motion records which are scaled to the design-based site-specific spectrum in accordance with ASCE 41-13. Spatial variation of the input motions between the far-distant supports of each part of the tower is considered by defining a time lag. Monotonic and cyclic plastic hinge behavior for prequalified steel connections, panel zones, as well as steel columns is obtained from predefined values presented in the TBI Guidelines, PEER/ATC 72 and FEMA P440A to include stiffness and strength degradation. Inter-story drift ratios, residual drift ratios, as well as plastic hinge rotation demands under multiple support excitations are compared to those obtained from uniform support excitations. Performance objectives based on the acceptance criteria declared by the TBI Guidelines are compared between uniform and multiple support excitations. The results demonstrate that input motion discrepancy has detrimental effects on the local and global response of the tower.
Keywords: high-rise building, nonlinear time history analysis, multiple support excitation, performance-based design
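The wave-passage time lag between far-distant supports can be illustrated by shifting a support's input record by distance/velocity. A toy sketch follows; the record, support spacing and apparent wave velocity are invented, and only the simple rigid wave-passage effect (no incoherence or local site effects) is modeled.

```python
import numpy as np

def delayed_record(accel, dt, distance, wave_velocity):
    """Apply a travel-time lag to a ground-motion record for a far support:
    shift the record by round(distance / velocity / dt) samples and
    zero-pad at the front, truncating to the original length."""
    lag = int(round(distance / wave_velocity / dt))
    return np.concatenate([np.zeros(lag), accel])[:accel.size]
```

Each support group of the setback tower would then be excited by its own lagged copy of the record, versus the same record everywhere in the uniform-excitation case.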
Procedia PDF Downloads 285
4530 Adsorption Performance of Hydroxyapatite Powder in the Removal of Dyes in Wastewater
Authors: Aderonke A. Okoya, Oluwaseun A. Somoye, Omotayo S. Amuda, Ifeanyi E. Ofoezie
Abstract:
This study assessed the efficiency of Hydroxyapatite Powder (HAP) in the removal of dyes in wastewater in comparison with Commercial Activated Carbon (CAC), with a view to developing a cost-effective method that could be more environment friendly. The HAP and CAC were used as adsorbents, while indigo dye was used as the adsorbate. The batch adsorption experiment was carried out by varying the initial concentration of the indigo dye, the contact time and the adsorbent dosage. Adsorption efficiency was characterized by adsorption isotherms using the Langmuir, Freundlich and D-R isotherm models. Physicochemical parameters of a textile industry wastewater were determined before and after treatment with the adsorbents. The results from the batch experiments showed that at an initial concentration of 125 mg/L of adsorbate in simulated wastewater, 0.9276 ± 0.004618 mg/g and 3.121 ± 0.006928 mg/g of indigo were adsorbed per unit time (qt) for HAP and CAC, respectively. The ratio of HAP to CAC required for the removal of indigo dye in simulated wastewater was 2:1. The isotherm data for the simulated wastewater fitted well to the Freundlich model; the adsorption intensity (1/n) was 1.399 and 0.564 for HAP and CAC, respectively. This revealed that the HAP formed weaker bonds than the electrostatic interactions present in CAC. The values of some physicochemical parameters (acidity, COD, Cr, Cd) of the textile wastewater decreased when treated with HAP. The study concluded that HAP, an environment-friendly adsorbent, could be effectively used to remove dye from textile industrial wastewater, with the added advantage of being regenerable.
Keywords: adsorption isotherm, commercial activated carbon, hydroxyapatite powder, indigo dye, textile wastewater
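The Freundlich adsorption intensity 1/n reported above (1.399 for HAP, 0.564 for CAC) is the slope of a log-log fit of the equilibrium uptake qe against the equilibrium concentration Ce. A short sketch of that fit on synthetic data:

```python
import numpy as np

def fit_freundlich(Ce, qe):
    """Freundlich isotherm qe = KF * Ce**(1/n). Fitting log10(qe) against
    log10(Ce) gives slope = 1/n (adsorption intensity) and
    intercept = log10(KF); returns (KF, 1/n)."""
    slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
    return 10 ** intercept, slope
```

A slope below 1 (as for CAC) indicates favorable adsorption, while a slope above 1 (as for HAP) points to weaker, cooperative binding.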
Procedia PDF Downloads 242
4529 Aggregating Buyers and Sellers for E-Commerce: How Demand and Supply Meet in Fairs
Authors: Pierluigi Gallo, Francesco Randazzo, Ignazio Gallo
Abstract:
In recent years, many new and interesting models of successful online business have been developed. Many of these are based on the competition between users, such as online auctions, where the product price is not fixed and tends to rise. Other models, including group-buying, are based on cooperation between users, characterized by a dynamic price of the product that tends to go down. There is not yet a business model in which both sellers and buyers are grouped in order to negotiate on a specific product or service. The present study investigates a new extension of the group-buying model, called fair, which allows aggregation of demand and supply for price optimization, in a cooperative manner. Additionally, our system also aggregates products and destinations for shipping optimization. We introduced the following new relevant input parameters in order to implement a double-side aggregation: (a) price-quantity curves provided by the seller; (b) waiting time, that is, the longer buyers wait, the greater discount they get; (c) payment time, which determines if the buyer pays before, during or after receiving the product; (d) the distance between the place where products are available and the place of shipment, provided in advance by the buyer or dynamically suggested by the system. To analyze the proposed model we implemented a system prototype and a simulator that allows studying the effects of changing some input parameters. We analyzed the dynamic price model in fairs having one single seller and a combination of selected sellers. The results are very encouraging and motivate further investigation on this topic.
Keywords: auction, aggregation, fair, group buying, social buying
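One possible reading of the dynamic price model, combining the seller's price-quantity curve (parameter a) with the waiting-time discount (parameter b), is sketched below; the parameter values and the discount rule are invented for illustration, not taken from the prototype.

```python
def fair_price(base_price, quantity, price_quantity_curve, waited_days,
               daily_discount=0.005, max_discount=0.25):
    """Illustrative double-aggregation pricing: the seller's price-quantity
    curve sets a volume price for the aggregated demand, and the buyer's
    waiting time adds a further capped discount (invented parameters)."""
    volume_price = base_price * price_quantity_curve(quantity)
    discount = min(daily_discount * waited_days, max_discount)
    return volume_price * (1.0 - discount)
```

For example, a curve granting 10% off for orders of 10 units, combined with ten days of waiting at 0.5% per day, turns a 100-unit base price into 100 × 0.9 × 0.95.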
Procedia PDF Downloads 294
4528 Effect of Diamagnetic Additives on Defects Level of Soft LiTiZn Ferrite Ceramics
Authors: Andrey V. Malyshev, Anna B. Petrova, Anatoly P. Surzhikov
Abstract:
The article presents the results of the influence of diamagnetic additives on the defects level of ferrite ceramics. For this purpose, we use a previously developed method based on the mathematical analysis of experimental temperature dependences of the initial permeability. A phenomenological expression for the description of such dependence was suggested, and an interpretation of its main parameters was given. It was shown that the main criterion of the integral defects level of ferrite ceramics is the ratio of two parameters correlating with the elastic stress value in the material. Model samples containing a controlled number of intergranular phase inclusions served to prove the validity of the proposed method, as well as to assess its sensitivity in comparison with traditional XRD (X-ray diffraction) analysis. The broadening data of the diffraction reflexes of the model samples served for this comparison. The defects level data obtained by the proposed method are in good agreement with the X-ray data, and the method showed high sensitivity. Therefore, the legitimacy of selecting the ratio of the β/α parameters of the phenomenological expression as a characteristic of the elastic state of the ferrite ceramics is confirmed. In addition, the obtained data can be used in the detection of non-magnetic phases and in testing the optimal sintering technology for the production of soft magnetic ferrites.
Keywords: cure point, initial permeability, integral defects level, homogeneity
Procedia PDF Downloads 134