Search results for: freshwater output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2257

367 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan , V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to achieve higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained on “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by performing atomic decomposition with Gabor or gammatone seed atoms: matching pursuit identifies segments of audio data that are locally coherent with the seed atoms, and the envelope samples are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability at data compression rates ranging from 90% to 99.84% for both speech and music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
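The matching-pursuit step that produces the locally coherent segments can be illustrated with a minimal sketch. This is our own illustration, not the authors' code; the Gabor dictionary parameters and the demo signal are assumptions.

```python
import numpy as np

def gabor_atom(n, cycles, sigma, center):
    # Gaussian-windowed cosine ("Gabor" seed atom), normalized to unit energy
    t = np.arange(n)
    envelope = np.exp(-0.5 * ((t - center) / sigma) ** 2)
    atom = envelope * np.cos(2 * np.pi * cycles * t / n)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, dictionary, n_iter):
    # Greedily select the atom most correlated with the residual and
    # subtract its projection, keeping (atom index, coefficient) pairs
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        coefficient = correlations[k]
        residual = residual - coefficient * dictionary[k]
        decomposition.append((k, coefficient))
    return decomposition, residual

# Small demo: 32 Gabor atoms (4 frequencies x 8 positions) over 256 samples
n = 256
dictionary = np.array([gabor_atom(n, c, 20.0, p)
                       for c in (4, 8, 16, 32)
                       for p in range(0, n, 32)])
signal = 3.0 * dictionary[5] + 0.5 * dictionary[20]
decomposition, residual = matching_pursuit(signal, dictionary, n_iter=8)
```

Because Gabor atoms are not mutually orthogonal, the pursuit may revisit atoms, but each iteration provably shrinks the residual energy.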

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 230
366 Decades of Educational Excellence: Case Studies of Successful Family-Owned Higher Educational Institutions

Authors: Maria Luz Macasinag

Abstract:

This study aims to determine and critically examine successful family-owned higher educational institutions in order to identify the attributes and practices that may have led to their success. The research is confined to private, non-sectarian, family-owned higher institutions of learning that have been operating for more than fifty years, had only one founder, and have undergone at least two generational transitions. The criteria for selecting the family-owned universities investigated include institutions (1) with increasing enrollment over the past five years, (2) with Level III accreditation status, (3) with good performance in the board examinations in most of their programs, and (4) with high employability of graduates. The study uses the multiple case study method. The output is a model based on a cross-case analysis of the attributes and practices of all the case studies of successful family-owned higher institutions of learning. The paper provides insights to current and future school owners and administrators in managing their institutions for competitiveness, sustainability, and advancement. It encourages evaluation of how the practices that may lead to the success of family-owned schools, such as developing a sense of community and reciprocal relationships among colleagues, students, and other stakeholders, can result in the attainment of the school's vision and mission. The study is beneficial to entrepreneurs and business students, whose know-how may be enriched by insights helpful in guiding prospective school owners. The Commission on Higher Education and the Department of Education stand to benefit from this paper in the guidance they provide to family-owned educational institutions. Banks and other financial institutions may find valuable ideas in it for the purpose of providing financial assistance to family-owned colleges and universities. Researchers in the field of educational management and administration may be able to extract related topics for future research from this study.

Keywords: administration practices, attributes, family-owned schools, success factors

Procedia PDF Downloads 246
365 Identification of the Antimicrobial Effect of Liquorice Extracts on Gram-Positive Bacteria: Determination of Minimum Inhibitory Concentration and Mechanism of Action Using a luxABCDE Reporter Strain

Authors: Madiha El Awamie, Catherine Rees

Abstract:

Natural preservatives have been used as alternatives to traditional chemical preservatives; however, a limited number have been commercially developed, and many remain to be investigated as sources of safer and more effective antimicrobials. In this study, we investigated the antimicrobial activity of an extract of Glycyrrhiza glabra (liquorice) that was provided as a waste material from the production of liquorice flavourings for the food industry, to establish whether it retained the expected antimicrobial activity and so could be used as a natural preservative. The liquorice extract was screened for evidence of growth inhibition against eight species of Gram-negative and Gram-positive bacteria, including Listeria monocytogenes, Listeria innocua, Staphylococcus aureus, Enterococcus faecalis and Bacillus subtilis. The Gram-negative bacteria tested included Pseudomonas aeruginosa, Escherichia coli and Salmonella typhimurium, but none of these were affected by the extract. In contrast, growth of all of the Gram-positive bacteria tested was inhibited, as monitored using optical density. However, parallel studies using viable counts indicated that the cells were not killed, meaning that the extract was bacteriostatic rather than bactericidal. The Minimum Inhibitory Concentration [MIC] and Minimum Bactericidal Concentration [MBC] of the extract were also determined, and a concentration of 50 µg ml-1 was found to have a strong bacteriostatic effect on Gram-positive bacteria. Microscopic analysis indicated changes in cell shape, suggesting that the cell wall was affected. In addition, the use of a reporter strain of Listeria transformed with the bioluminescence genes luxABCDE indicated that cell energy levels were reduced when treated with either 12.5 or 50 µg ml-1 of the extract, with the reduction in light output being proportional to the concentration of the extract used. Together these results suggest that the extract inhibits the growth of Gram-positive bacteria only, by damaging the cell wall and/or membrane.

Keywords: antibacterial activity, bioluminescence, Glycyrrhiza glabra, natural preservative

Procedia PDF Downloads 317
364 Assessment of Advanced Oxidation Process Applicability for Household Appliances Wastewater Treatment

Authors: Pelin Yılmaz Çetiner, Metin Mert İlgün, Nazlı Çetindağ, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır

Abstract:

Water scarcity is an inevitable problem affecting more and more people day by day. It is a worldwide crisis and a consequence of rapid population growth, urbanization, and overexploitation. Thus, solutions providing the reclamation of wastewater are the desired approach. Wastewater contains various substances such as organics, soaps and detergents, solvents, biological substances, and inorganic substances. The physical properties of wastewater differ according to its origin, such as commercial, domestic, or hospital use. Thus, the treatment strategy for this type of wastewater should be comprehensively investigated so that it is properly treated. The advanced oxidation process emerges as a promising method based on the formation of highly reactive hydroxyl radicals that oxidize organic pollutants. This process has an advantage over methods such as coagulation, flocculation, sedimentation, and filtration in that it does not cause any undesirable by-products. In the present study, it was aimed to investigate the applicability of the advanced oxidation process for the treatment of household appliances wastewater. For this purpose, laboratory studies were carried out to ensure that the radicals formed effectively attack the organic pollutants. The effect of the process parameters was then comprehensively studied using response surface methodology with a Box-Behnken experimental design. The final chemical oxygen demand (COD) was the main output used to identify the optimum point providing the expected COD removal. Linear alkylbenzene sulfonate (LAS), total dissolved solids (TDS), and color were measured at the optimum point providing the expected COD removal. Finally, the present study showed that the advanced oxidation process may be an efficient choice for treating household appliances wastewater, and identified the optimum process parameters providing the expected removal of COD.
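The response-surface step can be sketched as follows; this is our own minimal illustration with three coded factors and synthetic data, not the authors' actual design or measurements. A Box-Behnken design supports fitting a full second-order model by ordinary least squares:

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    # Columns: intercept, linear terms, two-factor interactions, squared terms
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Box-Behnken design for 3 coded factors: edge midpoints plus 3 center runs
runs = []
for i, j in combinations(range(3), 2):
    for a in (-1.0, 1.0):
        for b in (-1.0, 1.0):
            run = [0.0, 0.0, 0.0]
            run[i], run[j] = a, b
            runs.append(run)
X = np.array(runs + [[0.0, 0.0, 0.0]] * 3)  # 15 runs

# Synthetic COD-removal response generated from known coefficients
true_beta = np.array([80.0, 5.0, -3.0, 2.0, 1.0, 0.5, -0.5, -4.0, -2.0, -1.0])
y = quadratic_design_matrix(X) @ true_beta

# Least-squares fit recovers the coefficients; the fitted surface is then
# searched for the optimum (maximum predicted COD removal)
beta_hat, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```

In practice, the coded factors would map to real process variables (e.g., oxidant dose, pH, reaction time), and replicated center runs would estimate pure error.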

Keywords: advanced oxidation process, household appliances wastewater, modelling, water reuse

Procedia PDF Downloads 39
363 Quantitative Ethno-Botanical Analysis and Conservation Issues of Medicinal Flora from Alpine and Sub-Alpine, Hindukush Region of Pakistan

Authors: Gul Jan

Abstract:

This is the first quantitative ethno-botanical analysis of, and study of the conservation issues affecting, the medicinal flora of the alpine and sub-alpine Hindukush region of Pakistan. The objective of the study is to report and compare the uses, and to highlight the ethno-botanical significance, of medicinal plants for the treatment of various diseases. A total of 250 local informants (242 males and 8 females), including 10 local traditional healers, were interviewed. Information was collected through semi-structured interviews, then analyzed and compared using quantitative ethno-botanical indices such as the Jaccard index (JI), informant consensus factor (ICF), use value (UV), and relative frequency of citation (RFC). The survey indicated that 57 medicinal plants belonging to 43 families are used to treat various illnesses. The highest ICF is recorded for the digestive system (0.69), circulatory system (0.61), urinary tract system (0.53), and respiratory system (0.52). Use values indicated that Achillea millefolium (UV = 0.68), Aconitum violaceum (UV = 0.69), Valeriana jatamansi (UV = 0.63), and Berberis lycium (UV = 0.65) are the most used medicinal plant species in the region. In comparison with other studies, the highest similarity index recorded was JI 17.72, followed by 16.41. According to the DMR output, Pinus wallichiana ranked first due to its multipurpose uses among all species and was found to be the most threatened, with a high market value. Unwise use of natural assets, combined with unsuitable harvesting practices, has increased pressure on the plant species of the research region. The main issues contributing to the loss of natural diversity were found to be overgrazing, forest encroachment, hunting of wild animals, and collection of plants for fodder, medicine, and fuel wood, together with forest fires and invasive species, all of which negatively affect the natural resources. For sustainable utilization, in situ and ex situ conservation, skillful collecting, and reforestation projects may be the solution. Further extensive field-management research is required.
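The quantitative indices named above have standard definitions that are simple to compute. The sketch below is our own illustration with hypothetical counts (chosen so the digestive-category ICF comes out near the 0.69 reported), not the study's raw data:

```python
def use_value(use_reports, n_informants):
    # UV = total use reports cited for a species / number of informants
    return use_reports / n_informants

def relative_frequency_of_citation(citing_informants, n_informants):
    # RFC = number of informants citing the species / informants interviewed
    return citing_informants / n_informants

def informant_consensus_factor(n_use_reports, n_taxa):
    # ICF = (Nur - Nt) / (Nur - 1) for one ailment category, where Nur is
    # the number of use reports and Nt the number of taxa used
    return (n_use_reports - n_taxa) / (n_use_reports - 1)

# Hypothetical figures: 66 use reports spread over 21 taxa for digestive
# ailments give an ICF of about 0.69
icf_digestive = informant_consensus_factor(66, 21)
uv_example = use_value(170, 250)       # 170 use reports among 250 informants
rfc_example = relative_frequency_of_citation(120, 250)
```

A high ICF indicates that informants agree on a small set of taxa for that ailment category, which is why it is used to prioritize species for pharmacological follow-up.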

Keywords: quantitative analysis, conservations issues, medicinal flora, alpine and sub-alpine, Hindukush region

Procedia PDF Downloads 281
362 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Saleh Hussein Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOPs). Hence, a documented SOP for the geographical information system (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements; risk evaluation; and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, thanks to this process, GIS is ready to be integrated with other systems and to act as the source of data for all OETC users. Some decisions, such as issuing No Objection Certificates (NOCs) for excavation permits and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS), are now made on the basis of GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has also contributed to reducing missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, it is concluded that, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, CMMS

Procedia PDF Downloads 98
361 Modelling and Simulation of Natural Gas-Fired Power Plant Integrated to a CO2 Capture Plant

Authors: Ebuwa Osagie, Chet Biliyok, Yeung Hoi

Abstract:

The regeneration energy requirement, and ways to reduce it, is the main focus of most current CO2 capture research, and the post-combustion carbon capture (PCC) option is identified as the most suitable for natural gas-fired power plants. From current research and development (R&D) activities worldwide, two main areas are being examined in order to reduce the regeneration energy requirement of amine-based PCC, namely: (a) development of new solvents with better overall performance than 30 wt% monoethanolamine (MEA) aqueous solution, which is considered the baseline solvent for solvent-based PCC, and (b) integration of the PCC plant with the power plant. In scaling up a PCC pilot plant to the size required for a commercial-scale natural gas-fired power plant, process modelling and simulation are essential. In this work, an integrated process made up of a 482 MWe natural gas-fired power plant and an MEA-based PCC plant has been developed, validated, modelled, and simulated. The PCC plant has four absorber columns and a single stripper column; the modelling and simulation were performed with Aspen Plus® V8.4. The gas turbine, the heat recovery steam generator, and the steam cycle were modelled based on a 2010 US DOE report, while the MEA-based PCC plant was modelled as a rate-based process. The scaling of the amine plant was performed using a rate-based calculation, in preference to the equilibrium-based approach, for 90% CO2 capture. The power plant was integrated with the PCC plant in three ways: (i) the flue gas stream from the power plant is divided equally into four streams, each fed into one of the four absorbers in the PCC plant; (ii) steam drawn off from the IP/LP cross-over pipe in the steam cycle of the power plant is used to regenerate solvent in the reboiler; and (iii) condensate is returned from the reboiler to the power plant. The integration of the PCC plant with the NGCC plant resulted in a reduction of the power plant output by 73.56 MWe, and the net efficiency of the integrated system is reduced by 7.3 percentage points. A secondary aim of this study was the parametric studies performed to assess the impacts of natural gas on the overall performance of the integrated process, achieved through investigation of the capture efficiencies.
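As a quick consistency check (our own back-calculation, not stated in the abstract), if the 73.56 MWe output loss and the 7.3-point efficiency loss both refer to the same fuel heat input, that input and the baseline efficiency follow directly:

```python
# Reported figures from the abstract
gross_output_mwe = 482.0    # power plant output before integration
output_loss_mwe = 73.56     # output reduction due to PCC integration
efficiency_loss = 0.073     # net efficiency reduction (7.3 percentage points)

# If eta1 = P1/Q and eta2 = P2/Q share the fuel input Q, then
# eta1 - eta2 = (P1 - P2)/Q, so Q = output loss / efficiency loss
fuel_input_mwth = output_loss_mwe / efficiency_loss   # about 1008 MWth
base_efficiency = gross_output_mwe / fuel_input_mwth  # about 0.478
net_efficiency = (gross_output_mwe - output_loss_mwe) / fuel_input_mwth
```

The implied baseline efficiency of roughly 48% is in the typical range for an NGCC plant, which suggests the two reported penalty figures are mutually consistent.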

Keywords: natural gas-fired, power plant, MEA, CO2 capture, modelling, simulation

Procedia PDF Downloads 416
360 Desulfurization of Crude Oil Using Bacteria

Authors: Namratha Pai, K. Vasantharaj, K. Haribabu

Abstract:

Our team is developing an innovative, cost-effective biological technique to desulfurize crude oil. Sulphur is present in crude oil samples at 0.05%-13.95%, and its elimination by current industrial methods is expensive. Materials required: Alicyclobacillus acidoterrestris, potato dextrose agar, oxygen, pyrogallol, and an inert gas (nitrogen). Method adapted and proposed: 1) the growth and energy needs of the bacteria are studied; 2) compatibility with crude oil is checked; 3) the reaction rate of the bacteria is studied and optimized; 4) the reaction is developed by computer simulation; 5) the simulated work is tested by building the reactor. The method being developed uses the bacterium Alicyclobacillus acidoterrestris, an acidothermophilic, heterotrophic, soil-dwelling, aerobic sulfur bacterium. The bacteria are fed to the crude oil in a unique manner: they are coated onto potato dextrose agar beads, cultured for 24 hours (the growth time coincides with the time at which they begin reacting), and fed into the reactor. The beads are replenished with O2 by passing them through a jacket around the reactor that has an O2 supply; O2 cannot be supplied directly, as crude oil is flammable, hence this arrangement. The beads are made to move around based on the concept of a fluidized bed reactor. By controlling the velocity of the inert gas pumped, the beads are made to settle when exhausted of O2; they are then recycled through the jacket, where O2 is re-fed, and beads inside the ring substitute for the exhausted ones. The crude oil is maintained between 1 atm and 270 MPa and at 45°C, treated with tartaric acid (the pH favours bacterial growth), for optimum output. The bacteria, being of the oxidising type, react with the sulphur in the crude oil and liberate SO4^2- with no gas; the SO4^2- is absorbed into H2O. NaOH is fed in once the reaction is complete, and the beads are separated. The crude oil is thus separated from the SO4^2-, and thereby from sulphur, tartaric acid, and other acids, which are removed. Bio-corrosion is addressed by painting the internal walls (phenol-epoxy paints). Earlier methods used Pseudomonas and Rhodococcus species; these were found to be inefficient, time- and energy-consuming, and to reduce the fuel value, as they fed on the carbon skeleton.

Keywords: alicyclobacillus acidoterrestris, potato dextrose agar, fluidized bed reactor principle, reaction time for bacteria, compatibility with crude oil

Procedia PDF Downloads 291
359 Early Prediction of Diseases in a Cow for Cattle Industry

Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan

Abstract:

In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed. Different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life; similarly, advanced technologies have been developed in livestock and dairy farming to monitor dairy cows in various respects. Dairy cattle monitoring is crucial, as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies, as food demand is increasing with population growth. This highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes this very convenient for farmers, as it provides all the solutions under one roof. The cattle industry's productivity is boosted because any disease on a cattle farm is diagnosed early, based on the machine learning output received, and hence treated early. The learning models, which interpret the data collected in a centralized system, are already trained. Essentially, different algorithms are run on the dataset received to analyze milk quality and to track cows' health, location, and safety. The learning algorithm draws patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible, as the rate of error is minimized; as a result, milk productivity is increased. IoT with ML capability has brought a new phase to the cattle farming industry by increasing yields in the most cost-effective and time-saving manner.
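A minimal sketch of the kind of model described is shown below, using a nearest-centroid classifier on hypothetical sensor features (daily activity, rumination hours, body temperature). The data are synthetic and the feature set is our assumption, not the paper's dataset:

```python
import numpy as np

# Synthetic sensor readings per cow: [activity (h), rumination (h), temp (C)]
rng = np.random.default_rng(1)
healthy = rng.normal([8.0, 7.5, 38.5], 0.3, size=(50, 3))
sick = rng.normal([4.0, 4.0, 40.0], 0.3, size=(50, 3))

X = np.vstack([healthy, sick])
y = np.array([0] * 50 + [1] * 50)   # 0 = healthy, 1 = at risk

# Nearest-centroid classifier: a minimal stand-in for the ML models in the
# text; each class is summarized by its mean feature vector
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(samples):
    # Assign each sample to the class with the closest centroid
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

accuracy = (predict(X) == y).mean()
```

On real farm data the classes overlap far more than in this toy example, which is why the paper compares several algorithms rather than relying on a single simple rule.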

Keywords: IoT, machine learning, health care, dairy cows

Procedia PDF Downloads 32
358 Analysing a Practical Teamwork Assessment for Distance Education Students at an Australian University

Authors: Celeste Lawson

Abstract:

Learning to embrace and value teamwork assessment at university level is critical for students, as graduates enter a real-world working environment where teamwork is likely to occur virtually. Student disdain for teamwork exercises is an area often overlooked or disregarded by academics. This research explored the implementation of an online teamwork assessment approach at a regional Australian university with a significant cohort of Distance Education students. Students had disliked teamwork for three reasons: it was not relevant to their study, the grading was unfair amongst team members, and managing the task was challenging in a virtual environment. Teamwork assessment was modified so that the task was an authentic one that could occur in real-world practice; team selection was based on the task topic rather than being random; grading was based on the individual's contribution to the task; and students were taught virtual team management skills as part of the assessment. In this way, management of the team became an output of the task itself. Data was gathered over three years from student satisfaction surveys, failure rates, attrition figures, and unsolicited student comments. In one unit where this approach was adopted (Advanced Public Relations), student satisfaction increased from 3.6 (out of 5) in 2012 to 4.6 in 2016, with positive comments made about the teamwork approach. The attrition rate for another unit (Public Relations and the Media) fell from 20.7% in 2012 to 2.2% in 2015. In 2012, criticism of teamwork assessment made up 50% of negative student feedback in Public Relations and the Media. By 2015, following the successful implementation of the teamwork assessment approach, only 12.5% of negative comments on the student satisfaction survey were critical of teamwork, while 33% of positive comments related to a positive teamwork experience. In 2016, students explicitly nominated teamwork as the best part of this unit. The approach is transferable to other disciplines and was adopted by other academics within the institution, with similar results.

Keywords: assessment, distance education, teamwork, virtual

Procedia PDF Downloads 118
357 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Although it has low spatial resolution (it can only detect when a group of neurons fires at the same time), it is non-invasive, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded and then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because EEG is a non-invasive method, there are sources of noise that may affect the reliability of the signals. For instance, noise from the EEG equipment and the leads, and signals coming from the subject, such as heart activity or muscle movements, affect the signals detected by the electrodes; new techniques have, however, been developed to differentiate between this noise and the intended signals. Furthermore, an EEG device alone is not enough to analyze the data from the brain for use in a BCI application. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices. Even though invasive BCIs are needed for neurological conditions that require highly precise data, non-invasive BCIs such as EEG-based ones are used in many cases to help disabled people, or simply to ease people's lives by assisting with basic tasks. For example, EEG is used to detect an epileptic seizure before it occurs, so that a BCI device can then help prevent it. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.
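One of the simplest analysis steps in EEG-based BCIs is extracting band-power features (e.g., alpha 8-13 Hz, beta 13-30 Hz). The sketch below is our own illustration using a synthetic signal, not a specific BCI pipeline from the text:

```python
import numpy as np

def band_power(signal, fs, low, high):
    # Power in [low, high) Hz from the periodogram of the signal
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum()

# Synthetic one-second "EEG" trace dominated by a 10 Hz (alpha) rhythm,
# with a weaker 25 Hz (beta) component
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)

alpha_power = band_power(eeg, fs, 8, 13)
beta_power = band_power(eeg, fs, 13, 30)
```

In a real system these band powers, computed per electrode and time window, would form the feature vector passed to the classification algorithm that maps brain activity to commands.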

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 47
356 Voluntary Work Monetary Value and Cost-Benefit Analysis with 'Value Audit and Voluntary Investment' Technique: Case Study of Yazd Red Crescent Society Youth Members Voluntary Work in Health and Safety Plan for New Year's Passengers

Authors: Hamed Seddighi Khavidak

Abstract:

Voluntary work has many economic and social benefits for a country, but its economic value is often ignored precisely because the work is voluntary. The aim of this study is to review methods for determining the monetary value of voluntary work, comparing the opportunity cost method and the replacement cost method both in theory and in practice. Besides monetary value, this study presents a cost-benefit analysis of the New Year health and safety plan conducted by young volunteers of the Red Crescent Society of Iran. Method: We discuss eight methods for the monetary valuation of voluntary work: the Alternative-Employment Wage Approach, Leisure-Adjusted OCA, Volunteer Judgment OCA, Replacement Wage Approach, Volunteer Judgment RWA, Supervisor Judgment RWA, Cost of Counterpart Goods and Services, and Beneficiary Judgment. For the cost-benefit analysis, we draw on the 'Volunteer Investment and Value Audit' (VIVA) technique, which is used widely in voluntary organizations such as the International Federation of Red Cross and Red Crescent Societies. Findings: Using the replacement cost approach, the voluntary work of 1,034 youth volunteers was valued at 938,000,000 Riyals; using the Replacement Wage Approach, it was valued at 2,268,713,232 Riyals. The Yazd Red Crescent Society spent 212,800,000 Riyals on food and other costs for these volunteers. Discussion and conclusion: The VIVA rate showed that for every Riyal the Red Crescent Society invested in the health and safety of New Year's travelers in its volunteer project, four Riyals returned; using the wage replacement approach, 11 Riyals returned. Therefore, the New Year's travelers health and safety project was successful, and economically it was worthwhile for the Red Crescent Society because the output was much larger than the input costs.
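The VIVA rate is simply the ratio of the monetary value of volunteer output to the organization's investment in its volunteers. Using the figures reported in the abstract:

```python
def viva_ratio(value_of_volunteer_output, organizational_investment):
    # VIVA rate: value returned per unit of currency invested in volunteers
    return value_of_volunteer_output / organizational_investment

# Figures reported in the study (Riyals)
replacement_cost_value = 938_000_000    # replacement cost approach
replacement_wage_value = 2_268_713_232  # replacement wage approach
society_costs = 212_800_000             # Red Crescent outlay on the plan

rate_cost = viva_ratio(replacement_cost_value, society_costs)  # about 4.4
rate_wage = viva_ratio(replacement_wage_value, society_costs)  # about 10.7
```

Rounded, these reproduce the "four Riyals" and "11 Riyals" returns per Riyal invested that the abstract reports for the two valuation approaches.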

Keywords: voluntary work, monetary value, youth, red crescent society

Procedia PDF Downloads 189
355 AI for Efficient Geothermal Exploration and Utilization

Authors: Velimir "monty" Vesselinov, Trais Kliplhuis, Hope Jasperson

Abstract:

Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, geology, etc. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a Science-Informed Machine Learning (SIML) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) built from simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.

Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal

Procedia PDF Downloads 14
354 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

There is an increasing need to accelerate the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed into the system in particulate form and dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up on the particle surfaces. A subsequent step takes the charged particles to a delimited zone in the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions that differ in their protein content: one is a fiber-rich, low-protein fraction, while the other has a high-protein, low-fiber composition. Prior to testing, the materials undergo a milling process, and some samples are stored under controlled humidity conditions; in this way, the influence of both particle size and humidity content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used to perform the experiments, the triboelectric separation process was successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/sec to kg/hour. Triboelectrostatic separation technology opens huge potential for the exploitation of so-far underutilized alternative protein sources: agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.
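When a feed splits into two fractions, the protein content of one fraction fixes that of the other by mass balance, which is a convenient consistency check on measured enrichment. The sketch below is our own illustration with hypothetical numbers, not the paper's measurements:

```python
def coarse_fraction_by_balance(feed_mass, feed_protein, fine_mass, fine_protein):
    # Protein mass balance over a two-way split:
    # feed protein mass = fine-fraction protein mass + coarse-fraction protein mass
    coarse_mass = feed_mass - fine_mass
    coarse_protein = (feed_mass * feed_protein
                      - fine_mass * fine_protein) / coarse_mass
    return coarse_mass, coarse_protein

# Hypothetical run: 100 g of lupine meal at 40% protein splits into a 45 g
# protein-enriched fraction at 55% protein; the fiber-rich remainder follows
coarse_mass, coarse_protein = coarse_fraction_by_balance(100.0, 0.40, 45.0, 0.55)
```

Here the fiber-rich fraction must contain about 27.7% protein, so measured fraction compositions far from the balance would indicate sampling or analysis error.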

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 162
353 Effect of Urea Deep Placement Technology Adoption on the Production Frontier: Evidence from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, William Adzawla

Abstract:

Rice is an important staple crop, with current demand higher than domestic supply in Ghana. This has led to a high and unfavourable import bill. Therefore, recent policies and interventions in the agricultural sub-sector aim at promoting various improved agricultural technologies in order to improve domestic production and reduce the importation of rice. In this study, we examined the effect of the adoption of Urea Deep Placement (UDP) technology by rice farmers on the position of the production frontier. The study involved 200 farmers selected through a multi-stage sampling technique in the Northern region of Ghana. A Cobb-Douglas stochastic frontier model was fitted. The results showed that the adoption of UDP technology shifts the output frontier outward and also moves farmers closer to the frontier. Farmers were also operating under diminishing returns to scale, which calls for redress. Other factors that significantly influenced rice production were farm size, labour, use of certified seeds, and NPK fertilizer. Although there was room for improvement, the farmers were highly efficient (92%) compared to previous studies. Farmers' efficiency was improved through increased education, household size, experience, access to credit, and lack of extension service provision by MoFA. The study recommends the revision of Ghana's agricultural policy to include the UDP technology. Agricultural extension officers of the Ministry of Food and Agriculture (MoFA) should be trained on the UDP technology to support IFDC's drive to improve adoption by rice farmers. Rice farmers are also encouraged to expand their farmland, improve plant population, and increase fertilizer usage to improve yields. Mechanisms through which credit can be made easily accessible and effectively utilised should be identified and promoted.
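The frontier estimation described above can be sketched as follows. This is a minimal illustration on simulated data, assuming the standard normal/half-normal Cobb-Douglas stochastic frontier (Aigner-Lovell-Schmidt form); the input names, coefficients, and sample size are hypothetical, not the study's estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated Cobb-Douglas data: ln(y) = b0 + b1*ln(land) + b2*ln(labour) + v - u
n = 200
X = rng.uniform(0.5, 2.0, size=(n, 2))           # log inputs (land, labour)
v = rng.normal(0, 0.1, n)                         # symmetric noise
u = np.abs(rng.normal(0, 0.3, n))                 # half-normal inefficiency
y = 1.0 + 0.6 * X[:, 0] + 0.3 * X[:, 1] + v - u   # log output

def neg_loglik(theta):
    b0, b1, b2, ln_sv, ln_su = theta
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma = np.hypot(sv, su)                      # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = y - (b0 + b1 * X[:, 0] + b2 * X[:, 1])
    # Normal/half-normal frontier log-likelihood (Aigner-Lovell-Schmidt)
    ll = (np.log(2) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.5, 0.5, 0.5, -1.0, -1.0],
               method="Nelder-Mead", options={"maxiter": 5000})
b0, b1, b2 = res.x[:3]
rts = b1 + b2   # returns to scale: < 1 indicates diminishing returns
```

The sum of the input elasticities gives the returns to scale that the paper discusses; values below one correspond to the diminishing returns the authors report.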

Keywords: efficiency, rice farmers, stochastic frontier, UDP technology

Procedia PDF Downloads 387
352 The Inclusion of the Cabbage Waste in Buffalo Ration Made of Sugarcane Waste and Its Effect on Characteristics of the Silage

Authors: Adrizal, Irsan Ryanto, Sri Juwita, Adika Sugara, Tino Bapirco

Abstract:

The objective of the research was to study the influence of including cabbage waste in a buffalo ration based on sugarcane waste on the feed formula and the characteristics of the complete feed silage. The research was carried out in two stages: feed formulation and an experiment on making complete feed silage. Feed formulation was done by linear programming; the input data were the prices of the feedstuffs, their nutrient contents, and the nutrient requirements of the ration, while the output was the proportion of each feedstuff and the price of the complete feed. The complete feed silage experiment followed a 4 x 4 completely randomized design, with four inclusion levels of cabbage waste, i.e., 0% (T1), 5% (T2), 10% (T3) and 15% (T4), and four replications. The resulting formula for T1 was cabbage waste (0%), sugarcane top (17.9%), bagasse (33.3%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (2.7%), palm kernel cake (20.0%), corn meal (9.1%), bone meal (1.5%) and salt (0.5%). The formula for T2 was cabbage waste (5.0%), sugarcane top (1.7%), bagasse (45.2%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (3.6%), palm kernel cake (20.0%), corn meal (7.5%), bone meal (1.5%) and salt (0.5%). The formula for T3 was cabbage waste (10.0%), sugarcane top (0%), bagasse (45.3%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (3.8%), palm kernel cake (20.0%), corn meal (3.9%), bone meal (1.5%) and salt (0.5%). The formula for T4 was cabbage waste (15.0%), sugarcane top (0%), bagasse (44.1%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (3.9%), palm kernel cake (20.0%), corn meal (0%), bone meal (1.5%) and salt (0.5%). Increasing the inclusion level of cabbage waste reduced the cost of the ration: the ration costs (IDR/kg on a DM basis) were 1442, 1367, 1333 and 1300, respectively. The ration formulas did not significantly (P > 0.05) influence the fungal colonies, smell, texture or color of the complete ration silage, but the pH increased significantly (P < 0.05).
It is concluded that the inclusion of cabbage waste can minimize the cost of the buffalo ration without decreasing the silage quality of the complete feed.
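The least-cost formulation stage can be sketched with a standard linear program. The ingredient set, prices, and nutrient figures below are hypothetical placeholders, not the paper's input data; what is being illustrated is the structure: minimise cost subject to nutrient requirements and a 100% inclusion constraint.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative least-cost ration. Costs (IDR/kg DM), crude protein fractions,
# and metabolisable energy values are assumed for demonstration only.
ingredients = ["bagasse", "molasses", "rice bran", "palm kernel cake", "corn meal"]
cost    = np.array([500, 1500, 1800, 2500, 3000])       # IDR/kg DM
protein = np.array([0.02, 0.04, 0.13, 0.16, 0.09])      # crude protein fraction
energy  = np.array([7.0, 12.0, 11.0, 11.5, 13.5])       # MJ ME/kg DM

# Constraints: ration CP >= 10%, ME >= 9 MJ/kg (as >= constraints, negated
# for linprog's A_ub x <= b_ub form), proportions sum to 1, each capped at 50%.
A_ub = np.vstack([-protein, -energy])
b_ub = np.array([-0.10, -9.0])
A_eq = np.ones((1, len(cost)))
b_eq = np.array([1.0])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 0.5)] * len(cost))
formula = dict(zip(ingredients, res.x.round(3)))   # proportion of each feedstuff
```

The optimal `res.x` gives the proportion of each feedstuff and `cost @ res.x` the price of the complete feed, mirroring the inputs and outputs the abstract describes for its LP step.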

Keywords: buffalo, cabbage, complete feed, silage characteristics, sugarcane waste

Procedia PDF Downloads 228
351 Compact 3-D Co-Planar Waveguide Fed Dual-Port Ultrawideband-Multiple-Input and Multiple-Output Antenna with WLAN Band-Notched Characteristics

Authors: Asim Quddus

Abstract:

A miniaturized three-dimensional coplanar waveguide (CPW) fed two-port MIMO antenna, exhibiting high isolation and WLAN band-notched characteristics, is presented in this paper for ultrawideband (UWB) communication applications. The microstrip patch antenna operates as a single UWB antenna element. The proposed design is a cuboid-shaped structure with a compact size of 35 x 27 x 45 mm³. The radiating and decoupling structures are placed around a cuboidal polystyrene sheet. The radiators are 27 mm apart, placed face-to-face in the vertical direction, and the decoupling structure is placed on the side walls of the polystyrene. The antenna consists of an oval-shaped radiating patch, and a rectangular structure with filleted edges is placed on the ground plane to enhance the bandwidth. The proposed antenna exhibits a good impedance match (S11 ≤ -10 dB) over the 2 GHz – 10.6 GHz frequency band. A circular slotted structure on the substrate, mounted on the side walls of the polystyrene, serves as the decoupling structure that enhances the isolation between antenna elements. Moreover, to achieve immunity from WLAN band interference, a modified inverted-crescent-shaped slot is etched on the radiating patches to obtain band-rejection characteristics over the 4.8 GHz – 5.2 GHz WLAN band. The proposed decoupling structure provides isolation better than 15 dB over the desired UWB spectrum. The envelope correlation coefficient (ECC) and gain of the MIMO antenna are analyzed as well. Finite Element Method (FEM) simulations are carried out in the Ansys High Frequency Structure Simulator (HFSS). The antenna is realized on a Rogers RT/duroid 5880 substrate with a thickness of 1 mm and relative permittivity εr = 2.2. The proposed antenna achieves stable omnidirectional radiation patterns while providing rejection in the desired WLAN band.
The S-parameters as well as MIMO parameters such as the ECC are analyzed, and the results show conclusively that the design is suitable for portable MIMO-UWB applications.
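The ECC mentioned above is commonly computed directly from the S-parameters. A minimal sketch using the standard Blanch et al. closed-form expression, which assumes lossless antennas; the example port values are illustrative, not the paper's simulated results.

```python
import numpy as np

def ecc_from_s(S11, S12, S21, S22):
    """Envelope correlation coefficient of a two-port antenna from
    complex S-parameters (Blanch et al. formula, lossless assumption)."""
    num = abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
    den = ((1 - abs(S11) ** 2 - abs(S21) ** 2) *
           (1 - abs(S22) ** 2 - abs(S12) ** 2))
    return num / den

# Illustrative values: -15 dB return loss, -20 dB coupling at both ports
s11 = s22 = 10 ** (-15 / 20)       # linear magnitude from dB
s12 = s21 = 10 ** (-20 / 20)
ecc = ecc_from_s(s11 + 0j, s12 + 0j, s21 + 0j, s22 + 0j)
```

For well-matched, well-isolated ports the ECC comes out far below the 0.5 threshold usually quoted for acceptable MIMO diversity performance.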

Keywords: 3-D antenna, band-notch, MIMO, UWB

Procedia PDF Downloads 279
350 Copula Autoregressive Methodology for Simulation of Solar Irradiance and Air Temperature Time Series for Solar Energy Forecasting

Authors: Andres F. Ramirez, Carlos F. Valencia

Abstract:

The increasing interest in renewable energy strategies and the push to diminish the use of carbon-related energy sources have encouraged the development of novel strategies for integrating solar energy into the electricity network. Correct inclusion of the fluctuating energy output of a photovoltaic (PV) system into an electric grid requires improvements in the forecasting and simulation methodologies for solar energy potential, and an understanding not only of the mean value of the series but of the underlying stochastic process. We present a methodology for synthetic generation of bivariate solar irradiance (shortwave flux) and air temperature time series based on copula functions, which represent the cross-dependence and temporal structure of the data. We explore the advantages of this nonlinear time series method over traditional approaches that transform the data to normal distributions as an intermediate step. The use of copulas gives flexibility to represent the serial variability of the real data in the simulation and allows more control over the desired properties of the data. We use discrete zero-mass density distributions to capture the nature of solar irradiance, alongside vector generalized linear models for the time-dependent distributions of the bivariate series. We found that the copula autoregressive methodology, including the zero-mass characteristics of the solar irradiance time series, generates a significant improvement over state-of-the-art strategies. These results will help to better understand the fluctuating nature of solar energy, the underlying stochastic process, and the potential of integrating a photovoltaic (PV) generating system into a country's electricity network.
Experimental analysis and application to real data substantiate the usage and convenience of the proposed methodology for forecasting solar irradiance time series and solar energy across the northern hemisphere, southern hemisphere, and equatorial zones.
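The generation scheme can be sketched as follows, assuming for illustration a Gaussian copula with AR(1) latent dynamics, a normal margin for temperature, and a zero-inflated gamma margin for irradiance. The paper's actual method uses vector generalized linear models for the time-dependent distributions; all parameter values below are hypothetical.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(1)
T, phi, rho = 1000, 0.8, 0.6   # series length, AR(1) persistence, cross-dependence

# Latent Gaussian AR(1) pair with cross-correlated innovations,
# scaled so each latent series has unit stationary variance.
cov = [[1.0, rho], [rho, 1.0]]
eps = rng.multivariate_normal([0, 0], cov, size=T) * np.sqrt(1 - phi ** 2)
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = phi * z[t - 1] + eps[t]

u = norm.cdf(z)                            # copula step: uniform margins, AR dependence
temp = norm.ppf(u[:, 1], loc=20, scale=5)  # air temperature margin (deg C)

# Zero-mass irradiance margin: exact zeros below p0 (night), gamma above
p0 = 0.4
irr = np.where(u[:, 0] < p0, 0.0,
               gamma.ppf((u[:, 0] - p0) / (1 - p0), a=2.0, scale=150.0))
```

The zero-mass component reproduces the exact zeros of night-time irradiance while the shared latent process carries both serial correlation and the irradiance-temperature cross-dependence.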

Keywords: copula autoregressive, solar irradiance forecasting, solar energy forecasting, time series generation

Procedia PDF Downloads 295
349 Cilubaba: An Agriculture-Based Education Tool through Congklak Traditional Game as an Introduction of Home Garden for Children in Cibanteng, Bogor

Authors: Yoni Elviandri, Vivi Fitriyanti, Agung Surya Wijaya, Suryani Humayyah, Muhammad Alif Azizi

Abstract:

The massive growth of computing power and internet access is marked by audiovisual and computer games, known as electronic games; one example is online games. These games can be found everywhere in Indonesia, in the cities and even in the villages. Online games have become popular across various layers of the community, including elementary school students. As online games spread, traditional games gradually fade away and are even thought of as old-fashioned. On the contrary, traditional games have higher educational value, teaching patience, honesty, integrity and togetherness, values that cannot be found in online games, which tend toward individualism. A new educational tool is therefore needed that provides a convenient, safe and fun place for children to play while still carrying educational value. Cilubaba is one such tool: an agriculture-based playground, planned both to entertain children and to introduce agriculture to them. One of its games is Congklak, a well-known traditional game of the 1990s. Congklak serves here as an agriculture-based traditional game that introduces the home garden to children. The aims of Cilubaba are to protect the nation's cultural inheritance through the Congklak traditional game, to introduce agriculture to children through the methods of the game, and to explain the advantages of a healthy home garden to children. The expected outcome is that children gain a good understanding of agriculture, come to love it, create aesthetic home gardens, and make better use of the home garden to support the availability of various edible plants in productive and healthy households.
The method proposed in this Student Creative Program in Society Service is the Participatory Rural Appraisal (PRA) method.

Keywords: Cilubaba, Congklak, traditional game, agricultural-based playground

Procedia PDF Downloads 415
348 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products

Authors: Maciej Jedrzejczyk, Karolina Marzantowicz

Abstract:

The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce a volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insight into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used in this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient, and scalable Smart Grid network, bridging the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New Smart Grid platforms achieving measurable efficiencies will allow the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.

Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids

Procedia PDF Downloads 272
347 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques to support social work interventions and can inform practitioners' decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques comprise a set of generalizable, data-driven algorithms: rules and solutions are derived by examining data, based on the patterns present within a data set. In other words, the goal of machine learning is to teach computers through 'examples', training on data to test specific hypotheses and predict an outcome from a given scenario, improving with experience. Machine learning can be classified into two general categories depending on the nature of the problem being tackled. First, supervised learning involves a dataset whose outputs are already known; supervised learning problems are categorized into regression problems, which predict quantitative outcomes using a continuous function, and classification problems, which predict outcomes from discrete qualitative variables. Second, unsupervised learning works with unlabeled data to discover structure within it. For social work research, machine learning generates predictions as a key element for improving interventions on complex social issues, providing better inference from data and more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on gender, age, grade, type of school, and self-esteem sentiments.
The model predicts with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help promote programs to prevent cyberbullying at schools and improve evidence-based practice.
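The classification step can be sketched as follows. The survey data are not reproduced here, so the snippet trains on synthetic stand-in features with assumed effect sizes; only the pipeline (fit a logistic regression, score accuracy on held-out data) mirrors the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
# Synthetic stand-ins for the survey predictors (hypothetical effect sizes)
gender = rng.integers(0, 2, n)
age = rng.integers(12, 18, n)
school = rng.integers(0, 3, n)              # type of school (coded)
esteem = rng.normal(0, 1, n)                # standardised self-esteem score
logit = -0.5 + 0.3 * gender - 0.8 * esteem + 0.05 * (age - 15)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = cyberbullied

X = np.column_stack([gender, age, school, esteem])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))   # held-out accuracy
```

Reporting accuracy on a held-out split, as done here, is what makes the paper's 59.8% figure interpretable as expected performance on new cases rather than fit to the training sample.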

Keywords: cyberbullying, evidence based practice, machine learning, social work research

Procedia PDF Downloads 145
346 Development of Transmission and Packaging for Parallel Hybrid Light Commercial Vehicle

Authors: Vivek Thorat, Suhasini Desai

Abstract:

The hybrid electric vehicle is widely accepted as a promising short- to mid-term technical solution due to noticeably improved efficiency and low emissions at competitive cost. Retrofitting hybrid components into a conventional vehicle to achieve better performance is the best solution so far, but retrofitting usually involves major, costly modifications. This paper focuses on the development of a P3x rear-wheel-drive parallel hybrid electric Light Commercial Vehicle (LCV) prototype requiring only minimal, low-cost modifications. This diesel hybrid LCV differs from other hybrids in its powertrain: the additional powertrain consists of a continuous-contact helical gear pair followed by a chain and sprocket coupling for the traction motor, designed for the intended high-speed application. This work covers the design, development, and packaging of this parallel diesel-electric vehicle, which builds on multimode hybrid advantages. To demonstrate the practical applicability of this transmission in the P3x hybrid configuration, one concept prototype vehicle has been built integrating the transmission. The hybrid system makes it easy to retrofit an existing vehicle because minimal changes to the vehicle chassis are required. The system is designed for five main modes of operation: engine-only mode, electric-only mode, hybrid power mode, engine-charging-battery mode, and regenerative braking mode. Driving performance, fuel economy, and emissions were measured and the results analyzed over a given drive cycle. Experimental testing of the first prototype was carried out on a chassis dynamometer using the MIDC driving cycle. The results showed that the prototype hybrid vehicle is about 27% faster than the equivalent conventional vehicle.
Fuel economy is improved by approximately 20-25% compared to the conventional powertrain.

Keywords: P3x configuration, LCV, hybrid electric vehicle, ROMAX, transmission

Procedia PDF Downloads 226
345 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic for cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented which consistently considers various feedback mechanisms in the heart without resorting to the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and the CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders of the internal cardiac electrical signalling are investigated by recreating mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 88
344 The Influence of Mycelium Species and Incubation Protocols on Heat and Moisture Transfer Properties of Mycelium-Based Composites

Authors: Daniel Monsalve, Takafumi Noguchi

Abstract:

Mycelium-based composites (MBC) are made by growing living mycelium on lignocellulosic fibres to create a porous composite material that can be lightweight and biodegradable, making it suitable as sustainable thermal insulation. MBC can thus help reduce material extraction while improving the energy efficiency of buildings, especially when agricultural by-products are used. However, as MBC are hygroscopic materials, moisture can reduce their thermal insulation efficiency. It is known that surface growth, or a 'mycelium skin', can form a natural coating owing to the hydrophobic properties of the mycelium cell wall. This research therefore aims to biofabricate a homogeneous mycelium skin and measure its influence on the final composite by testing material properties such as thermal conductivity, vapour permeability, and water absorption by partial immersion over 24 hours. In addition, porosity, surface morphology, and chemical composition were analyzed. The white-rot fungi Pleurotus ostreatus, Ganoderma lucidum, and Trametes versicolor were grown on 10 mm hemp fibres (Cannabis sativa), and three different biofabrication protocols were used during incubation, varying the time and surface treatment, including the addition of pre-colonised sawdust. The results indicate that longer colonisation time can reduce density, which favourably impacts thermal conductivity but negatively affects vapour and liquid water control. Additionally, different fungi exhibit different resistance to prolonged water absorption, and due to osmotic sensitivity, the mycelium skin may also diminish moisture control. Finally, a collapse of the mycelium network after water immersion was observed through SEM, showing how the microstructure is affected; this, too, depends on the fungal species and the type of skin achieved.
These results help in understanding the differences and limitations of three of the most common species used for MBC fabrication and show that precise engineering is needed to effectively control the material output.

Keywords: mycelium, thermal conductivity, vapor permeability, water absorption

Procedia PDF Downloads 13
343 The Role of Zakat on Sustainable Economic Development by Rumah Zakat

Authors: Selamat Muliadi

Abstract:

This study aimed to explain conceptually the role of Zakat in sustainable economic development at Rumah Zakat. Rumah Zakat is a philanthropic institution that manages zakat and other social funds through community empowerment programs. In running these programs, economic empowerment and socio-health services are designed for the recipients. Rumah Zakat's work connects with the Sustainable Development Goals (SDGs) by helping impoverished recipients economically and socially, an important agenda that the government has incorporated into national and regional development. The primary goal of Zakat in sustainable economic development is not limited to economic variables; grounded in Islamic principles, it has comprehensive characteristics covering moral, material, spiritual, and social aspects. In other words, sustainable economic development is closely related to improving people's living standards (mustahiq). The findings provide empirical evidence of the positive contribution and effectiveness of zakat targeting in reducing poverty and improving welfare through the management of zakat. The purpose of this study was to identify the role of Zakat in sustainable economic development as applied by Rumah Zakat. This study used a descriptive method and qualitative analysis. The data source was secondary data collected from documents and texts related to the research topic, be they books, articles, newspapers, journals, or others. The results showed that the role of zakat in sustainable economic development at Rumah Zakat has been quite good and in accordance with the principles of Islamic economics. Rumah Zakat programs are adapted to support the intended development.
The implementation of the productive programs has been aligned with four of the Sustainable Development Goals, i.e., Senyum Juara (Quality Education), Senyum Lestari (Clean Water and Sanitation), Senyum Mandiri (Entrepreneur Program) and Senyum Sehat (Free Maternity Clinic). The performance of zakat in the sustainable economic empowerment of the community at Rumah Zakat takes into account dimensions such as input, process, output, and outcome.

Keywords: Zakat, social welfare, sustainable economic development, charity

Procedia PDF Downloads 111
342 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies are becoming continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur during machining, deteriorating precision and leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters, for example in terms of depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector, subjected to milling forces, is controlled through an inverse kinematics scheme while the positions of its joints are controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results show the evolution of the cutting forces with and without structural deformation, together with the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented.
Considering link flexibility revealed an increase in the magnitude of the cutting forces. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.
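The joint-level control loop described above can be sketched with a simple simulation. The first-order motor model and PI gains below are assumptions for illustration, not the prototype's identified transfer functions; the point is that integral action drives a joint's tracking error to zero on a step command.

```python
import numpy as np

# Assumed first-order joint servo: G(s) = K / (tau*s + 1), under PI control,
# integrated with forward Euler. All numerical values are illustrative.
K, tau = 2.0, 0.05           # motor gain and time constant (s)
kp, ki = 5.0, 40.0           # proportional and integral gains
dt, T = 1e-4, 1.0            # time step and simulation horizon (s)

y, xi = 0.0, 0.0             # motor output, integral of tracking error
history = np.empty(int(T / dt))
for k in range(history.size):
    e = 1.0 - y              # error against a unit step reference
    xi += e * dt             # integrator state
    u = kp * e + ki * xi     # PI control law
    y += dt * (-y + K * u) / tau   # Euler step of the motor dynamics
    history[k] = y
```

With these gains the closed-loop poles are real and stable, so the response settles to the reference without oscillation; in the paper this tuning is done per joint from the computed servomotor transfer functions.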

Keywords: control, milling, multibody, robotic, simulation

Procedia PDF Downloads 227
341 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (scientific name Zea mays), is a widely produced agricultural product in the food chain and many other agricultural industries. Corn is highly adaptable: it comes in many different types, is employed in many industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops, and in developing nations like Ethiopia, small-scale maize farming may be a household's only source of food. The country's requirement for this crop is therefore very high, while the crop's productivity remains low for a variety of reasons. The most damaging factor contributing to this imbalance between supply and demand is maize disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study aids the early detection of such diseases and supports farmers during cultivation, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning), a feature that mimics how the human brain digests data; its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning architecture widely used for image classification, object detection, face recognition, and related problems.
This study uses a CNN as the state-of-the-art approach to detect maize diseases from photographs of maize leaves taken with a mobile phone.
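A minimal sketch of the kind of CNN classifier described, written in PyTorch for illustration; the layer sizes, 128x128 RGB input resolution, and three-class output (healthy, northern leaf blight, cercospora leaf spot) are assumptions, since the abstract does not fix an architecture. Training on labelled leaf photographs would follow with a standard cross-entropy loop.

```python
import torch
import torch.nn as nn

class MaizeCNN(nn.Module):
    """Small convolutional classifier for maize leaf images (illustrative)."""
    def __init__(self, n_classes=3):
        super().__init__()
        # Three conv blocks halve the spatial size each time: 128 -> 64 -> 32 -> 16
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),       # one logit per disease class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = MaizeCNN()
logits = model(torch.randn(4, 3, 128, 128))   # a batch of 4 leaf photos
```

For mobile deployment, a trained model of this kind would typically be exported (e.g., to a mobile-friendly runtime) so inference can run on the phone photographs directly.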

Keywords: CNN, Zea mays, leaf blight, cercospora leaf spot

Procedia PDF Downloads 52
340 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception

Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut

Abstract:

What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate as a result of the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations to the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly-presented (1000 ms or 500 ms) computer-generated visual dynamic events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or were seen in the reverse direction: parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. 
To ensure that attention was deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, duration reproduction was significantly longer in the high-to-low entropy condition than in the low-to-high entropy condition (p = .03). These preliminary data suggest the presence of a neural mechanism that detects entropy, whose output is used by other processes to construct our perception of the direction of time, or time’s arrow.
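The reported condition comparison amounts to a paired-samples analysis of reproduced durations. A minimal sketch is below; the participant data are entirely hypothetical (the abstract reports only the p-value, not the reproductions themselves):

```python
import statistics

# Hypothetical reproduced durations in seconds for a 1000 ms stimulus,
# one pair per participant: (low-to-high entropy, high-to-low entropy).
pairs = [(0.91, 1.02), (0.88, 0.97), (1.01, 1.08), (0.95, 1.04),
         (0.86, 0.93), (0.99, 1.10), (0.92, 0.99), (0.90, 1.01)]

# Per-participant difference: high-to-low minus low-to-high condition.
diffs = [hl - lh for lh, hl in pairs]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
n = len(diffs)

# Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)).
t_stat = mean_diff / (sd_diff / n ** 0.5)
print(f"mean difference = {mean_diff * 1000:.0f} ms, t({n - 1}) = {t_stat:.2f}")
```

A positive mean difference here corresponds to the paper's finding that high-to-low entropy events are reproduced as longer.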

Keywords: time perception, entropy, temporal illusions, duration perception

Procedia PDF Downloads 142
339 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow

Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam

Abstract:

Studies on highway safety are urgently needed, as over 400 lives are lost every day in India due to road crashes. To evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and its relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on crash severity. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway (straight or curved section), time of day, driveway density, presence of a median, median openings, gradient, operating speed, and annual average daily traffic. These variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and negative binomial regression. The statistical models showed that the horizontal alignment components, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, both fatal and injury crashes. Further, annual average daily traffic has a stronger effect on severity than the other variables, and the contribution of the horizontal alignment components to crash severity is also significant. The logit models predict crashes better than the negative binomial regression models. The results of the study will help transport planners to consider these aspects at the planning stage itself for highways operated under heterogeneous traffic flow conditions.
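The logit formulation used for severity can be sketched as follows; the coefficient values and variable coding here are hypothetical placeholders, not the fitted values from the study:

```python
import math

# Illustrative binary logit for crash severity (fatal vs. injury).
# All coefficients below are hypothetical, for demonstration only.
coef = {
    "curve_section": 0.45,     # 1 if the crash occurred on a horizontal curve
    "night": 0.60,             # 1 if time of day is night
    "driveway_density": 0.08,  # driveways per km
    "operating_speed": 0.03,   # km/h above a reference speed
    "aadt_thousands": 0.05,    # annual average daily traffic / 1000
}
intercept = -2.1

def p_fatal(x: dict) -> float:
    """Logit link: P(fatal) = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(coef[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

# Crash on a curve at night, 5 driveways/km, 10 km/h over reference, AADT 20,000.
crash = {"curve_section": 1, "night": 1, "driveway_density": 5,
         "operating_speed": 10, "aadt_thousands": 20}
print(f"P(fatal) = {p_fatal(crash):.2f}")
```

In an actual analysis the coefficients would be estimated by maximum likelihood from the crash records, and their signs and significance would indicate which factors drive severity.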

Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety

Procedia PDF Downloads 267
338 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer, and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a protection length at the downstream side of the structure sufficient to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases. For each case, the length of protection and the volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover, and the weight distribution over the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the population size required; the minimum value of this parameter that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
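The coupling described above, a genetic algorithm searching over decision variables while a trained ANN supplies cheap objective evaluations, can be sketched minimally as below. The surrogate function, variable bounds, and GA settings are all hypothetical stand-ins (the study's ANN, its six decision variables, and its MATLAB implementation are not reproduced here):

```python
import random

random.seed(1)

# Toy surrogate standing in for the trained ANN: maps (cutoff length,
# foundation length, protection length) to a weighted cost. Hypothetical.
def surrogate_cost(x):
    cutoff, foundation, protection = x
    return (cutoff - 4.0) ** 2 + (foundation - 12.0) ** 2 + (protection - 8.0) ** 2

# Hypothetical bounds (m) on each decision variable.
BOUNDS = [(1, 8), (6, 20), (3, 15)]

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(x, rate=0.2):
    # Gaussian perturbation per gene, clamped back into bounds.
    return [min(hi, max(lo, g + random.gauss(0, 0.5))) if random.random() < rate else g
            for g, (lo, hi) in zip(x, BOUNDS)]

def crossover(a, b):
    # Single-point crossover at a random position.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [random_individual() for _ in range(40)]
for _ in range(60):
    pop.sort(key=surrogate_cost)
    parents = pop[:10]  # truncation selection; parents carried over (elitism)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]

best = min(pop, key=surrogate_cost)
print([round(g, 1) for g in best])
```

In the study's setup, the surrogate would be the ANN trained on the 1200 Geo-Studio cases, the constraints on uplift safety and exit gradient would be enforced via penalties or bounds, and the population size would be the sensitive parameter the sensitivity analysis identified.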

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 300