Search results for: Porous structure particle; Carbon nanoparticles; Catalyst; Spray-drying method.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11258


368 Opportunities for Precision Feed in Apiculture for Managing the Efficacy of Feed and Medicine

Authors: John Michael Russo

Abstract:

Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation, and these should transfer to apiculture. However, apiculture has unique challenges. The objective of this research is to understand how principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed method is then evaluated. Finally, precision feed methods are examined as accelerants with potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state of the art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy the requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of the challenges are most acute when feed is used to dispense medication. Technologies such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate the specific needs of individual livestock. A major component is data: these solutions integrate precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies that translate data into optimized action in the apiary, particularly through automation.

Keywords: Apiculture, precision apiculture, RNA varroa treatment, honeybee feed applications.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 232
367 Biomethanation of Palm Oil Mill Effluent (POME) by Membrane Anaerobic System (MAS) using POME as a Substrate

Authors: N.H. Abdurahman, Y. M. Rosli, N. H. Azhari, S. F. Tam

Abstract:

The direct discharge of palm oil mill effluent (POME) wastewater causes serious environmental pollution due to its high chemical oxygen demand (COD) and biochemical oxygen demand (BOD). Traditional ways of treating POME have both economic and environmental disadvantages. In this study, a membrane anaerobic system (MAS) was used as an alternative, cost-effective method for treating POME. Six steady states were attained as part of a kinetic study that considered concentration ranges of 8,220 to 15,400 mg/l for mixed liquor suspended solids (MLSS) and 6,329 to 13,244 mg/l for mixed liquor volatile suspended solids (MLVSS). Kinetic equations from Monod, Contois and Chen & Hashimoto were employed to describe the kinetics of POME treatment at organic loading rates ranging from 2 to 13 kg COD/m³/d. Throughout the experiment, the removal efficiency of COD was from 94.8 to 96.5% with hydraulic retention time (HRT) from 400.6 to 5.7 days. The growth yield coefficient, Y, was found to be 0.62 g VSS/g COD, the specific microorganism decay rate was 0.21 d⁻¹, and the methane gas yield production rate was between 0.25 l/g COD/d and 0.58 l/g COD/d. Steady-state influent COD concentrations increased from 18,302 mg/l in the first steady state to 43,500 mg/l in the sixth steady state. The minimum solids retention time, obtained from the three kinetic models, ranged from 5 to 12.3 days. The k values were in the range of 0.35 – 0.519 g COD/g VSS·d and values were between 0.26 and 0.379 d⁻¹. The solids retention time (SRT) decreased from 800 days to 11.6 days. The complete treatment reduced the COD content to 2,279 mg/l, equivalent to a 94.8% reduction from the original.
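The Monod kinetic form named above, and the COD removal efficiency it is fitted against, can be sketched as follows. This is a minimal illustration: the growth-rate parameters are generic placeholders, not the study's fitted kinetics; only the COD figures come from the abstract.

```python
def monod_growth_rate(mu_max, ks, s):
    """Monod specific growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * s / (ks + s)

def cod_removal_efficiency(influent_cod, effluent_cod):
    """COD removal efficiency (%) = (S_in - S_out) / S_in * 100."""
    return (influent_cod - effluent_cod) / influent_cod * 100.0

# Influent 43,500 mg/l reduced to 2,279 mg/l, as reported in the abstract:
print(round(cod_removal_efficiency(43500, 2279), 1))  # 94.8
```

The same removal formula applied to the first steady state's influent concentration gives the other end of the reported efficiency range.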

Keywords: COD reduction, POME, kinetics, membrane, anaerobic, Monod, Contois equation.

366 Inquiry on the Improvement of Teaching Quality in the Classroom with Meta-Teaching Skills

Authors: Shahlan Surat, Saemah Rahman, Saadiah Kummin

Abstract:

When teachers reflect on and evaluate whether their teaching methods actually have an impact on students’ learning, they adjust their practices accordingly. This inevitably improves their students’ learning and performance. The meta-teaching approach can invigorate and create a passion for teaching, and thus helps to increase commitment and love for the teaching profession. This study was conducted to determine the level of metacognitive thinking of teachers in the process of teaching and learning in the classroom. Teachers’ metacognitive thinking includes the use of metacognitive knowledge, which consists of different types of knowledge: declarative, procedural and conditional. The ability of the teachers to plan, monitor and evaluate the teaching process can also be determined. This study was conducted on 377 graduate teachers in Klang Valley, Malaysia; the stratified sampling method was selected for the purpose of this study. The metacognitive teaching inventory, consisting of 24 items, is called InKePMG (Teacher Indicators of Effectiveness Meta-Teaching). The results showed a high mean level for two components of metacognitive knowledge, declarative knowledge (mean = 4.16) and conditional knowledge (mean = 4.11), whereas the mean of procedural knowledge was 4.00 (moderately high). Similarly, among the regulation components, monitoring (mean = 4.11) and evaluating (mean = 4.00) showed high scores, while planning (mean = 4.00) was moderately high. In conclusion, this study shows that planning and procedural knowledge are important elements in improving the quality of teachers' teaching in the classroom. Thus, the researcher recommends that further studies focus on training programs for teachers on metacognitive skills and on developing creative thinking among teachers.

Keywords: Metacognitive thinking skills, procedural knowledge, conditional knowledge, declarative knowledge, meta-teaching, regulation of cognition.

365 Automation of Heat Exchanger using Neural Network

Authors: Sudhir Agashe, Ashok Ghatol, Sujata Agashe

Abstract:

In this paper, the development of a heat exchanger as a pilot plant for educational purposes is discussed, and the use of a neural network for controlling the process is presented. The aim of the study is to highlight the need for a specific Pseudo Random Binary Sequence (PRBS) to excite a process under control. As the neural network is a data-driven technique, the method of data generation plays an important role; in light of this, a careful experimentation procedure for data generation was a crucial task. Heat exchange is a complex process, which has a capacity and a time lag as process elements. The proposed system is a typical pipe-in-pipe heat exchanger. The complexity of the system demands careful selection, proper installation and commissioning. The temperature, flow, and pressure sensors play a vital role in the control performance. The final control element used is a pneumatically operated control valve. While carrying out the experimentation on the heat exchanger, a well-drafted procedure was followed, giving utmost attention to the safety of the system. The results obtained are encouraging and reveal that if the process details are known completely as far as process parameters are concerned and the utilities are well stabilized, then feedback systems are suitable, whereas the neural network control paradigm is useful for processes with nonlinearity and less knowledge about the process. The implementation of NN control reinforces the concepts of process control and the NN control paradigm. The results also underline the importance of an excitation signal tailored to the process. Data acquisition, processing, and presentation in a typical format are the most important parameters while validating the results.
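A PRBS excitation of the kind the abstract calls for is conventionally generated with a linear feedback shift register. A minimal sketch, assuming the standard PRBS7 polynomial (the register length, polynomial and seed are illustrative choices, not the paper's):

```python
def prbs7(length, seed=0x7F):
    """Generate a PRBS7 signal with a 7-bit Fibonacci LFSR.

    Feedback polynomial x^7 + x^6 + 1, i.e. a[n] = a[n-1] XOR a[n-7],
    giving the maximal period 2**7 - 1 = 127. Any nonzero seed works;
    the output is the register's least significant bit.
    """
    state = seed
    bits = []
    for _ in range(length):
        bit = state & 1
        bits.append(bit)
        feedback = bit ^ ((state >> 6) & 1)
        state = (state >> 1) | (feedback << 6)
    return bits

# e.g. toggle the control valve between two levels to excite the process
signal = prbs7(20)
```

In practice each bit would be held for a sample interval matched to the process dynamics before the next bit is applied.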

Keywords: Process identification, neural network, heat exchanger.

364 Antioxidant Capacity and Total Phenolic Content of Aqueous Acetone and Ethanol Extract of Edible Parts of Moringa oleifera and Sesbania grandiflora

Authors: Perumal Siddhuraju, Arumugam Abirami, Gunasekaran Nagarani, Marimuthu Sangeethapriya

Abstract:

Aqueous ethanol and aqueous acetone extracts of Moringa oleifera (outer pericarp of immature fruit and flower) and Sesbania grandiflora white variety (flower and leaf) were examined for radical scavenging capacities and antioxidant activities. The ethanol extracts of S. grandiflora (flower and leaf) and the acetone extracts of M. oleifera (outer pericarp of immature fruit and flower) contained relatively higher levels of total dietary phenolics than the other extracts. The antioxidant potential of the extracts was assessed by employing different in vitro assays such as the reducing power assay, DPPH˙, ABTS˙+ and ˙OH radical scavenging capacities, an antihemolytic assay by the hydrogen peroxide induced method, and metal chelating ability. Though all the extracts exhibited dose-dependent reducing power activity, the acetone extracts of all the samples were found to have greater hydrogen-donating ability in the DPPH˙ (2.3% - 65.03%) and hydroxyl radical scavenging systems (21.6% - 77.4%) than the ethanol extracts. The potential for multiple antioxidant activity was evident as the extracts possessed antihemolytic activity (43.2% to 68.0%) and metal ion chelating potency (45.16 - 104.26 mg EDTA/g sample). The results indicate that the acetone extracts of M. oleifera (OPIF and flower) and S. grandiflora (flower and leaf), endowed with polyphenols, could be utilized as natural antioxidants/nutraceuticals.
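The scavenging percentages quoted above are conventionally computed from control and sample absorbances; a minimal sketch (the absorbance readings here are hypothetical, not the study's data):

```python
def radical_scavenging_pct(a_control, a_sample):
    """Radical scavenging activity (%) = (A_control - A_sample) / A_control * 100,
    where A is the absorbance of the DPPH (or ABTS/OH) assay mixture."""
    return (a_control - a_sample) / a_control * 100.0

# hypothetical absorbance readings for a control and one extract dose
print(round(radical_scavenging_pct(0.860, 0.301), 1))  # 65.0
```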

Keywords: Antioxidant activity, Moringa oleifera, Polyphenolics, Sesbania grandiflora, Underutilized vegetables.

363 Spatial Query Localization Method in Limited Reference Point Environment

Authors: Victor Krebss

Abstract:

The task of object localization is one of the major challenges in creating intelligent transportation systems. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces a large error, or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of the wireless ad-hoc network. Such a network allows estimating the distance between objects by measuring the received signal level, and constructing a graph of distances in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide localization routines with valuable additional information to narrow the node location search. However, despite an abundance of well-known algorithms for solving the localization problem and significant research efforts, there are still many issues that are currently addressed only partially. In this paper, we propose a localization approach based on mapping the distance graph onto digital road map data. In effect, the problem is reduced to embedding the distance graph into the graph representing the area's geolocation data. This makes it possible to localize objects, in some cases even if only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines and geometry functions.
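One common way to turn a received signal level into the distance estimate that populates the edges of such a graph is the log-distance path-loss model. A sketch under that assumption (the reference power `p0_dbm`, the path-loss exponent and the readings are hypothetical calibration values, not the paper's):

```python
def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model
    RSSI = P0 - 10 * n * log10(d) to estimate the distance d in metres,
    where P0 is the signal level at 1 m and n the path-loss exponent."""
    return 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# Distance graph: nodes are the objects, edge weights are distance estimates.
readings = {("node_a", "node_b"): -40.0, ("node_a", "node_c"): -67.0}
distance_graph = {pair: rssi_to_distance(rssi) for pair, rssi in readings.items()}
print(distance_graph[("node_a", "node_c")])  # 10.0 (metres)
```

The resulting edge dictionary is the graph that would then be embedded into the road-map graph.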

Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.

362 Modeling Parametric Vibration of Multistage Gear Systems as a Tool for Design Optimization

Authors: James Kuria, John Kihiu

Abstract:

This work presents a numerical model developed to simulate the dynamics and vibrations of a multistage tractor gearbox. The effects of time-varying mesh stiffness, time-varying frictional torque on the gear teeth, lateral and torsional flexibility of the shafts and flexibility of the bearings were included in the model. The model was developed using the Lagrangian method, and it was applied to study the effect of three design variables on the vibration and stress levels on the gears. The first design variable, module, had little effect on the vibration levels, but a higher module resulted in higher bending stress levels. The second design variable, pressure angle, had little effect on the vibration levels, but had a strong effect on the stress levels on the pinion of a high-reduction-ratio gear pair. A pressure angle of 25° resulted in lower stress levels for a pinion with 14 teeth than a pressure angle of 20°. The third design variable, contact ratio, had a very strong effect on both the vibration levels and bending stress levels. Increasing the contact ratio to 2.0 reduced both the vibration levels and bending stress levels significantly. For the gear train design used in this study, a module of 2.5 and a contact ratio of 2.0 for the various meshes was found to yield the best combination of low vibration levels and low bending stresses. The model can therefore be used as a tool for obtaining the optimum gear design parameters for a given multistage spur gear train.
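The contact ratio varied in the study can be computed for a standard external spur gear pair from the module, tooth numbers and pressure angle. A sketch assuming full-depth teeth with addendum equal to one module; the 14-tooth pinion echoes the abstract, while the 45-tooth gear is a hypothetical mate:

```python
import math

def contact_ratio(z1, z2, m, phi_deg):
    """Contact ratio of a standard external spur gear pair
    (full-depth teeth, addendum = module m, pressure angle phi_deg)."""
    phi = math.radians(phi_deg)
    ra1, ra2 = m * (z1 + 2) / 2, m * (z2 + 2) / 2              # addendum radii
    rb1 = m * z1 * math.cos(phi) / 2                           # base radii
    rb2 = m * z2 * math.cos(phi) / 2
    c = m * (z1 + z2) / 2                                      # centre distance
    line_of_action = (math.sqrt(ra1**2 - rb1**2)
                      + math.sqrt(ra2**2 - rb2**2)
                      - c * math.sin(phi))
    return line_of_action / (math.pi * m * math.cos(phi))

print(round(contact_ratio(14, 45, 2.5, 20), 2))  # ≈ 1.6
```

Note that for a standard pair the contact ratio falls as the pressure angle rises, which is why reaching the value of 2.0 discussed above generally requires non-standard (e.g. high-addendum) tooth proportions.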

Keywords: bending stress levels, frictional torque, gear design parameters, mesh stiffness, multistage gear train, vibration levels.

361 Introduction to Political Psychoanalysis of a Group in the Middle East

Authors: Seyedfateh Moradi, Abas Ali Rahbar

Abstract:

The present study focuses on investigating group psychoanalysis in the Middle East. The study uses a descriptive-analytic method, and library resources have been used to collect the data. Additionally, the researcher’s observations of people’s everyday behavior have played an important role in the production and analysis of the study. Group psychoanalysis in the Middle East can be conducted through people’s daily behaviors, proverbs, poetry, mythology, etc., and some of the general characteristics of people in the Middle East include xenophobia, revivalism, fatalism, nostalgia, wills and so on. Members of the group have often failed to achieve libidinal wills, and this is very important in unifying the group and in the reproduction of violence. Therefore, if libidinal wills are irrationally fixed, this becomes important in forming fundamentalist and racist groups, a situation that is dominant among many groups in the Middle East. Adversities, from early childhood and afterwards, have always been influential in the political behavior of group members, manifesting as counter-projections; consequently, this affects the foreign policy of governments. On the other hand, two kinds of subjects are identifiable in the Middle East: first, the classical subject, which is related to nostalgia and mythology; and second, the modern subject, which is self-alienated. As a result, both subjects seek identity and self-expression in public through forming groups. Therefore, the collective unconscious in the Middle East shows itself in extreme boundaries and leads to the formation of groups characterized by violence. Psychoanalysis reveals important aspects of many developments in the Middle East; overall, the analyses of Freud, Carl Jung and Reich concerning groups can be applied to the present Middle East.

Keywords: Politics, political psychoanalysis, group, Middle East.

360 Analysis and Remediation of Fecal Coliform Bacteria Pollution in Selected Surface Water Bodies of Enugu State of Nigeria

Authors: Chime Charles C., Ikechukwu Alexander Okorie, Ekanem E.J., Kagbu J. A.

Abstract:

The assessment of surface waters in the Enugu metropolis for fecal coliform bacteria was undertaken. Enugu urban was divided into three areas (A1, A2 and A3), and fecal coliform bacteria were analysed in the surface waters found in these areas over four years (2005-2008). The plate count method was used for the analyses. Data generated were subjected to statistical tests involving a normality test, a homogeneity of variance test, a correlation test, and a tolerance limit test. Seasonality effects and pollution trends were investigated using time series plots. Results from the tolerance limit test at 95% coverage with 95% confidence, with respect to the EU maximum permissible concentration, show that the three areas suffer from fecal coliform pollution. To this end, a remediation procedure involving the use of saw-dust extracts from three woods, namely Chlorophora-Excelsa (C-Excelsa), Khaya-Senegalensis (K-Senegalensis) and Erythrophylum-Ivorensis (E-Ivorensis), for controlling the coliforms was studied. Results show that the mixture of acetone extracts of the woods showed the most effective antibacterial inhibitory activity (26.00 mm zone of inhibition) against E-coli. The methanol extract mixture of the three woods gave the best inhibitory activity (26.00 mm zone of inhibition) against S-aureus, and a 25.00 mm zone of inhibition against E-Aerogenes. The aqueous extract mixture gave acceptable zones of inhibition against the three bacterial organisms.

Keywords: Coliform bacteria, Pollution, Remediation, Saw-dust.

359 A Preliminary Analysis of Sustainable Development in the Belgrade Metropolitan Area

Authors: S. Zeković, M. Vujošević, T. Maričić

Abstract:

The paper provides a comprehensive analysis of sustainable development in the Belgrade Metropolitan Area - BMA (level NUTS 2), preliminarily evaluating three chosen components: 1) economic growth and developmental changes; 2) competitiveness; and 3) territorial concentration and industrial specialization. First, we identified the main results of developmental changes and economic growth by applying Shift-share analysis at the metropolitan level. Second, the empirical evaluation of competitiveness in the BMA is based on the analysis of absolute and relative values of eight indicators by the Spider method. The paper shows that consideration of the national share, industrial mix and metropolitan/regional share in the total Shift-share of the BMA, as well as the economic/functional specialization of the BMA, indicates a very strong process of deindustrialization. The allocative component of the BMA's economic growth has a positive value, reflecting above-average sector productivity compared to the national average. Third, the important positive role of the metropolitan/regional component in the decomposition of the BMA's economic growth is highlighted as one of the key results. Finally, a comparative analysis of industrial territorial concentration in the BMA in relation to Serbia is based on the location quotient (LQ), or Balassa index, as a valid measure. The results indicate absolute and relative decreases in industrial territorial concentration, as well as inefficient utilization of territorial capital in the BMA. The results are important for increasing regional competitiveness and improving territorial distribution in this area, as well as for improving sustainable metropolitan and sectoral policies, planning and governance at this level.
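The location quotient (Balassa index) used above has a simple closed form; a minimal sketch (the employment figures are hypothetical, not the BMA data):

```python
def location_quotient(region_sector, region_total, nation_sector, nation_total):
    """Balassa index / location quotient:
    LQ = (e_i / e) / (E_i / E), the sector's share of regional activity
    divided by its share of national activity. LQ > 1 indicates relative
    territorial specialization in that sector; LQ < 1, under-representation."""
    return (region_sector / region_total) / (nation_sector / nation_total)

# hypothetical employment figures for one industry
print(round(location_quotient(12_000, 600_000, 150_000, 3_000_000), 2))  # 0.4
```

A falling LQ over time for manufacturing sectors is the kind of signal the abstract reads as deindustrialization.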

Keywords: Belgrade Metropolitan Area (BMA), Comprehensive analysis/evaluation, economic growth and competitiveness, sustainable development.

358 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. This paper examines whether earnings management is reduced after IFRS adoption through the use of “loss avoidance thresholds”, a method that has been verified in earlier studies. This paper concentrates on two European countries: one that represents the continental code law tradition with weak protection of investors (France) and one that represents the Anglo-American common law tradition, which typically implies a strong enforcement system (the United Kingdom).

The research investigates a sample of 526 companies (6,822 firm-year observations) during the years 2000 – 2013. The results differ between the two jurisdictions. This study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefits France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in influencing the comparability and transparency of companies' financial statements across borders. Overall, the results suggest that IFRS moderately contributes to the accounting quality of reported financial statements and brings benefits for stakeholders, though the role played by other economic factors cannot be discounted.
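The loss-avoidance threshold method compares the frequency of small reported losses with that of small profits just either side of zero scaled earnings; a deficit of small losses is the usual loss-avoidance signature. A minimal sketch of the counting step (the scaled-earnings sample and interval width are hypothetical, not the study's data):

```python
def loss_avoidance_counts(scaled_earnings, width=0.01):
    """Count firm-years with small losses in [-width, 0) and small
    profits in [0, width); fewer small losses than small profits
    suggests firms manage earnings to avoid reporting a loss."""
    small_losses = sum(1 for e in scaled_earnings if -width <= e < 0)
    small_profits = sum(1 for e in scaled_earnings if 0 <= e < width)
    return small_losses, small_profits

# hypothetical earnings scaled by lagged total assets
sample = [-0.03, -0.008, -0.002, 0.001, 0.004, 0.006, 0.009, 0.02]
print(loss_avoidance_counts(sample))  # (2, 4)
```

In the full method these counts feed a standardized difference test across the distribution's histogram bins.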

Keywords: Accounting Standards, Earnings Management, International Financial Reporting Standards, Loss Avoidance, Reporting Quality.

357 Numerical Simulation of Free Surface Water Wave for the Flow around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method

Authors: Saadia Adjali, Omar Imine, Mohammed Aounallah, Mustapha Belkadi

Abstract:

Steady three-dimensional simulations of the free surface waves generated by two moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull, and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for the NACA0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code to the prediction of the total resistance and its components from the tangential and normal forces on the hull's wetted surface. The computed resistance and wave profiles are used to estimate the total resistance coefficient for the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study. The computational grid is generated with the code GAMBIT 2.3.26. The k-ω SST shear stress transport model is used for turbulence modeling, and the volume of fluid (VOF) technique is employed to simulate the free-surface motion. The second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme for the VOF discretization. The results obtained compare well with the experimental data.
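The total resistance coefficient estimated from the computed resistance follows the usual normalization Ct = R / (0.5 ρ S V²); a sketch with hypothetical model-scale values (not the paper's results):

```python
def total_resistance_coeff(resistance, rho, wetted_area, speed):
    """Total resistance coefficient Ct = R / (0.5 * rho * S * V**2),
    with R in N, rho in kg/m^3, S in m^2 and V in m/s."""
    return resistance / (0.5 * rho * wetted_area * speed ** 2)

# hypothetical values for a model hull advancing in calm water
ct = total_resistance_coeff(resistance=30.0, rho=998.0, wetted_area=3.0, speed=1.5)
print(f"{ct:.4e}")  # 8.9067e-03
```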

Keywords: Free surface flows, breaking waves, boundary layer, Wigley hull, volume of fluid.

355 Emotional Intelligence as Predictor of Academic Success among Third Year College Students of PIT

Authors: Sonia Arradaza-Pajaron

Abstract:

College students are expected to engage in on-the-job training or an internship to complete a course requirement prior to graduation. In this scenario, they are exposed to the real world of work outside their training institution. This study was conducted to find out how ready they are, both emotionally and academically. A descriptive-correlational research design was employed, and a random sampling technique was used to select 265 third-year college students of PIT, SY 2014-15. A questionnaire on emotional intelligence (covering the four components, namely emotional literacy, emotional quotient competence, values and beliefs, and emotional quotient outcomes) was fielded to the respondents, and their GWA was extracted from the school's automated records. Data collected were statistically treated using percentage, weighted mean and Pearson-r for correlation.

Results revealed that the respondents’ emotional intelligence level is moderately high, while their academic performance is good. A highly significant relationship was found between the EI component emotional literacy and academic performance, while only a significant relationship was found between emotional quotient outcomes and academic performance. Since EI significantly influences academic performance, it is possible that OJT performance can also be affected, either positively or negatively; thus, EI can be considered a predictor of academic and academic-related performance. Based on the results, it is recommended that the institution consider embedding emotional intelligence (especially emotional literacy and emotional quotient outcomes) into the college curriculum. This can be done if the school establishes an effective emotional intelligence framework or program implemented by qualified and competent teachers and guidance counselors in the different colleges.
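The Pearson-r statistic used to relate EI scores to GWA is straightforward to compute; a minimal sketch (the paired scores below are hypothetical, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired samples:
    r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

ei = [3.1, 3.5, 2.8, 4.0, 3.7]   # hypothetical EI scores
gwa = [2.9, 3.4, 2.6, 3.9, 3.5]  # hypothetical grade averages
print(round(pearson_r(ei, gwa), 3))
```

A value near +1 would correspond to the strong EI-performance relationship reported above; significance would then be judged against a t distribution with n - 2 degrees of freedom.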

Keywords: Academic performance, emotional intelligence, emotional literacy, emotional quotient competence, emotional quotient outcomes, values and beliefs.

354 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)

Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento

Abstract:

The aim of this study is to present the results of a retrospective survey of the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All the analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for the microanalytical procedures and by the Food and Drug Administration (FDA) for the macroanalytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food in which foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, nets, excrement, dead mites and rodent excrement were the foreign matter most often found in food. Foreign matter that can pose a physical risk to the consumer's health, such as metal, stones, glass and wood, was found only rarely. Miscellaneous matter (shell, sand, dirt and seeds) was also reported. Many extraneous materials are considered unavoidable since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable through Good Manufacturing Practice. The conclusion of this work is that although most extraneous materials found in food are considered unavoidable, it is necessary to maintain Good Manufacturing Practice throughout food processing, as well as constant surveillance of the production process, in order to avoid accidents that may lead to the occurrence of these extraneous materials in food.

Keywords: Food contamination, extraneous materials, foreign matter, surveillance.

353 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead; thus energy use can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger and full immersion of the microelectronics. This study quantifies the improvements in heat transfer specifically for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the fixed enclosure filled with dielectric liquid, and forced convection in the water that is pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink on the bottom of the microelectronics enclosure.
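For context on the natural-convection side, a standard empirical correlation (Churchill-Chu for a vertical plate) shows how Nu scales with the Rayleigh number; this is purely illustrative, since the paper's own Nu values come from its validated CFD model, not from a correlation:

```python
def nusselt_churchill_chu(ra, pr):
    """Churchill-Chu correlation for natural convection on a vertical plate:
    Nu = (0.825 + 0.387 * Ra^(1/6) / (1 + (0.492/Pr)^(9/16))^(8/27))^2,
    valid over roughly 1e-1 < Ra < 1e12."""
    term = 0.387 * ra ** (1 / 6) / (1 + (0.492 / pr) ** (9 / 16)) ** (8 / 27)
    return (0.825 + term) ** 2

# e.g. air-like Prandtl number, moderate Rayleigh number
print(round(nusselt_churchill_chu(1e6, 0.7), 1))
```

The strong growth of Nu with Ra illustrates why enclosure geometry and component placement, which set the effective Rayleigh number, matter so much in the immersed configuration.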

Keywords: CPU location, data centre cooling, heat sink in enclosures, Immersed microelectronics, turbulent natural convection in enclosures.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2172
352 Participation in Co-Curricular Activities of Undergraduate Nursing Students Attending the Leadership Promoting Program Based on Self-Directed Learning Approach

Authors: Porntipa Taksin, Jutamas Wongchan, Amornrat Karamee

Abstract:

From the researchers’ experience in student affairs during 2011-2013, we found that few undergraduate nursing students became student association members who participated in co-curricular activities, and that they had limited self-directed learning and leadership skills. We therefore developed a Leadership Promoting Program based on the self-directed learning concept. The program included six activities: Breaking the Ice, Decoding Time, Creative SMO, Know Me-Understand You, Positive Thinking, and Creative Dialogue. Participation in these activities was measured in four aspects: decision-making, implementation, benefits, and evaluation. A one-group, pretest-posttest quasi-experimental design was used to examine the effects of the program on participation in co-curricular activities. Thirty-five students participated in the program; all were members of the board of the undergraduate nursing student association of Boromarajonani College of Nursing, Chonburi. All subjects completed a questionnaire on participation in the activities at the beginning and at the end of the program. Data were analyzed using descriptive statistics and the dependent t-test. The results showed that the posttest mean scores of all four aspects were significantly higher than the pretest scores (t = 3.30, p < .01). Three aspects had high mean scores: benefits (mean = 3.24, S.D. = 0.83), decision-making (mean = 3.21, S.D. = 0.59), and implementation (mean = 3.06, S.D. = 0.52), whereas the score on evaluation fell in the moderate range (mean = 2.68, S.D. = 1.13). Therefore, the Leadership Promoting Program based on the self-directed learning approach could be a method to improve students’ participation in co-curricular activities and leadership.
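The dependent (paired) t-test used here compares each student's pretest and posttest scores on the same scale. A minimal sketch of the statistic, with invented scores for illustration only:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Dependent (paired) t statistic: mean of the per-subject differences
    divided by the standard error of those differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical pre/post participation scores (1-4 scale), for illustration
pre  = [2.1, 2.5, 2.8, 2.4, 2.6, 2.9, 2.3]
post = [3.0, 3.2, 3.4, 2.9, 3.1, 3.5, 3.0]
t = paired_t(pre, post)   # compare against the t distribution with n-1 df
```

The resulting t is then compared against the critical value of the t distribution with n − 1 degrees of freedom at the chosen significance level.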

Keywords: Participation in co-curricular activities, undergraduate nursing students, leadership promoting program, self-directed learning.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1482
351 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information can quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract comprehensive linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features serve as input for an RL model, which learns the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The blockchain component guarantees the immutability and safety of verified news records, while a token-based incentive system encourages users to detect and fight false information. The proposed framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
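As a toy illustration of the NLP front end (not the paper's actual pipeline), a bag-of-words feature extractor feeding a simple perceptron classifier might look like this; the vocabulary and labelled examples are invented:

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

def features(text, vocab):
    """Bag-of-words counts restricted to a fixed vocabulary."""
    counts = Counter(tokenize(text))
    return [counts[w] for w in vocab]

def train_perceptron(samples, vocab, epochs=20):
    """samples: list of (text, label) with label +1 (genuine) / -1 (fake)."""
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in samples:
            x = features(text, vocab)
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                      # misclassified: update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(text, vocab, w, b):
    x = features(text, vocab)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Invented toy corpus for illustration only
vocab = ["shocking", "miracle", "official", "report", "confirmed", "secret"]
train = [("shocking secret miracle cure", -1),
         ("official report confirmed today", 1),
         ("miracle shocking claim spreads", -1),
         ("confirmed by official report", 1)]
w, b = train_perceptron(train, vocab)
```

In the paper's framework, such feature vectors would instead feed the RL model, whose reward would reflect how user engagement responds to labelling decisions.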

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 83
350 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model, which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (an all-pass filter) to all oscillatory components in its inputs. This contrasts with the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory or noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
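The abstract does not define its sparseness index, but a common choice for spectra is Hoyer's definition, which maps a flat (noise-like) spectrum to 0 and a single-spike (strongly oscillatory) spectrum to 1. A minimal sketch under that assumption:

```python
import math

def sparseness(x):
    """Hoyer's sparseness index based on the L1/L2 norm ratio:
    1 for a single-spike vector, 0 for a perfectly flat one."""
    n = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

# A spectrum dominated by one oscillation frequency scores near 1,
# while a flat, noise-like spectrum scores near 0 (invented examples).
peaky = [0.0, 0.0, 9.5, 0.0, 0.1, 0.0]
flat  = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
```

Measurements whose spectra fall below a sparseness threshold would then be dropped before running PCA/ICA/NMF.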

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1532
349 The Effects of North Sea Caspian Pattern Index on the Temperature and Precipitation Regime in the Aegean Region of Turkey

Authors: Cenk Sezen, Turgay Partal

Abstract:

The North Sea Caspian Pattern Index (NCPI) refers to an atmospheric teleconnection between the North Sea and the North Caspian at the 500 hPa geopotential height level. The aim of this study is to investigate the effects of the NCPI on annual and seasonal mean temperature as well as annual and seasonal precipitation totals in the Aegean region of Turkey. The study uses 46 years of data obtained from nine meteorological stations. To determine the relationship between the NCPI and the climatic parameters, the Pearson correlation coefficient method was first utilized. According to the results of the analysis, in terms of annual and seasonal mean temperature, most of the stations in the region have a high negative correlation with the NCPI in all seasons, especially winter (statistically significant at the 90% level). High negative correlation values between the NCPI and precipitation totals are also observed during the winter season at most stations. Furthermore, the NCPI values were divided into two groups, NCPI(-) and NCPI(+), and annual and seasonal mean temperature and precipitation totals were determined for each phase. During NCPI(-), higher mean temperature values are observed in all seasons, particularly winter, compared with the mean temperature values under NCPI(+). Similarly, winter precipitation totals during NCPI(-) are higher than those under NCPI(+); in the other seasons, however, no substantial differences were observed between the precipitation totals. This study thus provides significant evidence of the influence of the NCP on the temperature and precipitation regime in the Aegean region of Turkey.
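The two analysis steps, Pearson correlation against the NCPI and phase-wise averaging, can be sketched as follows; the NCPI and winter temperature values below are invented for illustration only:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def phase_means(ncpi, values):
    """Mean of a climate variable under NCPI(-) and NCPI(+) phases."""
    neg = [v for i, v in zip(ncpi, values) if i < 0]
    pos = [v for i, v in zip(ncpi, values) if i >= 0]
    return sum(neg) / len(neg), sum(pos) / len(pos)

# Invented winter values: negative-NCPI years coincide with warmer winters
ncpi  = [-1.2, 0.8, -0.5, 1.1, -0.9, 0.3]
temps = [8.4, 6.1, 7.9, 5.8, 8.1, 6.6]
r = pearson(ncpi, temps)                 # strongly negative in this toy data
mean_neg, mean_pos = phase_means(ncpi, temps)
```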

Keywords: Aegean Region, North Sea Caspian Pattern, precipitation, temperature.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1229
348 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least-dependent source components. In this paper, the independent components are obtained using the JADE algorithm (best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. A neural network requires input features that accurately represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network (FNN) classifier trained by the standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
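The AR coefficients used as classifier inputs can be estimated from the autocorrelation sequence of each independent component via the Levinson-Durbin recursion, a standard solution of the Yule-Walker equations. A self-contained sketch (not the authors' exact implementation):

```python
def autocorr(x, maxlag):
    """Biased, normalized autocorrelation estimates r[0..maxlag]."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]
    r0 = sum(v * v for v in xc) / n
    return [sum(xc[t] * xc[t + k] for t in range(n - k)) / n / r0
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """AR coefficients a[1..order] for the model x[t] = sum a_j x[t-j] + e[t],
    solved recursively from the autocorrelations r[0..order]."""
    a = [0.0] * (order + 1)
    e = r[0]
    for k in range(1, order + 1):
        acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
        refl = acc / e                     # reflection coefficient
        new_a = a[:]
        new_a[k] = refl
        for j in range(1, k):
            new_a[j] = a[j] - refl * a[k - j]
        a = new_a
        e *= (1.0 - refl * refl)          # residual (prediction error) power
    return a[1:], e
```

For an AR(1) process with coefficient 0.6, the autocorrelations are 0.6^k and the recursion recovers the coefficient exactly; the resulting coefficient vector per component is what the PNN/FNN classifiers would consume.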

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1888
347 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes

Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola

Abstract:

In order to improve the recommended daily intake of animal protein among Nigerians, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, improved breeding requires that the best of the species form the breeding stock, which will usually depend on breeding soundness as evaluated by assessment of the male reproductive organs with these tools. Four intact male rabbits weighing 1.2 to 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. Imaging of rabbit testes can be safely done using both methods, but ultrasonography is the better method for assessment and evaluation of soundness for breeding.

Keywords: Breeding soundness, rabbits, radiography, ultrasonography.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 884
346 Difference in Psychological Well-Being Based On Comparison of Religions: A Case Study in Pekan District, Pahang, Malaysia

Authors: Amran Hassan, Fatimah Yusooff, Khadijah Alavi

Abstract:

The psychological well-being of a family is a subjective matter for evaluation, all the more when it involves the element of religion, whether Islam, Christianity, Buddhism or Hinduism. Each of these religions emphasises similar values and morals regarding family psychological well-being. This comparative study specifically examines the role of religion in family psychological well-being in Pekan district, Pahang, Malaysia. The study adopts a mixed quantitative and qualitative design, with a total of 412 samples of parents and children for the quantitative study and 21 samples for the qualitative study. The quantitative study uses simple random sampling, whereas the qualitative sampling is purposive. The instrument for the quantitative study is Ryff’s Psychological Well-being Scale, and the qualitative study involves the construction of a guidelines protocol for in-depth interviews of respondents. The quantitative data were analysed in SPSS version 19 with one-way ANOVA, and the qualitative analysis was manual, based on transcripts with specific codes and themes. The results show no significant difference among religions in any family psychological well-being construct in the comparison of Islam, Christianity, Buddhism and Hinduism, thereby accepting the null hypothesis and rejecting the alternative hypothesis. The qualitative study supports the quantitative study: all 21 respondents explained that no difference in psychological well-being exists in the comparison of the teachings of all the religions mentioned. These findings may serve as guidelines for government and non-government bodies in considering religion as an important element in family psychological well-being in the long run.
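The one-way ANOVA used here compares well-being means across religious groups via the F statistic, the ratio of between-group to within-group variance. A minimal sketch of that decomposition (group data invented for illustration):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of scores:
    F = (SSB / (k-1)) / (SSW / (n-k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([v for g in groups for v in g])
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)   # between
    ssw = sum(sum((v - mean(g)) ** 2 for v in g) for g in groups)  # within
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical well-being scores for four groups; nearly equal means
# yield a small F, consistent with a non-significant result.
scores_by_religion = [[4.1, 3.9, 4.0], [4.0, 4.2, 3.8],
                      [3.9, 4.1, 4.0], [4.0, 3.9, 4.1]]
f_stat = one_way_anova_f(scores_by_religion)
```

The computed F is then compared against the F distribution with (k − 1, n − k) degrees of freedom.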

Keywords: Psychological well-being, comparison of religions, family, Malaysia.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2334
345 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practical use in medicine. This research proposes a method for incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. First, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Second, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets; the best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
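The first step, concatenating an LSV with a word2vec embedding to form the enriched per-token representation, can be sketched as follows; the dimensions and values are hypothetical, not taken from the paper:

```python
def concat_features(word_vec, lsv):
    """Concatenate a word embedding with a Lexical Semantic Vector,
    giving one enriched input row per token for the CNN."""
    return list(word_vec) + list(lsv)

# Hypothetical 4-dim word2vec embedding and 3-dim LSV for one medical term
word_vec = [0.12, -0.30, 0.55, 0.08]
lsv      = [1.0, 0.0, 0.4]     # e.g. affinities to symptom/disease/drug classes
x = concat_features(word_vec, lsv)   # 7-dim row of the CNN's input matrix
```

Stacking one such row per token of a record yields the (tokens × 7) input matrix the convolutional layers would slide over.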

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 593
344 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences contain motifs in any particular biological gene process. GCS can be solved by frequent-pattern-mining techniques and is known to be fixed-parameter tractable in the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms have been proposed to find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint Based Frequent Pattern mining (CBFP) technique which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. It constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, with constraints to restrain growth of the consensus tree. Complexity analysis for the CBFP mining technique and the Bpriori algorithm is carried out for the worst and average cases, and the algorithm's correctness is demonstrated by comparison with the Bpriori algorithm on artificial data.
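As a much-simplified illustration of support counting over sequences (not the CBFP or Bpriori algorithms themselves, and with no handling of mutated copies), fixed-length substrings can be counted against a minimum support threshold:

```python
from collections import defaultdict

def frequent_substrings(sequences, length, min_support):
    """Count the number of sequences containing each fixed-length substring
    and keep those whose support meets min_support."""
    support = defaultdict(set)
    for sid, seq in enumerate(sequences):
        for i in range(len(seq) - length + 1):
            support[seq[i:i + length]].add(sid)   # set: count each sequence once
    return {s: len(ids) for s, ids in support.items()
            if len(ids) >= min_support}

# Toy DNA-like sequences; length-4 substrings occurring in at least 2 of them
seqs = ["ACGTAC", "TACGTT", "GTACGA"]
common = frequent_substrings(seqs, 4, 2)
```

CBFP additionally organizes such candidates in a TRIE/FP-tree and prunes branches with constraints, instead of enumerating every substring exhaustively as this sketch does.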

Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1699
343 Role of Fish Hepatic Aldehyde Oxidase in Oxidative in vitro Metabolism of Phenanthridine Heterocyclic Aromatic Compound

Authors: Khaled S. Al Salhen

Abstract:

Aldehyde oxidase is a molybdo-flavoenzyme involved in the oxidation of hundreds of endogenous and exogenous compounds, including N-heterocyclic compounds and environmental pollutants. Uncharged N-heterocyclic aromatic compounds such as phenanthridine are widely distributed pollutants in soil, air, sediments, surface water and groundwater, and in animal and plant tissues. Phenanthridine, an uncharged N-heterocyclic aromatic compound, was incubated with partially purified aldehyde oxidase from rainbow trout liver. A reversed-phase HPLC method was used to separate the oxidation products from phenanthridine, and the metabolite was identified. 6(5H)-Phenanthridinone was identified as the major metabolite produced by the partially purified aldehyde oxidase from fish liver. Kinetic constants for the oxidation reaction were determined spectrophotometrically and showed that this substrate has a good affinity (Km = 78 ± 7.6 µM) for hepatic aldehyde oxidase, coupled with a relatively high oxidation rate (0.77 ± 0.03 nmol/min/mg protein). The kinetic parameters of hepatic fish aldehyde oxidase towards the phenanthridine substrate therefore indicate that in vitro biotransformation by this enzyme will be a significant pathway. This study confirms that partially purified aldehyde oxidase from fish liver is indeed the enzyme responsible for the in vitro production of the 6(5H)-phenanthridinone metabolite, which is also a major metabolite of mammalian aldehyde oxidase.
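With the reported constants, the Michaelis-Menten equation predicts the oxidation rate at any substrate concentration; in particular, at s = Km the rate is half of Vmax. A minimal sketch (taking the reported oxidation rate as Vmax is an assumption for illustration):

```python
def michaelis_menten(s, vmax, km):
    """Initial reaction rate v = Vmax*s/(Km + s); s in the same units as Km."""
    return vmax * s / (km + s)

KM = 78.0     # µM, reported affinity for phenanthridine
VMAX = 0.77   # nmol/min/mg protein, reported oxidation rate (assumed ~ Vmax)
half = michaelis_menten(KM, VMAX, KM)   # at s = Km, v = Vmax / 2
```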

Keywords: Aldehyde oxidase, Fish, Phenanthridine, Specificity.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2278
342 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm

Authors: Yesubai Rubavathi Charles, Ravi Ramraj

Abstract:

In order to retrieve images efficiently from a large database, a method integrating color and texture features using a genetic algorithm has been proposed. An opponent color histogram, which is invariant to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A central concern in image retrieval is reducing the semantic gap between the user’s preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user’s preference. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses existing systems in terms of precision and recall, achieving an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL-100 and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
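The opponent color transform commonly used for such histograms rotates RGB so that two of the resulting channels are invariant to a uniform intensity shift, which is the shadow/shade/intensity invariance the abstract mentions. A sketch using the standard formulation (assumed, as the paper's exact variant is not given):

```python
import math

def opponent(r, g, b):
    """RGB -> opponent color space. O1 and O2 cancel a uniform intensity
    shift (r,g,b) -> (r+d, g+d, b+d); O3 carries the intensity."""
    o1 = (r - g) / math.sqrt(2)
    o2 = (r + g - 2 * b) / math.sqrt(6)
    o3 = (r + g + b) / math.sqrt(3)
    return o1, o2, o3
```

Histogramming O1 and O2 over all pixels then yields the color feature vector that is concatenated with the curvelet texture features.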

Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1821
341 Fake Account Detection in Twitter Based on Minimum Weighted Feature set

Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for these users, social and even practical life has become intertwined with these platforms, and their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines a minimized set of the main factors that influence the detection of fake accounts on Twitter, and these factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with several recent studies in the same area, and this comparison has confirmed the accuracy of the proposed approach. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to different social network sites such as Facebook with minor changes according to the nature of the social network, as discussed in this paper.
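As a toy illustration of feature-based detection (the features and thresholds below are invented, not the paper's minimized feature set), an account could be scored as follows:

```python
def extract_features(account):
    """Minimal, hypothetical feature set (not the paper's exact list)."""
    return [
        account["friends"] / max(account["followers"], 1),  # follow ratio
        account["tweets_per_day"],                          # posting rate
        1.0 if account["default_profile_image"] else 0.0,
    ]

def rule_classifier(feats, ratio_cut=20.0, rate_cut=100.0):
    """Flag as fake when at least two features look extreme;
    thresholds are arbitrary illustrative values."""
    ratio, rate, default_img = feats
    score = (ratio > ratio_cut) + (rate > rate_cut) + (default_img > 0.5)
    return "fake" if score >= 2 else "genuine"

# Invented example accounts
bot = {"friends": 5000, "followers": 12, "tweets_per_day": 400,
       "default_profile_image": True}
human = {"friends": 300, "followers": 280, "tweets_per_day": 6,
         "default_profile_image": False}
```

The paper's approach replaces such hand-set thresholds with trained classifiers and selects the most accurate one by comparison.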

Keywords: Fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 5835
340 Development and Optimization of Colon Targeted Drug Delivery System of Ayurvedic Churna Formulation Using Eudragit L100 and Ethyl Cellulose as Coating Material

Authors: Anil Bhandari, Imran Khan Pathan, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit

Abstract:

The purpose of this study was to prepare time- and pH-dependent release tablets of an Ayurvedic Churna formulation and evaluate their advantages as a colon-targeted drug delivery system. Vidangadi Churna, which contains embelin and gallic acid, was selected for this study. Embelin is used as a therapeutic agent in helminthiasis; it is insoluble in water and unstable in the gastric environment, so it was formulated in time- and pH-dependent tablets coated with a combination of two polymers, Eudragit L100 and ethyl cellulose. Core tablets of 150 mg containing dried extract and lactose were prepared by the wet granulation method. Compression coating with 150 mg of polymer for each of the upper and lower coating layers was investigated. The results showed that no release occurred in 0.1 N HCl or pH 6.8 phosphate buffer for the initial 5 hours, and about 98.97% of the drug was released in pH 7.4 phosphate buffer over a total of 17 hours. The in vitro release profile of the drug from the formulation was best described by first-order kinetics, which gave the highest linearity (r² = 0.9943). The results of the present study demonstrate that the time- and pH-dependent tablet system is a promising vehicle for preventing rapid hydrolysis in the gastric environment and improving the oral bioavailability of embelin and gallic acid for the treatment of helminthiasis.
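A first-order release profile with a lag time can be written as F(t) = F∞(1 − e^(−k(t − lag))). The sketch below back-solves the rate constant from the reported profile; treating release as starting right after the 5 h lag and reaching 98.97% at 17 h is an assumption for illustration:

```python
import math

def first_order_release(t, k, lag=0.0, f_inf=100.0):
    """Cumulative % drug released under first-order kinetics with a lag time."""
    if t <= lag:
        return 0.0
    return f_inf * (1.0 - math.exp(-k * (t - lag)))

# Rate constant chosen so ~98.97% is released 12 h after the 5 h lag
k = -math.log(1.0 - 0.9897) / 12.0      # per hour
released_17h = first_order_release(17.0, k, lag=5.0)
```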

Keywords: Embelin, Gallic acid, Vidangadi Churna, Colon targeted drug delivery.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2384
339 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to predict what might happen under different conditions or decisions. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown by the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance for application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler; given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
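Simulated paths of such a diffusion can be generated with the Euler-Maruyama scheme. The sketch below uses the ordinary two-parameter Weibull density as a simplified stand-in for the paper's bi-Weibull trend (an assumption for illustration, not the authors' model):

```python
import math
import random

def weibull_pdf(t, lam, k):
    """Two-parameter Weibull density with scale lam and shape k."""
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-((t / lam) ** k))

def euler_maruyama(x0, t0, t_end, n_steps, lam, k, sigma, seed=0):
    """Simulate dX = X * f(t) dt + sigma * X dW on [t0, t_end], where f is
    the Weibull density, so the trend is proportional to that density."""
    rng = random.Random(seed)
    dt = (t_end - t0) / n_steps
    t, x = t0, x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += x * weibull_pdf(t, lam, k) * dt + sigma * x * dw
        t += dt
        path.append(x)
    return path

# With sigma = 0 the scheme reduces to the deterministic trend equation
path = euler_maruyama(x0=1.0, t0=0.01, t_end=5.0, n_steps=500,
                      lam=2.0, k=1.5, sigma=0.0)
```

Repeating the simulation with sigma > 0 and different seeds yields the synthetic data on which the maximum likelihood estimates and their convergence can be studied.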

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1966