Search results for: applied science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10602

2322 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information, and it has become difficult to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: algorithms based on AI have proven highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court; researchers and other authorities have used such data as evidence to convict offenders. This paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task; the rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented with the Java Agent Development Framework in Eclipse, using a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets, with experiments conducted over various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
Loading the agents cost about 5 percent of the total time; the File Path Agent flagged 1,510 items for deletion, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished it in 16 minutes (960 s). The framework is integrated with Python, allowing further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
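The hash-matching step performed by a Hash Set Agent can be sketched in Python; the known-hash database and file contents below are purely illustrative and are not part of the MADIK implementation:

```python
import hashlib

# Hypothetical known-file hash database (an NSRL-style known-bad set).
KNOWN_BAD = {
    "5d41402abc4b2a76b9719d911017c592",  # md5(b"hello")
}

def md5_of(data: bytes) -> str:
    """MD5 digest of a file's raw bytes, as a hex string."""
    return hashlib.md5(data).hexdigest()

def hash_set_agent(files: dict) -> list:
    """Return names of files whose MD5 matches the known-bad set."""
    return [name for name, data in files.items()
            if md5_of(data) in KNOWN_BAD]

evidence = {"note.txt": b"hello", "img.bin": b"benign data"}
flagged = hash_set_agent(evidence)
print(flagged)  # ['note.txt']
```

In a multiagent setting each such agent would report its findings back for CBR-based classification; here only the hash-lookup core is shown.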

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 208
2321 Stun Practices in Swine in the Valle De Aburrá and Animal Welfare

Authors: Natalia Uribe Corrales, Carolina Cano Arroyave, Santiago Henao Villegas

Abstract:

Introduction: Stunning is an important stage in the meat industry because of its repercussions on carcass characteristics. It has been demonstrated that inadequate stunning can lead to hematomas and fractures and promote the appearance of pale, soft, and exudative meat due to the stress caused to the animals. In Colombia, gas narcosis and electrical stunning are the two authorized methods for pigs. Objective: To describe stunning practices in the Valle de Aburrá and their relation to animal welfare. Methods: A descriptive cross-sectional study was carried out in Valle de Aburrá slaughterhouses authorized by the National Institute for Food and Medicine Surveillance (INVIMA). Variables such as stunning method, presence of vocalization, falls, slips, rhythmic breathing, corneal reflex, attempts to stand after stunning, stun time, and time between stunning and bleeding were analyzed. Results: 225 pigs were analyzed; 50.2% were stunned electrically, with an amperage of 1.23 A and a voltage of 120 V, and 49.8% were stunned in a CO2 chamber whose concentration was always above 95%. The mean desensitization time was 16.8 seconds (s.d. 5.37), and the mean stunning-to-bleeding time was 47.9 seconds (s.d. 13.9). Likewise, 27.1% of the animals vocalized after stunning, 12% had falls, 10.7% showed rhythmic breathing, 33.3% exhibited corneal reflex, and 10.7% attempted to stand.
Conclusions: Although the stunning methods used in the Valle de Aburrá are those permitted by law, there are shortcomings in the amperage and voltage used for each type of pig, and animal welfare is being compromised, as signs of inadequate desensitization were found. It is necessary to promote compliance with the principles of stunning according to animal welfare, keeping in mind that in electrical desensitization the calibration of the equipment must be guaranteed (pressure or current applied according to the type of animal, and correct electrode position), while in narcosis the equipment should be calibrated to ensure the proper gas concentration and exposure time.

Keywords: animal welfare, pigs, quality of meat, stun methods

Procedia PDF Downloads 221
2320 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

Mathematical models are formulated to describe and predict the behavior of a system, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to compute efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since optimization techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the model must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system, detached from the scientific background. Additionally, it can be used in a broader sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations between products of variables, not only between single variables. This enables a far more precise representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g.
time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. To summarize, the approach described above is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
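The core idea of a regression that also captures product terms can be sketched with an ordinary least-squares fit over an extended design matrix; the toy system and coefficients below are illustrative and are not from the tested industrial applications:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
# Ground truth includes a product term that a plain linear model would miss.
y = 2.0 * x1 - 1.0 * x2 + 3.0 * x1 * x2 + 0.5

# Design matrix with the single variables AND their product,
# following the series-expansion idea described above.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coeffs, 3))  # coefficients ≈ [0.5, 2.0, -1.0, 3.0]
```

Because the product column x1*x2 is part of the basis, the fit recovers the cross-correlation exactly, which a regression over single variables alone cannot do.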

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 401
2319 Authentication and Traceability of Meat Products from South Indian Market by Species-Specific Polymerase Chain Reaction

Authors: J. U. Santhosh Kumar, V. Krishna, Sebin Sebastian, G. S. Seethapathy, G. Ravikanth, R. Uma Shaanker

Abstract:

Food is one of the basic needs of human beings, required for the normal functioning of the body and healthy growth. Recently, food adulteration has increased day by day as a means to boost quantity and profit. Animal-source foods can provide a variety of micronutrients that are difficult to obtain in adequate quantities from plant-source foods alone. In the meat industry in particular, animal products are susceptible targets for fraudulent labeling, owing to the economic profit that results from selling cheaper meat as meat from more profitable and desirable species. This work presents an overview of the main PCR-based techniques applied to date to verify the authenticity of beef meat and meat products. We analyzed 25 market beef samples from South India, using PCR methods based on the sequence of the cytochrome b gene for source-species identification. All samples sold as beef were identified as Bos taurus. Interestingly, however, male meat commands a higher price than female meat, which makes market samples susceptible to mislabeling by sex. We therefore used the cattle sex-determination gene TSPY (testis-specific protein, Y-encoded), a Y-specific gene; TSPY homologs exist in several mammalian species, including humans, horses, and cattle, and the gene amplifies only in males. Multiple PCR products form species-specific 'fingerprints' on gel electrophoresis, which may be useful for meat authentication. Amplicons were obtained only by the cattle-specific PCR, and 13 of the market meat samples were found to be female beef. These results suggest that the species-specific PCR methods established in this study would be useful for the simple and easy detection of adulteration in meat products.
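The logic of a species-specific PCR can be mimicked in silico by checking whether a primer pair binds a template and computing the resulting amplicon length; the sequences below are toy examples, not real cytochrome b or TSPY primers:

```python
# Toy in-silico PCR: a product forms only if the forward primer and the
# reverse-complement of the reverse primer both occur on the template.
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplicon_length(template: str, fwd: str, rev: str):
    """Return the amplicon length if both primers find a site, else None."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))
    if start == -1 or end == -1 or end < start:
        return None
    return end + len(rev) - start

template = "AATT" + "ACGTACGT" + "GGGCCC" * 5 + "TTAACC" + "GATC"
fwd = "ACGTACGT"                 # illustrative species-specific forward primer
rev = revcomp("TTAACC")          # reverse primer targets the downstream site
print(amplicon_length(template, fwd, rev))  # 44
```

A template from a non-target species would lack the primer sites, so `amplicon_length` returns None, mirroring the absence of a band on the gel.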

Keywords: authentication, meat products, species-specific, TSPY

Procedia PDF Downloads 370
2318 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion-gaming and distance-control applications and is discussed in several published papers as a starting block for hand recognition systems. An automated finger enumeration technique should not only be accurate but must also respond quickly to a moving-picture input: the high frame rate of video in motion games or distance control constrains the program's overall speed, since image processing software such as Matlab needs to produce results at high computation speeds. Since an automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain a final, cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the known morphological tool of skeletonization and counts the skeleton's endpoints as fingers. The second technique uses a radial distance method, which reduces the hand to a one-dimensional representation, to enumerate the fingers. For both methods, the different steps of the algorithms are explained. A comparative study then analyzes the accuracy and speed of both techniques. Through experimental testing under different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain-color background. Finally, the limitations surrounding the enumeration techniques are presented.
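The radial distance idea, counting fingers as peaks in the one-dimensional distance signature around the hand contour, can be sketched in Python rather than Matlab; the synthetic signature below stands in for real contour data, and the threshold is an assumed value:

```python
import numpy as np

def count_fingers(radial: np.ndarray, thresh: float) -> int:
    """Count peaks in a 1-D radial-distance signature of the hand contour.
    A peak is a sample above `thresh` that is a local maximum."""
    n = 0
    for i in range(1, len(radial) - 1):
        if radial[i] > thresh and radial[i] > radial[i - 1] and radial[i] >= radial[i + 1]:
            n += 1
    return n

# Synthetic signature: distance from the palm centroid around the contour,
# with three narrow finger-like peaks over a palm baseline of ~1.0.
theta = np.linspace(0, 2 * np.pi, 360)
signature = (1.0
             + 0.8 * np.exp(-((theta - 1.0) ** 2) / 0.01)
             + 0.9 * np.exp(-((theta - 2.0) ** 2) / 0.01)
             + 0.7 * np.exp(-((theta - 3.0) ** 2) / 0.01))
print(count_fingers(signature, thresh=1.3))  # 3
```

On real images the signature would be computed from the segmented hand contour, and the threshold would separate finger peaks from the palm boundary.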

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 376
2317 Heavy Oil Recovery with Chemical Viscosity-Reduction: An Innovative Low-Carbon and Low-Cost Technology

Authors: Lin Meng, Xi Lu, Haibo Wang, Yong Song, Lili Cao, Wenfang Song, Yong Hu

Abstract:

China has abundant heavy oil resources, and thermal recovery is the main recovery method for heavy oil reservoirs. However, high energy consumption, high carbon emissions, and high production costs make heavy oil thermal recovery unsustainable, so it is urgent to explore a replacement development technology. A low-carbon, low-cost heavy oil recovery technology, chemical viscosity reduction in layer (CVRL), has been developed by the Petroleum Exploration and Development Research Institute of Sinopec by investigating mechanisms, synthesizing products, and improving oil production technologies, as follows: (1) A cascade viscosity mechanism of heavy oil was proposed. Asphaltene and resin grow from free molecules to associative structures and further to bulk aggregations through π-π stacking and hydrogen bonding, which causes the high viscosity of heavy oil. (2) Aiming to break the π-π stacking and hydrogen bonds of heavy oil, a copolymer of N-(3,4-dihydroxyphenethyl) acrylamide and 2-acrylamido-2-methylpropane sulfonic acid was synthesized as a viscosity reducer. It achieves a viscosity reduction rate of >80% without shearing for heavy oil (viscosity < 50,000 mPa·s), whose fluidity in the layer is evidently improved. (3) A hydroxymethyl acrylamide-maleic acid-decanol ternary copolymer self-assembly plugging agent was synthesized. Its particle size is adjustable from 0.1 μm to 2 mm and its volume controllable over 10-500 times, which enables efficient transportation of the viscosity reducer to oil-enriched areas. CVRL has been applied in 400 wells to date, increasing oil production by 470,000 tons, saving 81,000 tons of standard coal, reducing CO2 emissions by 174,000 tons, and reducing production costs by 60%. It promotes the transformation of heavy oil recovery towards low energy consumption, low carbon emissions, and low-cost development.

Keywords: heavy oil, chemical viscosity-reduction, low carbon, viscosity reducer, plugging agent

Procedia PDF Downloads 67
2316 Non-Destructive Test of Bar for Determination of Critical Compression Force Directed towards the Pole

Authors: Boris Blostotsky, Elia Efraim

Abstract:

The phenomenon of buckling of structural elements under compression arises in many cases of loading and must be considered in many structures and mechanisms. In the present work, the method and results of a dynamic test for buckling of a bar loaded by a compression force directed towards a pole are considered. Experimental determination of the critical force for such a system has not been made previously. The tested object is a bar with a semi-rigid connection to the base at one end and with a hinge moving along a circle at the other. The test consists of measuring the natural frequency of the bar at different values of the compression load. The lateral stiffness is calculated from the natural frequency and the reduced mass at the bar's movable end, and the critical load is determined by extrapolating the lateral stiffness values to zero. For the experimental investigation a special test-bed was created that allows stability testing at positive and negative curvature of the movable end's trajectory, as well as varying the rotational stiffness of the connection at the other end. Decreasing the friction at the movable end allows extending the range of applied compression force. The testing method includes: - a methodology for planning the experiment that determines the required number of tests at various load values in the defined range and the type of extrapolating function; - a methodology for experimental determination of the reduced mass at the bar's movable end, including its own mass; - a methodology for experimental determination of the lateral stiffness of the uncompressed bar with a rotational semi-rigid connection at the base. For planning the experiment and for comparing the experimental results with the theoretical values of the critical load, the analytical dependencies of the lateral stiffness of the bar with the defined end conditions on the compression load were derived.
In the particular case of a perfectly rigid connection of the bar to the base, the critical load value corresponds to the solution by S. P. Timoshenko. Good correspondence between the calculated and experimental values was obtained.
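The extrapolation of lateral stiffness to zero can be illustrated with a short numerical sketch, assuming the classical linear relation k(P) = k0(1 - P/Pcr) and purely illustrative values for mass, stiffness, and loads:

```python
import numpy as np

# Simulated test: lateral stiffness drops linearly with compression load,
# k(P) = k0 * (1 - P / P_cr); the measured natural frequency gives k = m * omega^2.
k0, P_cr, m = 5.0e4, 2.0e3, 12.0                 # N/m, N, kg (illustrative)
loads = np.array([0.0, 400.0, 800.0, 1200.0])    # applied compression forces, N
omegas = np.sqrt(k0 * (1 - loads / P_cr) / m)    # "measured" frequencies, rad/s

stiffness = m * omegas ** 2                      # back out k from each frequency
slope, intercept = np.polyfit(loads, stiffness, 1)
P_cr_est = -intercept / slope                    # load at which k reaches zero
print(round(P_cr_est, 1))  # ≈ 2000.0, the assumed critical load
```

Note that the test never approaches the critical load itself, which is the point of the non-destructive method: the critical force is recovered from measurements taken well inside the stable range.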

Keywords: non-destructive test, buckling, dynamic method, semi-rigid connections

Procedia PDF Downloads 353
2315 Low Pricing Strategy of Forest Products in Community Forestry Program: Subsidy to the Forest Users or Loss of Economy?

Authors: Laxuman Thakuri

Abstract:

Community-based forest management is often glorified as one of the best forest management alternatives in developing countries like Nepal. It is also believed that the transfer of forest management authority to local communities is decisive for taking efficient decisions, maximizing forest benefits, and improving people's livelihoods. The community forestry of Nepal likewise aims to maximize forest benefits, share them among the user households, and improve their livelihoods. However, how local communities set the price of forest products, and how the pricing made by the forest user groups affects equitable benefit-sharing among user households and the livelihood improvement objectives, remains largely unexamined by researchers and policy-makers alike. This study examines the local pricing system of forest products in lowland community forestry and its effects on equitable benefit-sharing and livelihood improvement objectives. The study found that forest user groups set the price of forest products based on three criteria: i) costs incurred in harvesting, ii) office operation costs, and iii) livelihood improvement costs through community development and income-generating activities. Since user households have heterogeneous socio-economic conditions, the forest user groups have applied a low pricing strategy even for high-value forest products, so that the access of socio-economically worse-off households can be increased. However, the results of forest product distribution showed that, as a consequence of the low pricing strategy, the access of socio-economically better-off households has been increasing at a higher rate than that of worse-off households, creating inequality. Similarly, the low pricing strategy was also found to undermine the livelihood improvement objectives. The study suggests revising the forest product pricing system in community forest management and reforming community forestry policy as well.

Keywords: community forestry, forest products pricing, equitable benefit-sharing, livelihood improvement, Nepal

Procedia PDF Downloads 293
2314 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites

Authors: Sarra Haouala, Issam Doghri

Abstract:

In this work, a multiscale computational strategy is proposed for the analysis of structures which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework is a combination of mean-field space homogenization, based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and is solved once for each macro time step, whereas the latter is nonlinear and is solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both the micro- and macro-chronological problems to determine the micro- and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory, assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic correction. The proposal is implemented for an extended Mori-Tanaka scheme and verified against finite element simulations of representative volume elements for a number of polymer composite materials subjected to large numbers of cycles.
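The elastic-predictor/plastic-corrector structure of the return mapping can be illustrated in a minimal one-dimensional, rate-independent setting; the paper's formulation is viscoelastic-viscoplastic and tensorial, so the scalar version and material constants here are only an illustrative sketch:

```python
# Minimal 1-D return mapping (predictor/corrector) for rate-independent
# J2 plasticity with linear isotropic hardening; viscous effects omitted.
E, H, sigma_y = 200e3, 10e3, 250.0   # MPa: modulus, hardening modulus, yield stress

def return_map(eps, eps_p, alpha):
    """One strain-driven step: return (stress, plastic strain, hardening variable)."""
    sigma_trial = E * (eps - eps_p)                    # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha) # trial yield function
    if f_trial <= 0.0:                                 # step is purely elastic
        return sigma_trial, eps_p, alpha
    dgamma = f_trial / (E + H)                         # plastic corrector
    sign = 1.0 if sigma_trial > 0 else -1.0
    eps_p += dgamma * sign
    alpha += dgamma
    return E * (eps - eps_p), eps_p, alpha

sigma, ep, a = return_map(0.002, 0.0, 0.0)  # trial stress 400 > 250, so it yields
print(round(sigma, 3))  # 257.143
```

After the corrector, the stress sits exactly on the updated yield surface sigma_y + H*alpha, which is the consistency condition the return mapping enforces.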

Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization

Procedia PDF Downloads 364
2313 The Effect of Technology on Legal Securities and Privacy Issues

Authors: Nancy Samuel Reyad Farhan

Abstract:

Even though international criminal law has grown considerably in the last decades, it remains fragmented and lacks doctrinal cohesiveness. Its concept is described in the doctrine as quite disputable, and there is no concrete definition of the term. In the domestic doctrine, the problem of criminal law issues that arise in the international setting, and of international issues that arise in national criminal law, is underdeveloped both theoretically and practically. To the best of the author's knowledge, there are no studies describing the international aspects of criminal law in a comprehensive way, taking a more expansive view of the subject. This paper presents the results of a part of the doctoral research, undertaking a theoretical framework of international criminal law. It aims at examining the existing terminology on international aspects of criminal law. It demonstrates the differences between the notions of 'international criminal law', 'criminal law international', and 'law international criminal'. It confronts the notion of criminal law with related disciplines and shows their interplay. It specifies the scope of international criminal law. It diagnoses the contemporary legal framework of the international aspects of criminal law, referring both to criminal law issues that arise in the international setting and to international problems that arise in the context of national criminal law. Finally, de lege lata postulates were formulated and a direction of changes in international criminal law was proposed. The adopted research hypothesis assumed that the notion of international criminal law is inconsistent and not understood uniformly, that there is no conformity as to its place within the system of law or its objective and subjective scopes, and that the domestic doctrine does not correspond with international requirements and differs from the international doctrine.
The applied research strategies included, inter alia, a dogmatic and legal approach, an analytical approach, and a comparative approach, as well as desk studies.

Keywords: social networks privacy issues, social networks security issues, social networks privacy precautions measures, social networks security precautions measures

Procedia PDF Downloads 21
2312 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination

Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana

Abstract:

Nowadays, with increasing wafer sizes and decreasing critical dimensions in integrated circuit manufacturing, the modern high-tech microelectronics industry must pay maximum attention to contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross-contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and quantify AMC cross-contamination processes. This work investigates the numerical tools required to study the AMC cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the physical constants were calibrated by minimizing the least-squares distance between the model (the analytical 1-D solution) and the experimental data. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into boundary conditions, using a Dirichlet-to-Neumann switch condition and an interface condition. The methodology is applied first using optimization methods with the analytical solution to define the physical constants, and second using the finite element method, including the adsorption kinetics and the Dirichlet-to-Neumann switch condition.
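The calibration idea, minimizing the least-squares distance between a diffusion model and measured data, can be sketched with a simple one-dimensional finite-difference solver and a grid search; the boundary conditions, grid, and the "measured" profile below are synthetic assumptions, not the paper's FOUP model:

```python
import numpy as np

def simulate(D, n=51, dt=1e-3, steps=200):
    """Explicit finite-difference solve of 1-D diffusion c_t = D*c_xx on [0,1],
    with c = 1 held at x = 0 (contaminated wall) and c = 0 at x = 1."""
    dx = 1.0 / (n - 1)
    c = np.zeros(n)
    c[0] = 1.0
    r = D * dt / dx ** 2          # explicit scheme stable only for r <= 0.5
    for _ in range(steps):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, 0.0
    return c

# "Measured" profile generated with a true D, then D recovered by a grid
# search minimizing the least-squares distance -- the calibration idea.
D_true = 0.05
data = simulate(D_true)
grid = np.linspace(0.01, 0.10, 10)
D_fit = float(min(grid, key=lambda D: np.sum((simulate(D) - data) ** 2)))
print(round(D_fit, 3))  # 0.05
```

In the paper the fitted forward model is the analytical 1-D solution and the unknowns include adsorption constants, but the least-squares calibration loop has the same shape.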

Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick’s law, optimization

Procedia PDF Downloads 500
2311 ELISA Based hTSH Assessment Using Two Sensitive and Specific Anti-hTSH Polyclonal Antibodies

Authors: Maysam Mard-Soltani, Mohamad Javad Rasaee, Saeed Khalili, Abdol Karim Sheikhi, Mehdi Hedayati

Abstract:

Producing specific antibody responses against hTSH is a cumbersome process, owing to the high identity between hTSH and the other members of the glycoprotein hormone family (FSH, LH, and HCG) and between human hTSH and that of the host animals used for antibody production. Therefore, two polyclonal antibodies were purified against two recombinant proteins, and four possible ELISA tests were designed based on these antibodies. These ELISA tests were checked against hTSH and the other glycoprotein hormones, and their sensitivity and specificity were assessed. Bioinformatics tools were used to analyze the immunological properties. After selecting the immunogenic region of the hTSH protein, the C-terminus of beta hTSH was chosen. Two recombinant genes containing these fragments (the first: two repeats of the C-terminus of beta hTSH; the second: tetanus toxin plus the beta hTSH C-terminus) were designed and sub-cloned into the pET32a expression vector. Standard methods were used for protein expression, purification, and verification. Thereafter, white New Zealand rabbits were immunized, and their sera were used for antibody titration, purification, and characterization. Four ELISA tests based on the two antibodies were then employed to assess hTSH and the other glycoprotein hormones, and the results were compared with standard amounts. The results indicated that the desired antigens were successfully designed, sub-cloned, expressed, confirmed, and used for in vivo immunization. The raised antibodies were capable of specific and sensitive hTSH detection, while cross-reactivity with the other members of the glycoprotein hormone family was minimal. Among the four designed tests, the test in which the antibody against the first protein was used as the capture antibody and the antibody against the second protein as the detector antibody did not show any hook effect up to 50 mIU/L.
Both proteins are able to induce highly sensitive and specific antibody responses against hTSH, and one combination of these antibodies has the highest sensitivity and specificity in hTSH detection.
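ELISA readings are commonly converted to concentrations through a fitted standard curve; as a hedged illustration, a four-parameter logistic (4PL) and its inverse are sketched below with invented parameters, since the abstract does not specify the curve model used:

```python
def four_pl(x, a, b, c, d):
    """4-parameter logistic, often used for ELISA standard curves:
    a = response at zero dose, d = response at infinite dose,
    c = mid-point concentration (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_4pl(y, a, b, c, d):
    """Back-calculate concentration from an optical-density reading."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Illustrative parameters (not from the paper): OD rises with hTSH dose.
a, b, c, d = 0.05, -1.2, 5.0, 2.0   # negative b gives an increasing curve
od = four_pl(10.0, a, b, c, d)       # simulated reading at 10 mIU/L
print(round(inverse_4pl(od, a, b, c, d), 6))  # recovers 10.0
```

The hook effect mentioned in the abstract appears as readings that fall back down at very high doses, a regime where a monotone curve like this 4PL no longer back-calculates correctly.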

Keywords: hTSH, bioinformatics, protein expression, cross reactivity

Procedia PDF Downloads 183
2310 Drugstore Control System Design and Realization Based on Programmable Logic Controller (PLC)

Authors: Muhammad Faheem Khakhi, Jian Yu Wang, Salman Muhammad, Muhammad Faisal Shabir

Abstract:

Population growth and the Chinese two-child policy will boost the pharmaceutical market, which will continue to grow for some time to come, and the traditional pharmacy dispensary has been unable to meet the growing medical needs of the people. With the strong support of national policy, the automatic transformation of traditional pharmacies is the trend of the times, and the new type of intelligent pharmacy system will continue to promote the development of the pharmaceutical industry. Against this background, this paper proposes an intelligent storage and automatic drug delivery system based on PLC control and presents the complete design of the lower computer's control system and the host computer's software system. The system can be applied to dispensing work for both Chinese herbal medicines and Western medicines. Firstly, the essentials of an intelligent control system for a pharmacy are discussed; after an analysis of the requirements, the overall scheme of the system design is presented. Secondly, the software and hardware design of the lower computer's control system is introduced, including the selection of the PLC and of the motion control system; the problems of the human-computer interaction module and of the communication between PC and PLC are solved, and the program design and development of the PLC control system are completed. The design of the upper computer's software management system is described in detail: based on E-R diagram analysis, the database is established, the communication protocol between the systems is customized, and C++ Builder is adopted to realize the interface module, supply module, main control module, etc. The paper also gives the implementation of the multi-threaded system and the communication method. Lastly, each module of the lower computer control system is tested; then, after building a test environment, the function test of the upper computer's software management system is completed.
On this basis, the entire control system passes the overall test.
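The host-to-PLC exchange can be mimicked with a tiny threaded socket example; the framed "DISPENSE/ACK" protocol here is invented for illustration and is not the protocol customized in the paper:

```python
import socket
import threading

def plc_stub(server_sock):
    """Minimal PLC stand-in: answer 'DISPENSE:<slot>' with 'ACK:<slot>'."""
    conn, _ = server_sock.accept()
    with conn:
        msg = conn.recv(64).decode().strip()
        if msg.startswith("DISPENSE:"):
            conn.sendall(("ACK:" + msg.split(":")[1] + "\n").encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))      # ephemeral port on localhost
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=plc_stub, args=(server,), daemon=True).start()

# Host PC side: request dispensing from (hypothetical) storage slot A3.
with socket.create_connection(("127.0.0.1", port)) as pc:
    pc.sendall(b"DISPENSE:A3\n")
    reply = pc.recv(64).decode().strip()
server.close()
print(reply)  # ACK:A3
```

Running the stub in a daemon thread mirrors the multi-threaded host design: the user interface stays responsive while a worker thread handles the PLC link.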

Keywords: automatic pharmacy, PLC, control system, management system, communication

Procedia PDF Downloads 302
2309 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined, and the different conventions that the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw in real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education. Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 295
2308 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within the acceptable error bands demanded for the safety of the RTP. The current power-tracking performance can be considered unsatisfactory, with significant room for improvement. Hence, a new core power control design is important to improve the current tracking and regulating performance by controlling the movement of the control rods to suit the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC was presented as a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. The tracking and regulating performance of the conventional controller and TFMPC were compared and analysed using MATLAB. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
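The receding-horizon idea behind MPC can be illustrated on a toy discrete-time model. The sketch below is only an illustration: it assumes a first-order plant y[k+1] = a·y[k] + b·u[k] (the coefficients, the power reference, and the actuator limit are hypothetical values, not the RTP transfer function) and applies the simplest one-step-ahead predictive law, with a saturation standing in for control-rod limits.

```python
def simulate_one_step_mpc(a=0.9, b=0.1, r=1.0, u_max=2.0, steps=50):
    """One-step-ahead predictive control of a first-order model
    y[k+1] = a*y[k] + b*u[k] (an illustrative stand-in for the
    reactor-core transfer function; a, b, r, u_max are assumed)."""
    y, history = 0.0, []
    for _ in range(steps):
        u = (r - a * y) / b              # control that makes the one-step prediction hit r
        u = max(-u_max, min(u_max, u))   # actuator (control-rod movement) limit
        y = a * y + b * u                # plant update
        history.append(y)
    return history

out = simulate_one_step_mpc()
print(round(out[-1], 3))  # settles at the reference r = 1.0
```

While the actuator is saturated the output ramps toward the reference; once the unconstrained control becomes feasible, the predictive law drives the output onto the reference in a single step.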

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 236
2307 Application of Building Information Modelling in Analysing IGBC® Ratings (Sustainability Analyses)

Authors: Lokesh Harshe

Abstract:

The building construction sector accounts for 36% of global energy consumption and 39% of CO₂ emissions. Professionals in the built environment sector have long been aware of the industry’s contribution to CO₂ emissions and are now moving towards more sustainable practices. As a result, many organizations have introduced rating systems that address global warming in the construction sector by ranking construction projects on sustainability parameters. The pre-construction phase of any building project is the most important time to make decisions addressing sustainability aspects. Traditionally, it is very difficult to collect data from different stakeholders and bring it together to form a decision based on factual data for sustainability analyses in the pre-construction phase. Building Information Modelling (BIM) is the solution: a single model results from the collaborative BIM process, in which all information is shared, extracted, communicated, and stored on a single platform that everyone can access to make decisions based on real-time data. The focus of this research is the Indian green rating system IGBC®, with the objective of understanding IGBC® requirements and developing a framework that relates the rating processes to BIM. A hypothetical architectural model of a hostel building is developed using AutoCAD 2019 and Revit Architecture 2019, and the framework is applied to generate sustainability analysis results using Green Building Studio (GBS) and Revit add-ins. The results of a sustainability analysis are generated within a fraction of a minute, which is very quick in comparison with traditional sustainability analysis and may save a considerable amount of time as well as cost. The future scope is to integrate architectural, structural, and MEP models to perform accurate sustainability analyses with inputs from industry professionals working on real-life Green BIM projects.

Keywords: sustainability analyses, BIM, green rating systems, IGBC®, LEED

Procedia PDF Downloads 48
2306 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM

Authors: Yong Min You

Abstract:

Research on the torque ripple of Permanent Magnet Synchronous Motors (PMSMs) has progressed rapidly, as torque ripple affects the noise and vibration of electric vehicles. There are several ways to reduce torque ripple, such as increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, these conventional methods have disadvantages in terms of material cost and productivity. An adequate demagnetization characteristic of the PMSM must also be attained for electric vehicle application. Due to rare-earth supply issues, the demand for Dy-free permanent magnets, which can be applied to PMSMs for electric vehicles, has been increasing. Since Dy-free permanent magnets have lower coercivity, the demagnetization characteristic becomes more significant. To improve the torque ripple as well as the demagnetization characteristics, which are significant parameters for electric vehicle application, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed on the design variables of the unequal air-gap model: the shape of the unequal air-gap and the angle between the V-shaped magnets. The optimization process uses Latin Hypercube Sampling (LHS), the Kriging method, and a Genetic Algorithm (GA). Finite element analysis (FEA) is utilized to analyze the torque and demagnetization characteristics. The torque ripple and demagnetization temperature of the initial 45 kW PMSM model with unequal air-gap are 10% and 146.8 degrees, respectively, which are reaching a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed, and an optimization process is conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was increased by 1.8% while maintaining efficiency. These results show the usefulness of a shape-optimized unequal air-gap PMSM in improving the torque ripple and demagnetization temperature for electric vehicles.
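The first stage of such an optimization chain, Latin Hypercube Sampling, spreads the initial design points so that every variable's range is covered evenly. Below is a minimal pure-Python LHS sketch; the two variable ranges (an air-gap ratio and a V-magnet angle in degrees) are hypothetical, not the paper's actual design bounds.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: each variable's range is split into
    n_samples equal strata, one point is drawn per stratum, and the
    strata are shuffled independently for every variable."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            u = (strata[i] + rng.random()) / n_samples  # point inside stratum, in [0, 1)
            samples[i][d] = lo + u * (hi - lo)
    return samples

# hypothetical design-variable bounds: air-gap ratio and V-magnet angle
pts = latin_hypercube(10, [(0.5, 1.0), (100.0, 140.0)])
```

Each sample could then be evaluated by FEA, the responses fitted with a Kriging surrogate, and the surrogate searched by a GA, as the abstract describes.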

Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple

Procedia PDF Downloads 271
2305 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

Facial expression is an undeniable part of human behavior. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others. A core set of facial actions is likely to occur, or to increase in intensity, when people are in pain. To describe these changes in facial appearance, the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a subset of such actions carries the bulk of the information about pain; on this basis, the Prkachin and Solomon Pain Intensity (PSPI) metric was defined. It is therefore important to note that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological viewpoint and a pain intensity score using the PSPI metric in pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation, presents the observations found with each technique, and offers a brief study on distinguishing features of real and fake pain. The necessity of the study lies in the emerging field of painful face assessment in clinical settings.
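The PSPI metric mentioned above combines a small set of FACS action unit (AU) intensities into a single pain score. A minimal sketch of the published formula, PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, which yields a 0-16 scale:

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin and Solomon Pain Intensity score:
    brow lowering (AU4) + orbital tightening (max of AU6, AU7)
    + levator contraction (max of AU9, AU10) + eye closure (AU43).
    AU intensities are coded 0-5; AU43 is binary (0 or 1)."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

print(pspi(3, 2, 4, 1, 2, 1))  # 3 + 4 + 2 + 1 = 10
```

Automatic pain estimation systems typically regress these AU intensities from face images and then apply exactly this combination.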

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 340
2304 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values

Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou

Abstract:

A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional ground motion IMs have been used to estimate the damage potential to structures, yet none of them has proved able to predict seismic damage adequately. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on integration of spectral values over a range of periods, in an attempt to account for the information that the shape of the acceleration, velocity, or displacement spectrum provides. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure-specific and some non-structure-specific, are defined via integration of spectral values. For this purpose, three R/C buildings, symmetric in plan, are studied. The buildings are subjected to 59 bidirectional earthquake ground motions, with the two horizontal accelerograms of each ground motion applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of these seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimating structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
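One common member of this integration-based IM family is the average (geometric-mean) spectral acceleration over a period band around the fundamental period T1. The sketch below computes it by trapezoidal integration of ln Sa over the band; the band limits 0.2·T1 to 1.5·T1 are illustrative assumptions, not necessarily the bounds used in the paper.

```python
import math

def avg_spectral_accel(periods, sa, t1, lo=0.2, hi=1.5):
    """Averaged spectral acceleration over [lo*T1, hi*T1]: the geometric
    mean of Sa(T), obtained by trapezoidal integration of ln Sa over the
    period band (one common integration-based, structure-specific IM)."""
    a, b = lo * t1, hi * t1
    xs = [t for t in periods if a <= t <= b]
    ys = [math.log(s) for t, s in zip(periods, sa) if a <= t <= b]
    # trapezoidal rule over the retained spectral ordinates
    area = sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))
    return math.exp(area / (xs[-1] - xs[0]))
```

Because the averaging happens in log space, the result reduces to the plain Sa value whenever the spectrum is flat over the band, which makes the measure easy to sanity-check.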

Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings

Procedia PDF Downloads 322
2303 Portuguese Guitar Strings Characterization and Comparison

Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante

Abstract:

The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians. One is stainless steel spring wire; the other is high-carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as a little more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch, mechanical strength and tuning stability (using a universal testing machine), and dimensional control and chemical composition (using a scanning electron microscope). The string dynamical behaviour characterization experiments, covering frequency response, inharmonicity, transient response, and damping phenomena, were made in a monochord test set-up designed and built in-house. The damping factor was determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary. For that purpose, another, improved monochord was set up and a new system identification methodology applied. Due to the complexity of this task, several adjustments were necessary before good experimental data were obtained. In a few cases, dynamical tests were repeated to detect any evolution in damping parameters after the break-in period, when, according to players' experience, a new string gradually sounds less dull until reaching its typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded. The recordings, which include individual notes, scales, chords, and a study piece, will be analysed to characterize potential timbre variations.
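A standard way to determine the damping factor of a string's fundamental from a free-decay (plucked) signal is the logarithmic decrement of successive peak amplitudes. A minimal sketch of that estimate, with a synthetic peak series standing in for measured monochord data:

```python
import math

def damping_from_decay(peaks):
    """Estimate the damping ratio from successive peak amplitudes of a
    free-decay signal via the logarithmic decrement:
    delta = ln(p[i] / p[i+1]), zeta = delta / sqrt(4*pi^2 + delta^2)."""
    decs = [math.log(peaks[i] / peaks[i + 1]) for i in range(len(peaks) - 1)]
    delta = sum(decs) / len(decs)  # average decrement over all peak pairs
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# synthetic exponential decay generated with a known damping ratio of 0.01
zeta = 0.01
delta = 2 * math.pi * zeta / math.sqrt(1 - zeta ** 2)
peaks = [math.exp(-delta * n) for n in range(10)]
print(round(damping_from_decay(peaks), 4))  # prints 0.01
```

Real peak series are noisy, which is why averaging over many peak pairs (or a full system identification, as the abstract describes for the higher harmonics) is preferred over a single pair.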

Keywords: damping factor, music wire, portuguese guitar, string dynamics

Procedia PDF Downloads 549
2302 Polycode Texts in Communication of Antisocial Groups: Functional and Pragmatic Aspects

Authors: Ivan Potapov

Abstract:

Background: The aim of this paper is to investigate polycode texts in the communication of youth antisocial groups. Nowadays, the notion of a text has numerous interpretations. Among all the approaches to defining a text, we must take into account semiotic and cultural-semiotic ones. Rapidly developing IT, globalization, and new ways of coding information increase the role of the cultural-semiotic approach. The development of computer technologies also leads to changes in the text itself, and polycode texts play an increasingly important role in the everyday communication of the younger generation. Therefore, research into the functional and pragmatic aspects of both verbal and non-verbal content is quite important. Methods and Material: For this survey, we applied a combination of four methods of text investigation: intention analysis, content analysis, semantic analysis, and syntactic analysis. Using these methods provided us with information on general text properties, the content of the transmitted messages, and each communicant's intentions. During our research we also established the social background, allowing us to distinguish intertextual connections between certain types of polycode texts. As sources of research material, we used 20 public channels in the popular messenger Telegram and data extracted from smartphones that belonged to arrested members of antisocial groups. Findings: This investigation lets us assert that polycode texts can be characterized as highly intertextual language units. Moreover, we outline a classification of these texts based on communicants' intentions. The most common types of antisocial polycode texts are calls to illegal action and agitation. Furthermore, each type has its own semantic core, depending on the sphere of communication, while the syntactic structure is universal for most polycode texts. Conclusion: Polycode texts play an important role in online communication. The results of this investigation demonstrate that in some social groups the use of these texts has a destructive influence on the younger generation and clearly needs further research.

Keywords: text, polycode text, internet linguistics, text analysis, context, semiotics, sociolinguistics

Procedia PDF Downloads 129
2301 The Economic Burden of Mental Disorders: A Systematic Review

Authors: Maria Klitgaard Christensen, Carmen Lim, Sukanta Saha, Danielle Cannon, Finley Prentis, Oleguer Plana-Ripoll, Natalie Momen, Kim Moesgaard Iburg, John J. McGrath

Abstract:

Introduction: About a third of the world’s population will develop a mental disorder over their lifetime. A mental disorder imposes a large burden of health loss and cost on the individual, and also on society, through treatment costs, production loss, and caregiver costs. The objective of this study is to synthesize the international published literature on the economic burden of mental disorders. Methods: Systematic literature searches were conducted in the databases PubMed, Embase, Web of Science, EconLit, NHS York Database, and PsycINFO using key terms for cost and mental disorders. Searches were restricted to the period from 1980 until May 2019. The inclusion criteria were: (1) cost-of-illness studies or cost analyses, (2) diagnosis of at least one mental disorder, (3) samples based on the general population, and (4) outcome in monetary units. 13,640 publications were screened by title/abstract, and 439 articles were full-text screened by at least two independent reviewers. 112 articles were included from the systematic searches and 31 articles from snowball searching, giving a total of 143 included articles. Results: Information about diagnosis, diagnostic criteria, sample size, age, sex, data sources, study perspective, study period, costing approach, cost categories, discount rate, production loss method, and cost unit was extracted. The vast majority of the included studies were from Western countries, and only a few were from Africa and South America. The disorder group most often investigated was mood disorders, followed by schizophrenia and neurotic disorders; the group least examined was intellectual disabilities, followed by eating disorders. The preliminary results show substantial variety in the perspectives, methodologies, cost components, and outcomes used in the included studies. An online tool is under development that will enable the reader to explore the published information on costs by type of mental disorder, subgroup, country, methodology, and study quality. Discussion: This is the first systematic review synthesizing the economic cost of mental disorders worldwide. The paper will provide an important and comprehensive overview of the economic burden of mental disorders, and the output from this review will inform policymaking.

Keywords: cost-of-illness, health economics, mental disorders, systematic review

Procedia PDF Downloads 127
2300 Effect of Three Resistance Training Methods on Performance-Related Variables of Powerlifters

Authors: K. Shyamnath, K. Suresh Kutty

Abstract:

The purpose of the study was to find out the effect of three resistance training methods on performance-related variables of powerlifters. A total of forty male students (N=40) who had participated in the Kannur University powerlifting championship were selected as subjects. Their ages ranged from 18 to 25 years. The subjects were equally divided into four groups (n=10): three experimental groups and a control group. Experimental Group I underwent traditional resistance training (TRTG), Group II underwent combined traditional resistance training and plyometrics (TRTPG), and Group III underwent combined traditional resistance training and resistance training with high rhythm (TRTHRG). Group IV acted as the control group (CG), receiving no training during the experimental period. The experimental period lasted sixteen weeks, five days per week. Powerlifting performance was assessed by the 1RM test in the squat, bench press, and deadlift. The performance-related variables assessed were chest girth, arm girth, forearm girth, thigh girth, and calf girth. A pre-test and post-test were conducted a day before and two days after the experimental period for all groups. Analysis of covariance (ANCOVA) was applied to analyze significant differences, with the 0.05 level of confidence fixed as the level of significance for the F ratio. The results indicate that all the selected resistance training methods had a significant effect on the performance and selected performance-related variables of powerlifters. Combined traditional resistance training and plyometrics, and combined traditional resistance training and resistance training with high rhythm, both proved better than traditional resistance training alone in improving performance and the selected performance-related variables. There was no significant difference between combined traditional resistance training and plyometrics and combined traditional resistance training and resistance training with high rhythm in improving performance and selected performance-related variables of powerlifters.

Keywords: girth, plyometrics, powerlifting, resistance training

Procedia PDF Downloads 487
2299 Combined Effect of Roughness and Suction on Heat Transfer in a Laminar Channel Flow

Authors: Marzieh Khezerloo, Lyazid Djenidi

Abstract:

Owing to the wide range of micro-device applications, the problem of mixing at small scales is of significant interest. Also, because most of these processes produce heat, strategies for heat removal in such devices need to be developed and implemented. Many studies focus on the effect of roughness or suction on heat transfer performance separately, although it would be useful to combine these two methods to improve heat transfer performance; unfortunately, there is a gap in this area. The present numerical study is carried out to investigate the combined effects of roughness and wall suction on the heat transfer performance of a laminar channel flow, with suction applied on the top and back faces of the roughness element. The study is carried out for different Reynolds numbers, different suction rates, and various locations of the suction area on the roughness. The flow is assumed to be two-dimensional, incompressible, laminar, and steady. The governing Navier-Stokes equations are solved using ANSYS Fluent 18.2 software, and the present results are tested against previous theoretical results. The results show that adding suction enhances the local Nusselt number in the channel. In addition, applying suction on the bottom section of the roughness back face reduces the thickness of the thermal boundary layer, which leads to an increase in the local Nusselt number. This indicates that suction is an effective means of improving the heat transfer rate, as it controls the thickness of the thermal boundary layer. The size and intensity of the vortical motion behind the roughness element also decrease with increasing suction rate, leading to a higher local Nusselt number. It can thus be concluded that suction, strategically located on the roughness element, can control both the recirculation region and the heat transfer rate. Further results will be presented at the conference for the drag coefficient and the effect of adding more roughness elements.
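The local Nusselt number reported in such channel-flow studies can be recovered from the near-wall temperature gradient; when the convection coefficient h is normalized by the fluid conductivity k, k cancels out. A minimal sketch using a one-sided finite difference (the variable names and the sample profile are illustrative, not taken from the paper's CFD setup):

```python
def local_nusselt(T, y, T_wall, T_bulk, H):
    """Local Nusselt number from a near-wall temperature profile:
    Nu = h*H/k with h = -k*(dT/dy)|wall / (T_wall - T_bulk), so k cancels:
    Nu = -(dT/dy)|wall * H / (T_wall - T_bulk).
    T[0], y[0] are the wall node; a one-sided difference approximates
    the wall gradient; H is the channel height used as length scale."""
    dTdy = (T[1] - T[0]) / (y[1] - y[0])
    return -dTdy * H / (T_wall - T_bulk)
```

A thinner thermal boundary layer steepens the wall gradient, which is exactly why the suction described above raises the local Nusselt number.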

Keywords: heat transfer, laminar flow, numerical simulation, roughness, suction

Procedia PDF Downloads 110
2298 Characteristics of Smoked Edible Film Made from Myofibril, Collagen and Carrageenan

Authors: Roike Iwan Montolalu, Henny Adeleida Dien, Feny Mentang, Kristhina P. Rahael, Tomy Moga, Ayub Meko, Siegfried Berhimpon

Abstract:

For the last 20 years, packaging materials derived from petrochemical polymers have been widely used, owing to advantages such as flexibility, strength, transparency, and relatively low price. However, plastic polymers also have various disadvantages, such as monomer migration into the packed material and non-biodegradable waste. Edible film (EF) is a recent development that follows biodegradable packaging materials. Its advantages are that it can be eaten together with the food and can be applied as a coating material for a wide variety of foods, especially snack foods. The aims of this research are to produce and analyze the characteristics of smoked EF made from carrageenan and from the myofibril and collagen of Black Marlin (Makaira indica) industrial waste. Smoked EF was made with the addition of 0.8% liquid smoke. Three biopolymers, i.e., carrageenan, myofibril, and collagen, were used as treatments and homogenized for 1 hour at 1500 rpm. Analyses were carried out on pH and physical properties, i.e., thickness, solubility, tensile strength, % elongation, and water vapor transmission rate (WVTR), as well as on the sensory texture characteristics, i.e., wateriness, firmness, elasticity, hardness, and juiciness of the coated products. The results showed that film thickness increased with concentration and was higher for myofibril protein than for carrageenan and collagen. Collagen and myofibril films were most soluble at a concentration of 6%, while carrageenan was most soluble at 2 to 2.5%. The tensile strength of carrageenan was significantly higher than that of myofibril and collagen, while the collagen film was more elastic in elongation than the carrageenan and myofibril protein films. The water vapor transmission rate of the myofibril protein film was lower than that of the carrageenan and collagen films. In the sensory assessment of texture, carrageenan scored high in elasticity and juiciness, while collagen and myofibril scored high in firmness and hardness.

Keywords: edible film, collagen, myofibril, carrageenan

Procedia PDF Downloads 427
2297 Advantages of Utilizing Post-Tensioned Stress Ribbon Systems in Long Span Roofs

Authors: Samih Ahmed, Guayente Minchot, Fritz King, Mikael Hallgren

Abstract:

The stress ribbon system has numerous advantages that include, but are not limited to, increased overall stiffness, controlled deflections, and reduced material consumption, which in turn reduces load and cost. Nevertheless, its use is usually limited to bridges, in particular pedestrian bridges; this can be attributed to the insufficient space that buildings usually have for the end supports and/or back-stayed cables needed to accommodate the high pull-out forces expected at the cables' ends. In this work, the roof of Västerås Travel Center, which will become one of the longest cable-suspended roofs in the world, was chosen as a case study. The aim was to investigate the optimal technique for modelling the post-tensioned stress ribbon system for the roof structure using the FEM software SAP2000, and to assess any possible reduction in the pull-out forces, deflections, and concrete stresses. Subsequently, a conventional cable-suspended roof was simulated in SAP2000 and compared to the post-tensioned stress ribbon system in order to examine the potential of the latter. Moreover, the effects of temperature loads and support movements on the final design loads were examined. Based on the study, the authors state a few practical recommendations concerning the construction method and the iterative design process required to meet the architectural geometric demands. The results showed that the post-tensioned stress ribbon system reduces the concrete stresses and overall deflections and, more importantly, reduces the pull-out forces and the vertical reactions at both ends by up to 16% and 11%, respectively, which substantially reduces the design forces for the support structures. The magnitude of these reductions was found to be highly correlated with the applied prestressing force, making the size of the prestressing force a key factor in the design.
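The origin of the large pull-out forces at the supports can be seen from the classic parabolic-cable result H = wL²/(8f), relating the horizontal cable force H to the distributed load w, span L, and sag f. A minimal sketch (the numbers in the usage line are illustrative, not Västerås Travel Center values):

```python
def cable_horizontal_force(w, span, sag):
    """Horizontal force in a uniformly loaded parabolic cable,
    H = w * L^2 / (8 * f): the back-of-envelope estimate behind the
    large pull-out forces at stress-ribbon end supports."""
    return w * span ** 2 / (8 * sag)

# e.g. w = 10 kN/m over an 80 m span with 4 m sag
H = cable_horizontal_force(10.0, 80.0, 4.0)  # 2000.0 kN
```

The inverse dependence on sag explains why shallow, architecturally flat roofs generate such high anchorage forces, and why post-tensioning that shares the load changes the support design so markedly.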

Keywords: cable suspended, post-tension, roof structure, SAP2000, stress ribbon

Procedia PDF Downloads 156
2296 Validation of Nutritional Assessment Scores in Prediction of Mortality and Duration of Admission in Elderly, Hospitalized Patients: A Cross-Sectional Study

Authors: Christos Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Konstantina Panouria, Tamta Sirbilatze, Ifigenia Apostolou, Vaggelis Lambas, Christina Kordali, Georgios Mavras

Abstract:

Objectives: Malnutrition in hospitalized patients is related to increased morbidity and mortality. The purpose of our study was to compare various nutritional scores in order to detect the most suitable one for assessing the nutritional status of elderly, hospitalized patients, and to correlate them with mortality and extension of admission duration due to the patients' critical condition. Methods: The sample population included 150 patients (78 men, 72 women, mean age 80±8.2). Nutritional status was assessed by the Mini Nutritional Assessment (MNA, full and short-form), the Malnutrition Universal Screening Tool (MUST), and the short Nutritional Appetite Questionnaire (sNAQ). Sensitivity, specificity, positive and negative predictive values, and ROC curves were assessed after adjustment for the cause of current admission, a known prognostic factor according to previously applied multivariate models. Primary endpoints were mortality (from admission until 6 months afterwards) and duration of hospitalization, compared to national guidelines for closed consolidated medical expenses. Results: Concerning mortality, MNA (short-form and full) and sNAQ had similar, low sensitivity (25.8%, 25.8%, and 35.5%, respectively), while MUST had higher sensitivity (48.4%). In contrast, all the questionnaires had high specificity (94%-97.5%). Short-form MNA and sNAQ had the best positive predictive values (72.7% and 78.6%, respectively), whereas all the questionnaires had similar negative predictive values (83.2%-87.5%). MUST had the highest area under the ROC curve (0.83), in contrast to the remaining questionnaires (0.73-0.77). With regard to extension of admission duration, all four scores had relatively low sensitivity (48.7%-56.7%), specificity (68.4%-77.6%), positive predictive value (63.1%-69.6%), negative predictive value (61%-63%), and ROC curve area (0.67-0.69). Conclusion: The MUST questionnaire is more advantageous in predicting mortality due to its higher sensitivity and ROC curve area. None of the nutritional scores is suitable for predicting extended hospitalization.
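The four reported quantities derive directly from a 2×2 confusion matrix of score-positive/negative versus observed outcome. A minimal sketch (the counts in the usage line are hypothetical, not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion
    matrix (tp = score-positive with the outcome, fp = score-positive
    without it, fn and tn the corresponding score-negatives)."""
    return {
        "sensitivity": tp / (tp + fn),  # outcome cases flagged by the score
        "specificity": tn / (tn + fp),  # non-cases correctly not flagged
        "ppv": tp / (tp + fp),          # flagged patients who had the outcome
        "npv": tn / (tn + fn),          # unflagged patients who did not
    }

# hypothetical counts for one score against 6-month mortality
m = screening_metrics(tp=15, fp=5, fn=16, tn=114)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on outcome prevalence in the sample, which is one reason the paper reports all four alongside the ROC area.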

Keywords: duration of admission, malnutrition, nutritional assessment scores, prognostic factors for mortality

Procedia PDF Downloads 341
2295 Queuing Analysis and Optimization of Public Vehicle Transport Stations: A Case of South West Ethiopia Region Vehicle Stations

Authors: Mequanint Birhan

Abstract:

Modern urban environments are a dynamically growing field in which, notwithstanding shared goals, mutually conflicting interests frequently collide, with a significant impact on a city's socioeconomic standing. Waiting lines and queues are common occurrences, producing extremely long lines of both vehicles and people on incongruous routes, service congestion, customer complaints and dissatisfaction, and a search for alternatives that is sometimes illegal. A root cause is corruption, which leads to traffic jams, unscheduled stopping, packing vehicles beyond their safe carrying capacity, and violation of passengers' rights and freedoms. This study focused on minimizing the time passengers had to wait at public vehicle stations. This applied research employed mixed approaches and multiple data-gathering sources; 166 key informants at the transport stations were sampled using the Slovin formula. The length of time vehicles, including their drivers and auxiliary drivers ('Weyala'), had to wait was also studied. To maximize the service level at the vehicle stations, a queuing model, 'Menaharya', was subsequently devised. Performance was assessed in terms of time, cost, and quality, covering scope and suitability for the intended purposes. The minimal response time for passengers and vehicles queuing to reach their final destinations at the stations of the towns of Tepi, Mizan, and Bonga was determined. A new bus station system was modeled and simulated with Arena simulation software for the chosen study area, yielding an overall improvement of 84%: cost was reduced by 56.25%, waiting time fell from 4 hr to 1.5 hr, and quality, safety, and design-load performance calculations were also carried out. Stakeholders are asked to put the model into practice and monitor the results obtained.
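Two of the quantitative steps above can be sketched analytically: the Slovin sample-size formula, and the station queue treated as an M/M/c system (a rough analytic counterpart to the discrete-event Arena model; the abstract does not state the queue discipline, so M/M/c and all parameters below are illustrative assumptions, not the study's calibrated values):

```python
import math

def slovin(N, e=0.05):
    """Slovin's formula n = N / (1 + N*e^2): sample size for population N
    at margin of error e."""
    return math.ceil(N / (1 + N * e * e))

def mmc_mean_wait(lam, mu, c):
    """Mean time spent in queue (Wq) for an M/M/c station with arrival
    rate lam, per-server service rate mu, and c servers (Erlang C)."""
    a = lam / mu          # offered load (expected number of busy servers)
    rho = a / c           # utilization; the queue is stable only if rho < 1
    if rho >= 1:
        raise ValueError("unstable queue: arrival rate >= service capacity")
    # Erlang C: probability that an arriving customer has to wait
    below = sum(a ** k / math.factorial(k) for k in range(c))
    full = a ** c / (math.factorial(c) * (1 - rho))
    p_wait = full / (below + full)
    return p_wait / (c * mu - lam)

# Illustrative station: 40 minibuses/hour arriving, 15 loaded/hour per
# loading bay, 4 bays; wait reported in minutes.
print(slovin(300))
print(round(mmc_mean_wait(40, 15, 4) * 60, 2))
```

Raising `c` (more loading bays) or `mu` (faster loading) drives `rho` down and shrinks the wait nonlinearly, which is the kind of trade-off the simulation explores.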

Keywords: Arena 14 automatic rockwell, queue, transport services, vehicle stations

Procedia PDF Downloads 72
2294 Collaborative Stylistic Group Project: A Drama Practical Analysis Application

Authors: Omnia F. Elkommos

Abstract:

In the course of teaching stylistics to undergraduate students of the Department of English Language and Literature, Faculty of Arts and Humanities, the linguistic toolkit of theories comes in handy for a better understanding of the different literary genres: poetry, drama, and short stories. The present paper compiles and proposes a model for teaching stylistics: a collaborative group-project technique for use in undergraduate classes of diverse specialisms (Literature, Linguistics, and Translation tracks). Students are first introduced to the linguistic tools and theories suitable for each literary genre. The second step is to apply these linguistic tools to texts. Students are required to watch video performances of the poems or play, for example, and to search the internet for interpretations of the texts by other authorities, using a template (prepared by the researcher) with guided questions that lead them through their analysis. Finally, a practical analysis is written up using a practical-analysis essay template (also prepared by the researcher). In line with collaborative learning, all steps include student-centered activities that address differentiation and accommodate the three specialisms. In selecting the proper tools and in the actual application and analysis discussion, students are given tasks that require collaboration; they work in small groups, and the groups collaborate in seminars and group discussions. At the end of the course/module, students present their work collaboratively and reflect and comment on their learning experience. The module/course uses a drama play that lends itself to the task: ‘The Bond’ by Amy Lowell and Robert Frost. The project results in an interpretation of its theme, characterization, and plot. The linguistic tools are drawn from pragmatics and discourse analysis, among others.

Keywords: applied linguistic theories, collaborative learning, cooperative principle, discourse analysis, drama analysis, group project, online acting performance, pragmatics, speech act theory, stylistics, technology enhanced learning

Procedia PDF Downloads 171
2293 Investigation of Drought Resistance in Iranian Sesamum Germplasm

Authors: Fatemeh Najafi

Abstract:

Drought stress in the world's arid and semiarid regions is the major stress factor limiting the growth and development of sesame (Sesamum indicum L.). In this study, the effects of water stress on some qualitative and quantitative traits of sesame germplasm were examined at the Research Farm of the Seed and Plant Improvement Institute, Karaj, during the crop year. Genotypes were studied in a randomized complete block design with three replications in two environments (moisture stress and normal) with regard to seed weight, capsule weight, grain yield, biomass, plant height, number of capsules per plant, and other traits, and the characteristics were evaluated by combined analysis. Irrigation was scheduled on the basis of a Class A evaporation pan, and drought stress was applied after the flowering stage. The water deficit shortened the growth period: days to full ripening decreased, and the reduction was significant at the five percent level. Drought stress also reduced grain yield and plant biomass; in the combined analysis, genotype differences for these two traits were significant at the one percent level. Under stress, genotypes differed at the five percent confidence level in plot density, grain yield, days to first flowering, and days to 50% capsule formation, and at the one percent level in days to emergence of the first capsule and days to full ripening; other traits were not significant. Correlation of traits under stress showed that the number of seeds per capsule has the greatest impact on yield. Stress sensitivity and tolerance indices were calculated; based on these indices, the Fars and Karaj varieties were identified as the most tolerant of the studied genotypes, while the highest stress sensitivity index was recorded for the Fars genotype.
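The sensitivity and tolerance indices mentioned at the end can be sketched with the standard formulas (the Fischer-Maurer stress susceptibility index and the Fernandez stress tolerance index); the abstract does not name the exact indices used, so treating these as the ones computed is an assumption, and the yields below are invented for illustration:

```python
def stress_indices(yp, ys, mean_yp, mean_ys):
    """Stress susceptibility (SSI) and stress tolerance (STI) indices.

    yp, ys             -- one genotype's yield under normal / stress conditions
    mean_yp, mean_ys   -- trial mean yields under normal / stress conditions
    Lower SSI and higher STI both indicate greater drought tolerance.
    """
    si = 1 - mean_ys / mean_yp        # stress intensity of the whole trial
    ssi = (1 - ys / yp) / si          # Fischer & Maurer (1978)
    sti = (yp * ys) / mean_yp ** 2    # Fernandez (1992)
    return ssi, sti

# Illustrative genotype: 2.0 t/ha under normal irrigation, 1.8 t/ha under
# stress; trial means 2.0 and 1.5 t/ha.
ssi, sti = stress_indices(2.0, 1.8, 2.0, 1.5)
print(f"SSI={ssi:.2f} STI={sti:.2f}")
```

A genotype with SSI < 1 lost proportionally less yield than the trial average under stress, which is how tolerant entries such as the Karaj variety would be flagged.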

Keywords: sesamum, drought, stress, germplasm, resistance

Procedia PDF Downloads 68