Search results for: CSF implementation approaches
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2841

321 The Implementation of Anti-Circumvention Legislations in Thai Copyright System

Authors: Chuencheewin Yimfuang

Abstract:

The WIPO Copyright Treaty (WCT) was established by the World Intellectual Property Organisation (WIPO). This agreement required the contracting nations to provide adequate protection for technological measures in order to prevent massive copyright infringement on the internet. Thailand had to implement the anti-circumvention rules into domestic legislation to comply with this international obligation. The purpose of this paper is to critically discuss the legislative standard under the WCT. It also aims to examine the legal development of technological protection measures in Thailand and to demonstrate that the scope of prohibitions under the Copyright Act 2022 (No. 5) is similar to that of the Digital Millennium Copyright Act 1998 (DMCA) of the United States (US). It is found that the anti-circumvention laws of Thailand prohibit the circumvention of access-control technologies, and that a regulation on trafficking in circumvention devices has been added to the latest version of the Thai Copyright Act. These legislative developments reveal an attempt to reinforce the legal protection of technological measures and copyright holders in order to be in line with global practices. However, the amendment has problems concerning the legal definitions of an effective technological measure and of the prohibited act of circumvention. This vagueness might affect the scope of protection and the boundary of prohibition. In this respect, the DMCA is evaluated and compared in order to derive guidelines for interpretation and enforcement in Thailand. The lessons and experiences learned from this study might be useful to correct the flaws, or at least clarify the ambiguities, embodied in Thai copyright legislation.

Keywords: Legal development, technological protection measure, prohibition, circumvention, Thailand.

320 Advanced Stochastic Models for Partially Developed Speckle

Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije

Abstract:

Speckled images arise when coherent microwave, optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise is complicated by the nature of the noise and is not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves partially developed speckle model where an underlying Poisson point process modulates a Gram-Charlier series of Laguerre weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
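
As a rough illustration of the limiting behaviour described above (and not of the authors' Gram-Charlier/Laguerre derivation), the following Python sketch simulates partially developed speckle as a random phasor sum with a Poisson-distributed number of scatterers per resolution cell, and shows that the intensity statistics approach the exponential (fully developed) case as the mean number of scatterers grows.

```python
# Illustrative sketch only (not the authors' doubly stochastic model): simulate
# partially developed speckle as a random phasor sum with a Poisson number of
# scatterers per resolution cell, and check that the normalised intensity
# approaches an exponential law (contrast -> 1) as the mean count increases.
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_scatterers, n_cells=20_000):
    """Intensity of n_cells resolution cells, each the squared modulus of a
    sum of unit-amplitude phasors with uniformly random phases."""
    counts = rng.poisson(mean_scatterers, size=n_cells)
    intensities = np.empty(n_cells)
    for i, n in enumerate(counts):
        phases = rng.uniform(0.0, 2.0 * np.pi, size=n)
        field = np.sum(np.exp(1j * phases))        # complex resultant field
        intensities[i] = np.abs(field) ** 2        # detected intensity
    return intensities

for mean_n in (2, 5, 50):                          # few vs. many scatterers per cell
    I = speckle_intensity(mean_n)
    I /= I.mean()                                  # normalise to unit mean
    # For fully developed speckle the normalised intensity is Exp(1),
    # i.e. standard deviation (contrast) equal to 1.
    print(f"mean scatterers = {mean_n:3d}   intensity contrast = {I.std():.3f}")
```

The printed contrast tends towards 1 as the mean number of scatterers per cell increases, which is consistent with the Central Limit Theorem argument made in the abstract.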

Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound

319 Factors Affecting Access to Education: The Experiences of Parents of Children Who Are Deaf or Hard of Hearing

Authors: Hanh Thi My Nguyen

Abstract:

The purpose of this research is to examine the experiences of parents of children who are deaf or hard of hearing in supporting their children to access education in Vietnam. Parents play a crucial role in supporting their children to gain full access to education. It has been widely reported that parents of those children confront a range of problems in supporting their children to access education. To the author's best knowledge, there has been a lack of research in the literature exploring the experiences of those parents. This research examines the factors affecting those parents in supporting their children to access education. To conduct the study, a qualitative approach using a phenomenological research design was chosen to explore the central phenomena. Ten parents of children who were diagnosed as deaf or hard of hearing and aged 6-9 years were recruited through the support of the Association of Parents of Children with Hearing Impairment. Participants were interviewed via telephone with a mix of open and closed questions; interviews were audio recorded, transcribed and thematically analysed. The research results show that there are nine main factors that affected the parents in this study in making decisions relating to education for their children, including: lack of information resources, the parents' perspectives on communication approaches, the families' financial capacity, the psychological impact on the participants after their children's diagnosis, the attitude of family members, the attitude of school administrators, the lack of local schools and qualified teachers, and the current education system for the deaf in Vietnam. Apart from those factors, the lack of knowledge of the participants' partners about deaf education and the partners' employment are barriers to educational access and successful communication with their child.

Keywords: Access to education, deaf, hard of hearing, parents experience.

318 Data Mining for Cancer Management in Egypt Case Study: Childhood Acute Lymphoblastic Leukemia

Authors: Nevine M. Labib, Michael N. Malek

Abstract:

Data Mining aims at discovering knowledge out of data and presenting it in a form that is easily comprehensible to humans. One of its useful applications in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), which is the most common type of cancer in children. This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has a great significance in the health care field. Besides, it has a social impact on decreasing the rate of infection in children in Egypt. It also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors. Undirected Knowledge Discovery is used since, in the case of this research project, there is no target field, as the data provided is mainly subjective. This is done in order to quantify the subjective variables. Therefore, the computer will be asked to identify significant patterns in the provided medical data about ALL. This may be achieved through collecting the data necessary for the system, determining the data mining technique to be used for the system, and choosing the most suitable implementation tool for the domain. The research makes use of a data mining tool, Clementine, to apply the Decision Trees technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant medical case details such as patient medical history and diagnosis are analyzed, classified, and clustered in order to improve the disease management.
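
The study itself uses Clementine on real patient records; purely as an illustration of the decision-tree idea, the following Python sketch trains a tree on a hypothetical, invented ALL-style dataset (all column names and values are made up and are not the authors' data).

```python
# Minimal sketch, not the authors' Clementine workflow: a decision-tree
# classifier on a hypothetical childhood-ALL dataset. All fields and values
# below are invented for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient records (in the real study these come from
# specialized cancer institutes in Egypt).
df = pd.DataFrame({
    "age_years":      [3, 5, 7, 2, 9, 4, 6, 8, 3, 5],
    "wbc_count":      [12.0, 60.0, 8.5, 95.0, 7.2, 55.0, 11.0, 80.0, 9.0, 40.0],
    "family_history": [0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
    "risk_group":     ["standard", "high", "standard", "high", "standard",
                       "high", "standard", "high", "standard", "high"],
})

X = df.drop(columns="risk_group")
y = df["risk_group"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))   # human-readable rules
print("hold-out accuracy:", tree.score(X_test, y_test))
```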

Keywords: Data Mining, Decision Trees, Knowledge Discovery, Leukemia.

317 A Refined Application of QFD in SCM, A New Approach

Authors: Nooshin La'l Mohamadi

Abstract:

Due to the fact that in the new century customers tend to express globally increasing demands, networks of interconnected businesses have been established in societies, and the management of such networks seems to be a major key to gaining competitive advantage. Supply chain management encompasses such managerial activities. Within a supply chain, a critical role is played by quality. QFD is a widely utilized tool which serves the purpose not only of bringing quality to the ultimate provision of products or service packages required by the end customer or the retailer, but also of initiating a satisfactory relationship with the initial customer, that is, the wholesaler. However, the wholesalers' cooperation is considerably based on capabilities that are heavily dependent on their locations and existing circumstances. Therefore, it is undeniable that for all companies each wholesaler possesses a specific importance ratio which can heavily influence the figures calculated in the House of Quality (HOQ) in QFD. Moreover, due to the competitiveness of the marketplace today, it has been widely recognized that consumers' expressed demands are highly volatile over production periods. Such instability and proneness to change is very tangible, and taking it into account during the analysis of the HOQ is widely influential and doubtlessly required. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process to consider the wholesalers' reputation, and simultaneously introduces a mortality coefficient for the reliability and stability of the consumers' expressed demands over time. Following this, the paper provides further elaboration on the relevant contributory factors and approaches through the calculation of such coefficients. In the end, the article concludes that an empirical application is needed to achieve broader validity.
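
The abstract does not give the exact formulas for the importance ratio or the mortality coefficient, so the Python sketch below only illustrates the general idea under stated assumptions: an ANP-style wholesaler importance weight and an exponential decay of ageing demand weights applied before the usual House of Quality aggregation. It is not the authors' formulation.

```python
# Illustrative sketch only: assumes (a) an ANP-style importance ratio for the
# wholesaler and (b) an exponential "mortality" coefficient that discounts
# demand weights as they age, before the usual HOQ aggregation.
import numpy as np

# Rows: customer demands, columns: technical characteristics (relationship matrix).
relationship = np.array([
    [9, 3, 0],
    [1, 9, 3],
    [0, 3, 9],
], dtype=float)

base_demand_weight = np.array([0.5, 0.3, 0.2])   # initial demand importance
demand_age_periods = np.array([0, 2, 4])         # how long ago each demand was voiced
mortality_rate = 0.15                            # assumed decay rate per period

# Mortality coefficient: older expressed demands count for less.
mortality = np.exp(-mortality_rate * demand_age_periods)
demand_weight = base_demand_weight * mortality
demand_weight /= demand_weight.sum()

# Wholesaler importance ratio (e.g., taken from an ANP limit supermatrix).
wholesaler_importance = 0.8

# Technical priorities in the HOQ, scaled by the wholesaler's importance ratio.
tech_priority = wholesaler_importance * relationship.T @ demand_weight
print("technical characteristic priorities:", tech_priority.round(3))
```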

Keywords: Analytic Network Process, Quality Function Deployment, QFD flaws, Supply Chain Management

316 Orbit Determination Modeling with Graphical Demonstration

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents the implementation, verification, and graphical demonstration of a software application that can be used swiftly across different preliminary orbit determination methods. A passive orbit determination method is used in this study to determine the location of a satellite or a flying body. It is called passive orbit determination because it depends on observation without the use of any aids (radio or laser) installed on the satellite. In order to understand how these methods work and how accurate their output is when compared with available verification data, the built models help in knowing the different inputs used with each method. Output from the different orbit determination methods (Gibbs, Lambert, and Gauss) is compared and verified against data obtained from the Satellite Tool Kit (STK) application. A modified model including all of the orbit determination methods using the same input is introduced to investigate the output (orbital parameters) of the different models for the same input (azimuth, elevation, and time). The simulation software is implemented using MATLAB. A Graphical User Interface (GUI) application named OrDet is produced using the GUI of MATLAB. It includes all the available inputs and it outputs the current Classical Orbital Elements (COE) of the satellite under observation. The produced COE are then propagated for a complete revolution and plotted in a 3-D view. The modified model, which uses an adapter to allow the same input parameters, passes these parameters to the preliminary orbit determination methods under study. Results from all orbit determination methods yield exactly the same COE output, which shows the equality of concept in determining the satellite's location, but with different numerical methods.
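
To make the kind of computation OrDet performs concrete, the following Python sketch implements one of the preliminary methods the paper compares, Gibbs' method, which recovers the velocity at the middle of three co-planar position vectors; the input vectors are illustrative only and the paper's own tool is a MATLAB GUI, not this code.

```python
# Sketch of one of the compared preliminary orbit determination methods (Gibbs'
# method), written in Python rather than the authors' MATLAB GUI. The position
# vectors below are illustrative only.
import numpy as np

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2

def gibbs(r1, r2, r3, mu=MU_EARTH):
    """Velocity at the middle observation from three co-planar position vectors (km)."""
    r1, r2, r3 = map(np.asarray, (r1, r2, r3))
    n1, n2, n3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    Z12, Z23, Z31 = np.cross(r1, r2), np.cross(r2, r3), np.cross(r3, r1)
    N = n1 * Z23 + n2 * Z31 + n3 * Z12
    D = Z12 + Z23 + Z31
    S = r1 * (n2 - n3) + r2 * (n3 - n1) + r3 * (n1 - n2)
    return np.sqrt(mu / (np.linalg.norm(N) * np.linalg.norm(D))) * (
        np.cross(D, r2) / n2 + S
    )

# Example ECI position vectors (km) of one object at three closely spaced times.
r1 = [-294.32, 4265.10, 5986.70]
r2 = [-1365.50, 3637.60, 6346.80]
r3 = [-2940.30, 2473.70, 6555.80]
v2 = gibbs(r1, r2, r3)

# One classical element, the semi-major axis, from the vis-viva equation.
r2n, v2n = np.linalg.norm(r2), np.linalg.norm(v2)
a = 1.0 / (2.0 / r2n - v2n**2 / MU_EARTH)
print("v2 [km/s]:", v2.round(4), "  a [km]:", round(a, 1))
```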

Keywords: Orbit determination, STK, MATLAB-GUI, satellite tracking.

315 Component Based Framework for Authoring and Multimedia Training in Mathematics

Authors: Ion Smeureanu, Marian Dardala, Adriana Reveiu

Abstract:

The new programming technologies allow for the creation of components which can be automatically or manually assembled to reach a new experience in knowledge understanding and mastering, or in acquiring skills for a specific knowledge area. The project proposes an interactive framework that permits the creation, combination and utilization of components that are specific to mathematical training in high schools. The main framework objectives are:
• authoring lessons by the teacher or the students; all they need are simple operating skills for Equation Editor (or something similar, or LaTeX); the rest are just drag & drop operations, inserting data into a grid, or navigating through menus;
• allowing sonorous presentations of mathematical texts and solving hints (more easily understood by the students);
• offering graphical representations of a mathematical function edited in Equation;
• storing learning objects in a database;
• storing predefined lessons (efficient for expressions and commands, the rest being calculations; this allows a high compression);
• viewing and/or modifying predefined lessons, according to the curricula.
The whole framework is focused on a mathematical expression minicompiler, storing code that will later be used for different purposes (tables, graphics, and optimisations). Regarding the programming technologies used, a Visual C# .NET implementation is proposed. New and innovative digital learning objects for mathematics will be developed; they are capable of interpreting, contextualizing and reacting depending on the architecture where they are assembled.
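
The authors propose a Visual C# .NET implementation; as a language-agnostic illustration of the expression-minicompiler idea (parse once, reuse the compiled form for tables, graphics and optimisations), here is a minimal Python sketch using sympy. It is a sketch of the concept, not the project's code.

```python
# Sketch of the expression "minicompiler" idea: parse an expression string once
# and reuse the compiled form for tables, graphics material, etc. The authors'
# implementation is Visual C# .NET; this Python version is illustrative only.
import sympy as sp

def compile_expression(expr_text: str):
    """Parse a one-variable mathematical expression; return (symbolic form, callable)."""
    x = sp.symbols("x")
    symbolic = sp.sympify(expr_text)          # the stored, reusable "compiled" form
    return symbolic, sp.lambdify(x, symbolic, "math")

symbolic, f = compile_expression("x**2 - 3*x + 2")

# Reuse 1: a table of values for the lesson grid.
table = [(x, f(x)) for x in range(-2, 5)]
print(table)

# Reuse 2: material for a graphical representation (roots, derivative).
x = sp.symbols("x")
print("roots:", sp.solve(symbolic, x), "  derivative:", sp.diff(symbolic, x))
```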

Keywords: Adaptor, automatic assembly, learning component, user control.

314 Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches

Authors: Shilpy Sharma

Abstract:

As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so the need for domain-specific search engines that cover information on specific domains was proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest) [2]. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is "an information access system that allows access to all the information on the web that is relevant to a particular domain". The proposed work shows that, with the help of this approach, relevant data can be extracted with the minimum number of queries fired by the user. It requires a small amount of labeled data and a pool of unlabelled data to which the learning algorithm is applied in order to extract the required data.
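
The paper's system is not publicly described in detail here, so the following Python sketch only shows the generic uncertainty-sampling active-learning loop that the abstract alludes to: start from a small labeled seed, and repeatedly query labels for the pool items the model is least confident about. The dataset is synthetic.

```python
# Generic uncertainty-sampling active-learning loop (a sketch of the idea in the
# abstract, not the authors' system): a small labeled seed plus a large pool of
# unlabeled documents, with the least-confident items queried for labels first.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

labeled = list(rng.choice(len(X), size=10, replace=False))    # small labeled seed
unlabeled = [i for i in range(len(X)) if i not in labeled]    # pool of unlabeled docs

model = LogisticRegression(max_iter=1000)
for round_ in range(10):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)           # least-confident sampling
    query = unlabeled[int(np.argmax(uncertainty))]  # ask the user/oracle for this label
    labeled.append(query)
    unlabeled.remove(query)
    print(f"round {round_}: accuracy on full pool = {model.score(X, y):.3f}")
```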

Keywords: Search engines, machine learning, information retrieval, active logic.

313 Foreign Direct Investment on Economic Growth by Industries in Central and Eastern European Countries

Authors: Shorena Pharjiani

Abstract:

This empirical paper investigates the relationship between FDI and economic growth in 10 selected industries in 10 Central and Eastern European countries over the period 1995 to 2012. Different estimation approaches were used to explore the connection between FDI and economic growth, for example OLS, RE, and FE with and without time dummies. The empirical results obtained lead to several main conclusions. First, the Central and Eastern European countries (CEEC) attracted foreign direct investment, which raised the productivity of the industries it entered. It should be concluded that the linkage between FDI and output growth by industry is positive and significant enough to suggest that foreign firms' participation enhanced the productivity of the industries they occupied. There was an endogeneity problem in the regression, and a fixed effects estimation approach was used which partially corrected the regression analysis in order to make the results less biased. Second, it should be stressed that the results show that time has an important role in making FDI operational for enhancing output growth by industry via total factor productivity. Third, R&D positively affected economic growth, and at the same time it takes some time for research and development to influence economic growth. Fourth, the general trends masked crucial differences at the country level: over the last 20 years, the analysis of the tables and figures at the country level shows that the main recipients of FDI among the 11 Central and Eastern European countries were Hungary, Poland and the Czech Republic. The main reason was that these countries had more open-door policies for attracting FDI. Fifth, according to the graphical analysis, while Hungary had the highest FDI inflow in this region, it was not reflected in GDP growth as much as in other Central and Eastern European countries.
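
As an illustration of the fixed-effects ("within") estimator named among the estimation approaches, the Python sketch below demeans an invented country-level panel by entity and runs OLS on the transformed data; it is not the authors' dataset or exact specification.

```python
# Sketch of the fixed-effects (within) estimator on invented panel data; not the
# paper's dataset, industries, or exact specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
rows = []
for country in range(10):
    country_effect = rng.normal()                  # unobserved country fixed effect
    for year in range(1995, 2013):
        fdi = rng.uniform(0, 10)
        growth = 1.5 + 0.3 * fdi + country_effect + rng.normal(scale=0.5)
        rows.append((country, year, fdi, growth))
panel = pd.DataFrame(rows, columns=["country", "year", "fdi", "growth"])

# Within transformation: demean by country to sweep out the fixed effects.
demeaned = panel.copy()
for col in ("fdi", "growth"):
    demeaned[col] = panel[col] - panel.groupby("country")[col].transform("mean")

fe = sm.OLS(demeaned["growth"], sm.add_constant(demeaned["fdi"])).fit()
print(fe.params)   # the 'fdi' coefficient is the within (FE) estimate, ~0.3 here
```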

Keywords: Central and East European countries (CEEC), economic growth, FDI, panel data.

312 Operation Planning of Concrete Box Girder Bridge by 4D CAD Visualization Techniques

Authors: Mohammad Rohani, Gholamali Shafabakhsh, Abdolhosein Haddad, Ehsan Asnaashari

Abstract:

Visual simulation has emerged as a key planning tool in the built environment because it enables architects, engineers and project managers to visualize the evolution of the construction process before the project actually commences. This provides an efficient technology for reducing time and cost through planning and controlling resources, machines and materials. With the development of infrastructure projects and massive civil constructions such as bridges, urban tunnels and highways, as well as the sensitivity of their construction operations, it is very necessary to apply proper planning methods. Implementation of visual techniques in the management of construction projects can provide a fundamental foundation for projects with massive activities and duplicate items. So, the purpose of this paper is to develop visual simulation management techniques for infrastructure projects such as highway bridges by the use of Four-Dimensional Computer-Aided Design (4D CAD) models. This project simulates the operational assembly line for box-girder concrete bridges, which would be able to optimize the sequence and interaction of project activities and, on the other hand, would minimize any unintended conflicts prior to project start. In this paper, after introducing the various planning methods based on building information models and concrete bridges in highways, an executive case study is demonstrated and then a visual technique (4D CAD) is applied to the case. In the final step, user feedback on interacting with this system is evaluated according to six criteria.

Keywords: 4D application area, Box-Girder concrete bridges, CAD model, visual planning.

311 Large Scale Production of Polyhydroxyalkanoates (PHAs) from Wastewater: A Study of Techno-Economics, Energy Use and Greenhouse Gas Emissions

Authors: Cora Fernandez Dacosta, John A. Posada, Andrea Ramirez

Abstract:

The biodegradable family of polymers polyhydroxyalkanoates is an interesting substitute for conventional fossil-based plastics. However, the manufacturing and environmental impacts associated with their production via intracellular bacterial fermentation are strongly dependent on the raw material used and on energy consumption during the extraction process, limiting their potential for commercialization. Industrial wastewater is studied in this paper as a promising alternative feedstock for waste valorization. Based on results from laboratory and pilot-scale experiments, a conceptual process design, techno-economic analysis and life cycle assessment are developed for the large-scale production of the most common type of polyhydroxyalkanoate, polyhydroxybutyrate. Intracellular polyhydroxybutyrate is obtained via fermentation by the microbial community present in industrial wastewater, and the downstream processing is based on chemical digestion with surfactant and hypochlorite. The economic potential and environmental performance results help to identify bottlenecks and the best opportunities to scale up the process prior to industrial implementation. The outcome of this research indicates that the fermentation of wastewater towards PHB presents advantages compared to traditional PHA production from sugars because of the null environmental burdens and financial costs of the raw material in the bioplastic production process. Nevertheless, process optimization is still required to compete with the petrochemical counterparts.

Keywords: Circular economy, life cycle assessment, polyhydroxyalkanoates, waste valorization.

310 Addressing Global Trauma: Somatic Interventions in PTSD Treatment and Clinician Burnout Prevention

Authors: Nina Kaufmans

Abstract:

Traditional treatments for post-traumatic stress disorder (PTSD) that rely primarily on oral narratives are partially insufficient to prevent PTSD symptoms from recurrence. As a result of the global COVID-19 pandemic, war conflicts, and economic crises, a rising proportion of users of mental health services express somatically based distress in addition to their existing mental health symptoms. Furthermore, the rapid increase in demand for mental health services has resulted in substantial burnout among mental health professionals, which may further impact the quality of services provided and the sustainability of professional life-work balance. This article examines the implications of current developments and challenges in mental health services demand and subsequent responses, as well as the effects of those responses on mental health professionals. The article examines the neurobiological mechanisms underlying traumatic experiences, then discusses the premises for "bottom-up," or somatically oriented, psychotherapy approaches, and concludes with suggestions for clinical skills and interventions to be used by practitioners who work with clients diagnosed with PTSD. In addition, we examine how somatically based psychotherapy interventions performed in sessions might reduce clinician burnout and improve their well-being. We examine how incorporating somatically based therapies into counseling will boost the efficacy of mental health recovery and maintain remission while providing mental health practitioners with chances for self-care.

Keywords: Somatic psychotherapy interventions, trauma counseling, preventing and treating burnout, adults with PTSD, bottom-up skills, the effectiveness of trauma treatment.

309 Systematic Examination of Methods Supporting the Social Innovation Process

Authors: Mariann Veresne Somosi, Zoltan Nagy, Krisztina Varga

Abstract:

Innovation is the key element of economic development and a key factor in social processes. Technical innovations can be identified as prerequisites and causes of social change and cannot be created without the renewal of society. The study of social innovation can be characterised as one of the significant research areas of our day. The study's aim is to identify the process of social innovation, which can be defined by input, transformation, and output factors. This approach divides the social innovation process into three parts: situation analysis, implementation, and follow-up. The methods associated with each stage of the process are illustrated along the chronological line of social innovation. In this study, we have sought to present methodologies that support long- and short-term decision-making, that are easy to apply, that have different complementary content, and that are well visualised for different user groups. When applying the methods, the reference objects are different: county, district, settlement, or specific organisation. The solution proposed by the study supports the development of a methodological combination adapted to different situations. Having reviewed metric and conceptualisation issues, we wanted to develop a methodological combination, along with a change management logic, suitable for structured support to the generation of social innovation in the case of a locality or a specific organisation. In addition to a theoretical summary, in the second part of the study we give a non-exhaustive picture of the two counties located in the north-eastern part of Hungary through specific analyses and case descriptions.

Keywords: Factors of social innovation, methodological combination, social innovation process, supporting decision-making.

308 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility

Authors: Ali Hamadi Dicko, Nicolas Tong-Yette, Benjamin Gilles, François Faure, Olivier Palombi

Abstract:

A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.

Keywords: Hybrid, modeling, fast simulation, lumbar spine.

307 Evaluation of the Mechanical Behavior of a Retaining Wall Structure on a Weathered Soil through Probabilistic Methods

Authors: P. V. S. Mascarenhas, B. C. P. Albuquerque, D. J. F. Campos, L. L. Almeida, V. R. Domingues, L. C. S. M. Ozelim

Abstract:

Retaining slope structures are increasingly considered in geotechnical engineering projects due to extensive urban growth. These kinds of engineering constructions may present instabilities over time and may require reinforcement or even rebuilding of the structure. In this context, statistical analysis is an important tool for decision making regarding retaining structures. This study addresses the failure probability of the construction of a retaining wall over the debris of an old, collapsed one. The new solution's extension length will be approximately 350 m, and it will be located on the margins of Lake Paranoá, in Brasília, the capital of Brazil. The building process must also account for the utilization of the ruins as a caisson. A series of in situ and laboratory experiments defined the local soil strength parameters. A Standard Penetration Test (SPT) defined the in situ soil stratigraphy. Also, the parameters obtained were verified using soil data from a collection of master's and doctoral works from the University of Brasília, which concern soil similar to the local one. Initial studies show that the concrete wall is the proper solution for this case, taking into account the technical, economic and deterministic analyses. On the other hand, in order to better analyze the statistical significance of the factors of safety obtained, a Monte Carlo analysis was performed for the concrete wall and two other initial solutions. A comparison between the statistical and risk results generated for the different solutions indicated that a Gabion solution would better fit the financial and technical feasibility of the project.
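
To illustrate the kind of Monte Carlo calculation mentioned above, the Python sketch below estimates a sliding failure probability for a generic gravity wall with Rankine active pressure and normally distributed soil strength; the geometry and statistics are invented and are not the Lake Paranoá project values, and the authors' model covers more failure modes than this.

```python
# Generic Monte Carlo sketch of a retaining-wall failure probability (sliding
# mode only, Rankine active pressure); all geometry and soil statistics below
# are invented, not the project's values.
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000

H, B = 6.0, 2.0                      # wall height and base width (m)
gamma_soil, gamma_wall = 18.0, 24.0  # unit weights (kN/m3)
W = gamma_wall * B * H               # simplified rectangular wall weight per metre

# Uncertain soil strength (means/standard deviations would come from SPT and
# laboratory statistics in a real study).
phi = np.radians(rng.normal(30.0, 3.0, n_sim))    # friction angle
c_base = np.clip(rng.normal(10.0, 3.0, n_sim), 0, None)  # base adhesion (kPa)

Ka = np.tan(np.pi / 4 - phi / 2) ** 2             # Rankine active coefficient
thrust = 0.5 * Ka * gamma_soil * H**2             # active thrust per metre (kN/m)
resistance = W * np.tan(phi) + c_base * B         # sliding resistance per metre

FS = resistance / thrust
print("mean FS:", round(FS.mean(), 2), "  P(failure) =", (FS < 1.0).mean())
```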

Keywords: Economical analysis, probability of failure, retaining walls, statistical analysis.

306 Implementation of a “DIVA” Concept with Specific ELISA Kits When a Subunit H5 Avian Influenza Vaccine Is Used

Authors: Robles F., Uribe A., Guadarrama A., Castellanos L., González C.

Abstract:

The main objective of this study was to demonstrate that a differentiation of infected and vaccinated animals (DIVA) strategy using different ELISA tests is possible when a subunit vaccine (Haemagglutinin protein) is used to prevent Avian influenza. Special emphasis was placed on the differences in the serological response to different components of the AIV (Nucleoprotein, Neuraminidase, Haemagglutinin, Nucleocapsid) between chickens that were vaccinated with a whole-virus killed vaccine and those vaccinated with a recombinant vaccine. Furthermore, the potential use of this DIVA strategy using ELISA assays to detect Neuraminidase 1 (N1) was analyzed as a strategy for countries where the field virus is H5N1 and the vaccine used is formulated with H5N2. Detection of antibodies to any AIV component in serum was negative for all animals on study days 0-13. At study day 14, the titers of antibodies against Nucleoprotein (NP) and Nucleocapsid (NC) rose in the experimental groups vaccinated with Volvac® AI KV and remained negative throughout the trial in the experimental groups vaccinated with the subunit H5 vaccine; statistically significant differences were observed between these groups (p < 0.05). Seroconversion to either Haemagglutinin or Neuraminidase was evident after 21 days post-vaccination in the experimental groups vaccinated with the respective viral fraction. Regarding the main aim of this study, and according to the results obtained, the use of a combination of different ELISA tests as a DIVA strategy is feasible when vaccination is carried out with a subunit H5 vaccine. It is also possible to use the ELISA kit to detect Neuraminidase (either N1 or N2) as a DIVA concept in countries where H5N1 is present and the vaccination programs are carried out with an H5N2 vaccine.

Keywords: Avian influenza virus, DIVA concept, ELISA assay, subunit H5 vaccine.

305 Environmental Management of the Tanning Industry's Supply Chain: An Integration Model from Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004

Authors: N. Clavijo Buriticá, L. M. Correa López and J. R. Sánchez Rodríguez

Abstract:

The environmental impact caused by industries is an issue that, in the last 20 years, has become very important in terms of society, economics and politics in Colombia. In particular, the tannery process is extremely polluting because of ineffective treatments of, and regulations on, effluent dumping and atmospheric emissions. Considering that, this investigation is intended to propose a management model based on the integration of Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004 that prioritizes the strategic components of the organizations. As a result, a management model will be obtained that provides a strategic perspective through a systemic approach to the tanning process. This will be achieved through the use of multicriteria decision tools, along with Quality Function Deployment and Fuzzy Logic. The strategic approach that the management model embraces, using the alignment of Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004, is an integrated perspective that allows a gradual framing of the tactical and operative elements through the correct setting of the information flow, improving the decision-making process. In that way, Small and Medium Enterprises (SMEs) could improve their productivity and competitiveness and, as an added value, minimize their environmental impact. This improvement is expected to be controlled through a dashboard that helps the organization measure its performance along the implementation of the model in its productive process.

Keywords: Integration, environmental impact, management, systemic organization.

304 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication. To enhance the rate of transmission within the limited bandwidth, the image data must be compressed before transmission. Basically, there are two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression gives more compression than lossless compression, the accuracy of retrieval is lower for lossy compression than for lossless compression. The JPEG and JPEG2000 image compression systems follow Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of the other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance compared to existing wavelet-based coders. For further improvement of compression applications, other coding methods have recently been suggested. An ANN-based approach is one such method. Artificial Neural Networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis on different images is presented for an EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% more accuracy in the retrieved image compared to the existing EZW coding system.
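
The following Python sketch covers only the wavelet stage that EZW-style coders build on (multi-resolution decomposition plus a simple significance threshold); it is not the full zerotree bit-plane coder and not the ANN/backpropagation stage described in the abstract.

```python
# Sketch of the wavelet stage only (not the full EZW coder nor the ANN stage):
# decompose an image, keep only coefficients above a significance threshold,
# and reconstruct to see the cost of discarding the rest.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((256, 256))                      # stand-in for a real image

coeffs = pywt.wavedec2(image, "bior4.4", level=3)   # multi-resolution decomposition
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 95)          # keep the largest 5% of coefficients
arr_thresh = np.where(np.abs(arr) >= threshold, arr, 0.0)

reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(arr_thresh, slices, output_format="wavedec2"), "bior4.4"
)
mse = np.mean((image - reconstructed[:256, :256]) ** 2)
kept = np.count_nonzero(arr_thresh) / arr.size
print(f"kept {kept:.1%} of coefficients, reconstruction MSE = {mse:.5f}")
```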

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.

303 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different Reservoir Engineering methods and tools for reservoir characterization and field development is presented in this paper. Real data from the Jin Gas Field of the L-Basin of Pakistan are used. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e. reservoir characterization and field development) rather than merely determining the permeability and skin parameters. Normally, in the case of reservoir characterization, we rely on well test analysis to some extent, but for field development planning, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well was marked only on the basis of G&G (Geologic and Geophysical) data. The seismic interpretation could not detect one of the boundaries (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a very crucial role in proposing the location of the second well of the newly discovered field. The results from the different well test analysis methods for the Jin Gas Field are also integrated with, and supported by, other Reservoir Engineering tools, i.e. the Material Balance Method and the Volumetric Method. In this way, a comprehensive workflow and algorithm are obtained for integrating the well test analyses with geological and geophysical analyses for reservoir characterization and field development. On the basis of this workflow and algorithm, it was established that the proposed location of the new development well was not justified and that it should be placed elsewhere, away from the southern direction.
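
As a small illustration of the material balance method cited among the supporting tools, the Python sketch below applies the standard p/Z straight-line extrapolation for a volumetric gas reservoir to estimate original gas in place; the pressure and production numbers are invented, not Jin Gas Field data.

```python
# Sketch of the gas material-balance (p/Z) method mentioned as a supporting
# Reservoir Engineering tool; the data below are invented, not field data.
import numpy as np

# Cumulative gas produced Gp (Bscf) and corresponding average reservoir p/Z (psia).
Gp = np.array([0.0, 5.0, 12.0, 20.0, 27.0])
p_over_z = np.array([3500.0, 3310.0, 3045.0, 2740.0, 2475.0])

# For a volumetric depletion gas reservoir, p/Z is linear in Gp:
#   p/Z = (p_i/Z_i) * (1 - Gp/G)  ->  the x-intercept of the line is the OGIP, G.
slope, intercept = np.polyfit(Gp, p_over_z, 1)
ogip = -intercept / slope
print(f"estimated OGIP ~ {ogip:.1f} Bscf (p/Z extrapolated to zero)")
```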

Keywords: Field development, reservoir characterization, reservoir engineering, well test analysis.

302 Characterisation of Wind-Driven Ventilation in Complex Terrain Conditions

Authors: Daniel Micallef, Damien Bounaudet, Robert N. Farrugia, Simon P. Borg, Vincent Buhagiar, Tonio Sant

Abstract:

The physical effects of upstream flow obstructions such as vegetation on the cross-ventilation phenomena of a building are important for issues such as indoor thermal comfort. Modelling such effects in Computational Fluid Dynamics simulations may also be challenging. The aim of this work is to establish the cross-ventilation jet behaviour in such complex terrain conditions, as well as to provide guidelines on the implementation of CFD numerical simulations in order to model complex terrain features such as vegetation in an efficient manner. The methodology consists of on-site measurements on a test cell coupled with numerical simulations. It was found that the cross-ventilation flow is highly turbulent despite the very low velocities encountered internally within the test cells. While no direct measurement of the jet direction was made, the measurements indicate that the flow tends to be reversed from the leeward to the windward side. Modelling such a phenomenon proves challenging and is strongly influenced by how vegetation is modelled. A solid vegetation model tends to predict the direction and magnitude of the flow better than a porous vegetation approach. A simplified terrain model was also shown to provide good comparisons with observation. The findings have important implications for the study of cross-ventilation in complex terrain conditions, since the flow direction does not remain trivial, as it does in the traditional isolated building case.

Keywords: Complex terrain, cross-ventilation, wind driven ventilation, Computational Fluid Dynamics (CFD), wind resource.

301 Revisiting Domestication and Foreignisation Methods: Translating the Quran by the Hybrid Approach

Authors: Aladdin Al-Tarawneh

Abstract:

The Quran, as the sacred book of Islam considered to be the literal word of God (Allah) in Arabic, has been translated into many languages; however, the foreignising, or literal, approach excessively stains the quality and discredits the final product in the eyes of its receptors. Such an approach fails to capture the intended meaning of the Quran and to communicate it in any language. Therefore, this study is conducted to propose a different approach, one that combines other methods according to a hybrid model. Indeed, this study challenges the binary adherence that is widely observed in Translation Studies (TS) in general and in the translation of the Quran in particular. Drawing on the fact that the Quran can be communicated in any language in terms of meaning, and that the translation itself is not sacred, this paper approaches the translation of the Quran by blending different methods, such as domestication and foreignisation, in a systematic way, avoiding the binary choice made by many translators. To reach this aim, the paper has a conceptual part that seeks to elucidate and clarify the main methods employed in TS, and to criticise and modify them to propose the new hybrid approach (the hybrid model) for translating the Quran – that is, the deductive method. To support and validate the outcome of the previous part, a comparative model is employed in order to highlight the differences between the suggested translation and other widely used ones – that is, the inductive method. By applying this methodology, the paper shows that there is a deficiency in communicating the original meaning of the Quran under the foreignising approach. In conclusion, the paper suggests that producing a Quran translation has to take into account the adoption of many techniques to express the meaning of the Quran as understood in the original, and to offer this understanding in English in the most native-like manner to serve the intended target readers.

Keywords: Quran translation, hybrid approach, domestication, foreignisation, hybrid model.

300 A Short Survey of Integrating Urban Agriculture and Environmental Planning

Authors: Rayeheh Khatami, Toktam Hanaei, Mohammad Reza Mansouri Daneshvar

Abstract:

The growth of the agricultural sector is known as an essential way to achieve development goals in developing countries. Urban agriculture is a way to reduce the vulnerability of the world's urban populations to global environmental change. It is a sustainable and efficient system for responding to the environmental, social and economic needs of the city, which leads to urban sustainability. Today, many local and national governments are developing urban agriculture as an effective tool for responding to challenges such as poverty, food security, and environmental problems. In this study, we follow a perspective based on the urban agriculture literature in order to indicate the benefits of urban agriculture for environmental planning strategies in non-western countries like Iran. The methodological approach adopted is based on a qualitative approach and documentary studies. A total of 35 articles (mixed quantitative and qualitative studies) published in relevant journals that focus on this subject were examined in the final analysis. The studies show a wide range of positive benefits of urban agriculture for food security, nutrition outcomes, health outcomes, environmental outcomes, and social capital. However, there was no definitive conclusion about the negative effects of urban agriculture. This paper provides a conceptual and theoretical basis for understanding urban agriculture and its roles in environmental planning, and also summarises the benefits of urban agriculture for researchers, practitioners, and policymakers who seek to create spaces in cities for implementing urban agriculture in the future.

Keywords: Urban agriculture, environmental planning, urban planning, literature.

299 Fuzzy Optimization in Metabolic Systems

Authors: Feng-Sheng Wang, Wu-Hsiung Wu, Kai-Cheng Hsu

Abstract:

The optimization of biological systems, which is a branch of metabolic engineering, has generated a lot of industrial and academic interest for a long time. In the last decade, metabolic engineering approaches based on mathematical optimizations have been used extensively for the analysis and manipulation of metabolic networks. In practical optimization of metabolic reaction networks, designers have to manage the nature of uncertainty resulting from qualitative characters of metabolic reactions, e.g., the possibility of enzyme effects. A deterministic approach does not give an adequate representation for metabolic reaction networks with uncertain characters. Fuzzy optimization formulations can be applied to cope with this problem. A fuzzy multi-objective optimization problem can be introduced for finding the optimal engineering interventions on metabolic network systems considering the resilience phenomenon and cell viability constraints. The accuracy of optimization results depends heavily on the development of essential kinetic models of metabolic networks. Kinetic models can quantitatively capture the experimentally observed regulation data of metabolic systems and are often used to find the optimal manipulation of external inputs. To address the issues of optimizing the regulatory structure of metabolic networks, it is necessary to consider qualitative effects, e.g., the resilience phenomena and cell viability constraints. Combining the qualitative and quantitative descriptions for metabolic networks makes it possible to design a viable strain and accurately predict the maximum possible flux rates of desired products. Considering the resilience phenomena in metabolic networks can improve the predictions of gene intervention and maximum synthesis rates in metabolic engineering. Two case studies will be presented at the conference to illustrate these phenomena.
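
To make the fuzzy multi-objective formulation concrete, the Python sketch below solves a toy max-min (Zimmermann-type) problem on a three-flux network with scipy: the overall degree of satisfaction λ is maximised subject to linear membership constraints on product and biomass fluxes. The network, aspiration levels and bounds are invented; this is not the authors' kinetic models, resilience constraints or case studies.

```python
# Minimal sketch of a fuzzy max-min multi-objective LP on a toy three-flux
# network; aspiration levels and constraints are illustrative assumptions only.
from scipy.optimize import linprog

# Decision vector x = [v_uptake, v_product, v_biomass, lambda].
# Fuzzy goals: product flux aspiration 0..8, biomass flux aspiration 0..6;
# lambda is the overall (worst-case) degree of satisfaction to be maximised.
c = [0.0, 0.0, 0.0, -1.0]               # maximise lambda  ->  minimise -lambda

A_ub = [
    [0.0, -1.0, 0.0, 8.0],              # 8*lambda <= v_product  (mu_product >= lambda)
    [0.0, 0.0, -1.0, 6.0],              # 6*lambda <= v_biomass  (mu_biomass >= lambda)
]
b_ub = [0.0, 0.0]

A_eq = [[1.0, -1.0, -1.0, 0.0]]         # mass balance: uptake = product + biomass
b_eq = [0.0]

bounds = [(0, 10), (0, 8), (0, 6), (0, 1)]   # capacity limits and 0 <= lambda <= 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v_uptake, v_product, v_biomass, lam = res.x
print(f"lambda = {lam:.3f}   product = {v_product:.2f}   biomass = {v_biomass:.2f}")
```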

Keywords: Fuzzy multi-objective optimization problem, kinetic model, metabolic engineering.

298 Leadership Styles in the Hotel Sector and Its Effect on Employees’ Creativity and Organizational Commitment

Authors: Hatem Radwan Ibrahim Radwan

Abstract:

Leadership is crucial for hotel survival and success. It enables hotels to develop and compete effectively. This research intends to explore the implementation of six leadership styles by frontline hotel managers in four-star hotels in Cairo and to assess their impact on employees' creativity and organizational commitment. The leadership patterns considered in this study include: democratic, autocratic, laissez-faire, transformational, transactional, and ethical leadership. A questionnaire was used as the research method to gather data. A structured survey was established and distributed to employees in Cairo's four-star hotels. A total of 284 questionnaire forms were returned and usable for statistical analysis. The results of this study identified that transactional and autocratic leadership were the prevalent styles used in four-star hotels in Cairo. Two leadership styles, transformational and democratic leadership, proved to have a significantly high correlation with, and impact on, employees' creativity and organizational commitment. Besides, laissez-faire leadership was found to have a smaller effect on employees' creativity, and ethical leadership had a lesser influence on employees' commitment. Autocratic leadership had a strong negative correlation with, and a significant impact on, both dependent variables. This research concludes that frontline hotel managers should adopt a transformational and/or democratic leadership style in managing their subordinates.

Keywords: Creativity, hotels, leadership styles, organizational commitment.

297 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was being attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming a common practice within the discipline. Photogrammetry underwater is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments and when underwater visibility is > 1 m, reducing its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and the improvement of optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro5 Hero Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th Century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea, maritime archaeology, underwater photogrammetry, Bronze Age, low visibility.

296 Topping Failure Analysis of Anti-Dip Bedding Rock Slopes Subjected to Crest Loads

Authors: Chaoyi Sun, Congxin Chen, Yun Zheng, Kaizong Xia, Wei Zhang

Abstract:

Crest loads are often encountered in hydropower, highway, open-pit and other engineering rock slopes. Toppling failure is one of the most common deformation failure types of anti-dip bedding rock slopes. Analysis of such failure of anti-dip bedding rock slopes subjected to crest loads has an important influence on engineering practice. Based on the step-by-step analysis approach proposed by Goodman and Bray, a geo-mechanical model was developed, and the related analysis approach was proposed for the toppling failure of anti-dip bedding rock slopes subjected to crest loads. Using the transfer coefficient method, a formulation was derived for calculating the residual thrust at the slope toe and the support force required to meet the requirements of slope stability under crest loads, which provides a scientific reference for the design and support of such slopes. Through slope examples, the influence of crest loads on the residual thrust and the sliding ratio coefficient was investigated for cases of different block widths and slope cut angles. The results show that there exists a critical block width for such slopes. The influence of crest loads on the residual thrust is non-negligible when the block thickness is smaller than the critical value. Moreover, the influence of crest loads on the slope stability increases with the slope cut angle, and the sliding ratio coefficient of anti-dip bedding rock slopes increases with the crest loads. Finally, the theoretical solutions and numerical simulations using the Universal Distinct Element Code (UDEC) were compared, and the consistent results show the applicability of both approaches.

Keywords: Anti-dip slopes, crest loads, stability analysis, toppling failure.

295 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of MOOC (Massive Open Online Course) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated Deep Learning architectures for learner profiling, particularly targeting gender prediction on a FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only – which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, which yields the most performant results compared with a public sentiment analysis dataset that is further used to cross-examine the models' results.
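
For readers unfamiliar with the baseline architecture, the following PyTorch sketch shows a minimal LSTM-over-tokens binary (gender) classifier of the kind used as the sequential baseline; the token ids and labels are synthetic, and the tree-structured LSTM, SPINN and SATA variants discussed in the abstract are not shown.

```python
# Minimal PyTorch sketch of the sequential baseline: an LSTM over comment tokens
# followed by a binary (gender) classifier. Inputs below are random placeholders.
import torch
import torch.nn as nn

class CommentGenderClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)   # two classes

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)
        _, (last_hidden, _) = self.lstm(embedded)    # final hidden state summarises the comment
        return self.classifier(last_hidden[-1])

model = CommentGenderClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random token ids and labels.
tokens = torch.randint(1, 5000, (8, 40))             # batch of 8 comments, 40 tokens each
labels = torch.randint(0, 2, (8,))
logits = model(tokens)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print("toy batch loss:", float(loss))
```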

Keywords: Deep learning, data mining, gender prediction, MOOCs.

294 On the Parameter Optimization of Fuzzy Inference Systems

Authors: Erika Martinez Ramirez, Rene V. Mayorga

Abstract:

Nowadays, more engineering systems are using some kind of Artificial Intelligence (AI) for the development of their processes. Some well-known AI techniques include artificial neural nets, fuzzy inference systems, and neuro-fuzzy inference systems, among others. Furthermore, many decision-making applications base their intelligent processes on Fuzzy Logic, due to the capability of Fuzzy Inference Systems (FIS) to deal with problems that are based on user knowledge and experience. Also, since users have a wide variety of characteristics and generally provide uncertain data, this information can be used and properly processed by a FIS. To properly consider uncertainty and inexact system input values, FIS normally use Membership Functions (MF) that represent a degree of user satisfaction with certain conditions and/or constraints. In order to define the parameters of the MFs, the knowledge of experts in the field is very important. This knowledge defines the MF shape used to process the user inputs, and through fuzzy reasoning and inference mechanisms the FIS can provide an "appropriate" output. However, an important issue immediately arises: How can it be assured that the obtained output is the optimum solution? How can it be guaranteed that each MF has an optimum shape? A viable solution to these questions is the optimization of the MF parameters. In this paper a novel parameter optimization process is presented. The process for FIS parameter optimization consists of five simple steps that can be easily realized off-line. Here, the proposed FIS parameter optimization process is demonstrated by its implementation in an intelligent interface section dealing with the on-line customization/personalization of internet portals applied to e-commerce.
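
The paper's five-step off-line process and e-commerce interface are not reproduced here; the Python sketch below only illustrates the underlying idea, tuning triangular membership-function parameters of a tiny zero-order Sugeno-style FIS so that its output fits reference data, with scipy's general-purpose optimizer standing in for whatever procedure the authors use.

```python
# Sketch of the underlying idea only (not the paper's five-step process): tune
# triangular MF parameters of a tiny Sugeno-style FIS so its output fits
# reference data, using a general-purpose optimizer.
import numpy as np
from scipy.optimize import minimize

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fis_output(x, params):
    """Two rules: 'low' -> 0, 'high' -> 1, defuzzified by weighted average."""
    a1, b1, c1, a2, b2, c2 = params
    w_low, w_high = tri(x, a1, b1, c1), tri(x, a2, b2, c2)
    return (w_low * 0.0 + w_high * 1.0) / (w_low + w_high + 1e-9)

# Reference input/output data (e.g., expert-labelled satisfaction scores).
x_data = np.linspace(0.0, 10.0, 50)
y_data = 1.0 / (1.0 + np.exp(-(x_data - 5.0)))       # target behaviour to match

def loss(params):
    return np.mean((fis_output(x_data, params) - y_data) ** 2)

initial = np.array([0.0, 2.0, 6.0, 4.0, 8.0, 10.0])  # initial MF shapes from "experts"
result = minimize(loss, initial, method="Nelder-Mead")
print("optimised MF parameters:", result.x.round(2), "  MSE:", round(result.fun, 4))
```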

Keywords: Artificial Intelligence, Fuzzy Logic, Fuzzy Inference Systems, Nonlinear Optimization.

293 Realistic Simulation Methodology in Brazil’s New Medical Education Curriculum: Potentialities

Authors: Cleto J. Sauer Jr

Abstract:

Introduction: Brazil’s new national curriculum guidelines (NCG) for medical education were published in 2014, presenting active learning methodologies as a cornerstone. Simulation was initially applied to the training of aviation pilots and is currently applied in the health sciences. The high-fidelity simulator replicates human body anatomy in detail, also reproducing physiological functions, and its use is increasing in medical schools. Realistic Simulation (RS) has pedagogical aspects that are aligned with the teaching concepts of Brazil’s NCG. The main objective of this study is to carry out a narrative review of the aspects of RS that are aligned with the teaching concepts of Brazil’s new NCG. Methodology: A narrative review was conducted, with a search in three databases (PubMed, Embase and BVS) of studies published between 2010 and 2020. Results: After a systematized search, 49 studies were selected and divided into four thematic groups. RS is aligned with the new Brazilian medical curriculum as it is an active learning methodology, providing greater patient safety, uniform teaching, and enhancement of students' emotional skills. RS is based on reflective learning, a teaching concept developed for adult education. Conclusion: RS is a methodology aligned with the NCG teaching concepts and has the potential to assist in the implementation of the new Brazilian medical school curriculum. It is an immersive and interactive methodology, which provides reflective learning in a safe environment for students and patients.

Keywords: Curriculum, high-fidelity simulator, medical education, realistic simulation.

292 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work is on on-line Arabic character recognition, and the principal motivation is to study Arabic manuscripts with on-line technology.

The system is Markovian and can be seen as a Dynamic Bayesian Network (DBN). One of the major interests of such systems resides in training complete models (topology and parameters) from the training data.

Our approach is based on the Dynamic Bayesian Network formalism. DBN theory is a generalization of Bayesian networks to dynamic processes. Among our objectives is finding better parameters, which represent the links (dependences) between the dynamic network variables.

In pattern recognition applications, the structure is usually fixed, which obliges us to admit some strong assumptions (for example, independence between some variables). Our application concerns on-line recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.

The DBN score and mixed DBN models achieve recognition rates of 70.24% and 62.50% respectively, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.
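
A DBN generalizes simpler dynamic models such as the Hidden Markov Model; as a rough illustration of the classification setting (and not of the authors' mixed DBN, neural tester or NOUN database), the Python sketch below trains one Gaussian HMM per character class on synthetic pen-trajectory features and classifies a stroke by maximum likelihood.

```python
# Sketch of the general idea with a simpler dynamic model: one Gaussian HMM (a
# special case of a DBN) per character class, trained on pen-trajectory features
# and used for maximum-likelihood classification. Strokes are synthetic.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def synthetic_strokes(angle, n_sequences=20, length=30):
    """Generate (dx, dy) pen increments drifting in a class-specific direction."""
    drift = np.array([np.cos(angle), np.sin(angle)])
    return [drift + 0.3 * rng.standard_normal((length, 2)) for _ in range(n_sequences)]

classes = {"alif": 0.5 * np.pi, "ba": 0.0}           # two toy character classes
models = {}
for name, angle in classes.items():
    sequences = synthetic_strokes(angle)
    X = np.concatenate(sequences)                    # stacked observations
    lengths = [len(s) for s in sequences]            # per-sequence lengths
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=25)
    models[name] = model.fit(X, lengths)

# Classify an unseen stroke by the model giving the highest log-likelihood.
test = synthetic_strokes(classes["alif"], n_sequences=1)[0]
scores = {name: m.score(test) for name, m in models.items()}
print(scores, "->", max(scores, key=scores.get))
```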

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.
