Search results for: machine learning tools and techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16672

3592 TQM Framework Using Notable Authors Comparative

Authors: Redha M. Elhuni

Abstract:

This paper presents an analysis of the essential characteristics of the TQM philosophy by comparing the work of five notable authors in the field. A framework is produced which gathers the identified TQM enablers under the well-known operations management dimensions of process, business and people. These enablers are linked with sustainable development via balanced scorecard-type economic and non-economic measures. To capture a picture of Libyan companies' efforts to implement TQM, a questionnaire survey was designed and implemented. Results of the survey are presented, showing the main differentiating factors between the sample companies and a way of assessing the gap between the theoretical underpinning and practitioners' undertakings. Survey results indicate that companies experience much difficulty in translating TQM theory into practice. Only a few companies have successfully adopted a holistic approach to the TQM philosophy, and most of these place relatively high emphasis on hard elements compared with the soft issues of TQM. Moreover, even where companies realize the economic outputs, non-economic benefits such as workflow management, skills development and team learning are not realized. Overall, non-economic measures secured low weightings compared with the economic measures. We believe that the framework presented in this paper can help a company concentrate its TQM implementation efforts across the process, system and people management dimensions.

Keywords: TQM, balanced scorecard, EFQM excellence model, oil sector, Libya

Procedia PDF Downloads 397
3591 Removal of Pharmaceuticals from Aqueous Solutions Using Hybrid Ceramic Membranes

Authors: Jenny Radeva, Anke-Gundula Roth, Christian Goebbert, Robert Niestroj-Pahl, Lars Daehne, Axel Wolfram, Juergen Wiese

Abstract:

The technological advantages of ceramic filtration elements were combined with polyelectrolyte films in the development of a hybrid membrane for the elimination of pharmaceuticals from aqueous solutions. Extruded alumina ceramic membranes were coated with nanosized polyelectrolyte films using Layer-by-Layer technology. The polyelectrolyte chains form a network with nano-pores on the ceramic surface and promote the retention of small molecules like pharmaceuticals and microplastics, which cannot be eliminated using standard ultrafiltration methods. Additionally, the polyelectrolyte coat contributes an adjustable (application-dependent) zeta potential for the repulsion of contaminant molecules with opposite charges. Properties such as permeability, bubble point, pore size distribution and zeta potential of the ceramic and hybrid membranes were characterized using various laboratory and pilot tests and compared with each other. The most significant part of the membrane characterization was the investigation of filtration behavior, in which retention of widely used pharmaceuticals such as diclofenac, ibuprofen and sulfamethoxazole was subjected to a series of filtration tests. The presented study offers a new perspective on the removal of nanosized molecules from aqueous solutions and shows the importance of applying combined techniques for the elimination of pharmaceutical contaminants from drinking water.

Keywords: water treatment, hybrid membranes, layer-by-layer coating, filtration, polyelectrolytes

Procedia PDF Downloads 160
3590 Effectiveness of Mobile Health Augmented Cardiac Rehabilitation (MCard) on Health-Related Quality of Life among Post-Acute Coronary Syndrome Patients: A Randomized Controlled Trial

Authors: Aliya Hisam, Zia Ul Haq, Sohail Aziz, Patrick Doherty, Jill Pell

Abstract:

Objective: To determine the effectiveness of mobile health augmented cardiac rehabilitation (MCard) on health-related quality of life (HRQoL) among post-acute coronary syndrome (post-ACS) patients. Methodology: In a randomized controlled trial, post-ACS patients were randomly allocated (1:1) to an intervention group (receiving MCard: counseling, empowerment with self-monitoring devices, and short text messages, in addition to standard post-ACS care) or a control group (standard post-ACS care only). HRQoL was assessed with the generic Short Form-12 (SF-12) and the MacNew quality of life after myocardial infarction (QLMI) tools. Participants were followed for 24 weeks, with data collection and analysis at three time points (baseline, 12 weeks and 24 weeks). Results: At baseline, 160 patients (80 per group; mean age 52.66±8.46 years; 126 males, 78.75%) were recruited, of whom 121 (75.62%) remained and were analyzed at 12 weeks and 119 (74.37%) at 24 weeks. The mean SF-12 physical component score improved significantly in the MCard group at 12 weeks (48.93 vs. control 43.87, p<.001) and 24 weeks (53.52 vs. 46.82, p<.001). The mean SF-12 mental component score also improved significantly in the MCard group at 12 weeks (44.84 vs. control 41.40, p<.001) and 24 weeks (48.95 vs. 40.12, p<.001). At the 12- and 24-week follow-ups, all domains of the MacNew QLMI (social, emotional, physical and global) also improved significantly (p<.001) in the MCard group, unlike the control group. Conclusion: MCard is feasible and effective at improving all domains of HRQoL. There was an improvement in the physical, mental, social, emotional and global domains in the MCard group in comparison with the control group. The addition of MCard programs to post-ACS standard care may improve patient outcomes and reduce the burden on health care settings.

Keywords: acute coronary syndrome, mobile health augmented cardiac rehabilitation (MCard), cardiovascular diseases, cardiac rehabilitation, health-related quality of life, short form 12, MacNew QLMI

Procedia PDF Downloads 163
3589 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials

Authors: Rajesh Kumar G

Abstract:

A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, there is not much historical data available for pediatrics, which is required to validate the assumptions made when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study, together with available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and taking advantage of a hybrid study design, which helps to achieve the study objective smoothly even in the presence of many constraints. This paper explains how a hybrid study design can be planned along with the integrated SEV technique (Simulation, Estimation, Validation): the planned study data are simulated, the desired estimates are obtained using borrowed adult data and Bayesian methods, and the resulting estimates are used to validate the design assumptions. This method of validation can improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible and allowing informed decisions to be made well ahead of study initiation. The technique offers insight into best practices when using data from a historical study and simulated data alike.
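The simulate-estimate-validate loop can be sketched with a deliberately simple conjugate model. Everything below is an illustrative assumption (a normal endpoint with known variance, a power prior with 50% borrowing, invented sample sizes), not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hypothetical inputs (illustrative only) ---
adult_mean, adult_n = 1.8, 400       # summary of a completed adult trial
true_ped_mean, sigma = 1.5, 1.0      # assumed pediatric truth for simulation
ped_n, a0 = 40, 0.5                  # small pediatric sample; 50% borrowing

# S: simulate the planned pediatric study data
ped_data = rng.normal(true_ped_mean, sigma, ped_n)

# E: power-prior estimation -- the adult likelihood is raised to the
# power a0, so the adult data count as a0 * adult_n observations
eff_n = a0 * adult_n + ped_n
post_mean = (a0 * adult_n * adult_mean + ped_n * ped_data.mean()) / eff_n
post_sd = sigma / np.sqrt(eff_n)

# V: validate the design assumption, e.g. by checking interval coverage
# over many simulated replicates (a single replicate is shown here)
print(f"posterior mean ~ {post_mean:.2f} +/- {1.96 * post_sd:.2f}")
```

In a full design exercise, the last step would be repeated over thousands of simulated trials to check operating characteristics (bias, coverage, power) before study initiation.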

Keywords: adaptive design, simulation, borrowing data, Bayesian model

Procedia PDF Downloads 71
3588 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies

Authors: Philipp Galkin

Abstract:

Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc. would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking. There may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two limitations of energy policy research. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we focused on the major laws at the national level that regulate coal in China; however, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document.
This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors.
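The one-entry-per-measure idea can be sketched as a small record type. The field names and the sample measures below are hypothetical illustrations, not the actual KEPD schema or content:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PolicyMeasure:
    document: str                         # legal document containing the measure
    measure_text: str                     # the specific provision
    sector: str                           # e.g. "coal"
    geography: str                        # national / provincial scope
    effective_year: int
    expiry_year: Optional[int] = None     # None = still in force
    target_value: Optional[float] = None  # quantitative component, if any
    tags: List[str] = field(default_factory=list)

measures = [
    PolicyMeasure("Example national coal regulation", "Cap annual output",
                  "coal", "national", 2015, 2020, 3900.0, ["production cap"]),
    PolicyMeasure("Example provincial decree", "Mandate mine safety audits",
                  "coal", "provincial", 2017, None, None, ["safety"]),
]

# Structured queries then become straightforward, e.g. tracing which
# measures were in force in a given year:
in_force_2018 = [m for m in measures
                 if m.effective_year <= 2018
                 and (m.expiry_year is None or m.expiry_year >= 2018)]
```

Once measures are records rather than prose, structural comparison, evolution tracing, and joins against exogenous datasets reduce to ordinary queries.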

Keywords: China, energy policy, policy analysis, policy database

Procedia PDF Downloads 318
3587 Hindi Speech Synthesis by Concatenation of Recognized Handwritten Devanagari Script Using a Support Vector Machines Classifier

Authors: Saurabh Farkya, Govinda Surampudi

Abstract:

Optical Character Recognition (OCR) is one of the current major research areas. This paper focuses on the recognition of Devanagari script and the generation of its sound, and consists of two parts: first, optical character recognition of handwritten Devanagari script; second, speech synthesis of the recognized text. The paper presents an implementation of support vector machines for Devanagari script recognition. The support vector machines were trained with multi-domain features: transform domain and spatial (structural) domain features. The transform domain includes the wavelet features of the character; the structural domain consists of distance profile and gradient features. Segmentation of the text document was done at three levels: line segmentation, word segmentation, and character segmentation. Pre-processing of the characters was done with the help of various operations: Otsu's thresholding, erosion, dilation, filtering and thinning. The algorithm was tested on a self-prepared database, a collection of various handwriting samples. Unicode was then used to convert the recognized Devanagari text into an understandable computer document. The document so obtained is an array of codes that was used to generate digitized text and to synthesize Hindi speech. Phonemes from the self-prepared database were used to generate speech for the scanned document using a concatenation technique.
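The classification stage can be sketched with scikit-learn's SVM interface (a wrapper over LIBSVM, which the paper uses). The bundled digits dataset stands in for the wavelet/distance-profile/gradient feature vectors the authors actually extract, so the numbers are illustrative only:

```python
# Sketch of the SVM classification stage only: multiclass character
# recognition from fixed-length feature vectors.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = datasets.load_digits()  # stand-in for segmented-character features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# RBF-kernel SVC; LIBSVM handles multiclass via one-vs-one internally
clf = svm.SVC(kernel="rbf", gamma="scale", C=10)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

In the paper's pipeline, the predicted class labels would then be mapped to Unicode code points before the concatenative speech-synthesis stage.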

Keywords: Optical Character Recognition (OCR), Text to Speech (TTS), Support Vector Machines (SVM), Library of Support Vector Machines (LIBSVM)

Procedia PDF Downloads 488
3586 Obsession of Time and the New Musical Ontologies. The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Dutica Luminita

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the establishment of new musical ontologies. Summarizing the most important achievements of contemporary composition techniques, her vision of the microform presented in the Concert for Daniel Kientzy, saxophone and orchestra transcends linear, unidirectional time in favour of a flexible, multi-vectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the tint of improvisation, free rhythm, micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects. Hence, too, the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino and alto); in Part III, Daniel Kientzy performs on two saxophones simultaneously. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, music whose drama suggests a real cinematographic movement.

Keywords: archetype, chronogenesis, concert, multiphonics

Procedia PDF Downloads 540
3585 Evaluation of Traditional Housing Texture in Context of Sustainability

Authors: Esra Yaldız, Dicle Aydın

Abstract:

Sustainability is a concept that guides decisions about the future by considering the environment and investigating the harmony and balance between the protection and use of resources. The main objective of sustainability is to create residential areas that are compatible with nature, or to provide continuity by adapting existing residential areas to nature. In this context, historical and traditional areas must be utilized in accordance with sustainability. Traditional housing texture, identified as a traditional architectural product, has been designed on the basis of this concept. The general characteristics of traditional housing within the context of sustainable architecture are its specific dynamics and components and its harmonization with environment and nature. Traditional housing is significant in terms of sustainable architecture because it harmonizes with the natural conditions of the region, the topography, the climate and their context; construction materials are obtained from the environment; traditional techniques and forms are used; and, because the construction materials provide natural insulation, traditional housing creates healthy and comfortable living environments. The basis of this study comprises the guiding principles of traditional housing design in accordance with the principles of sustainability: accommodating topography, climate and geography; accessibility; structuring at the human scale; utilization of green zones; construction materials unique to the region; the form of construction; the building envelope; and the space organization of the dwelling. In this context, the purpose of this study is to consider, with regard to sustainability, the vernacular architecture approaches of the traditional housing textures of the Central Anatolia Region.

Keywords: Anatolia, sustainability, traditional housing texture, vernacular architecture

Procedia PDF Downloads 448
3584 Comparison of the Dose Reached to the Rectum and Bladder in Two Treatment Methods by Tandem and Ovoid and Tandem and Ring in the High Dose Rate Brachytherapy of Cervical Cancer

Authors: Akbar Haghzadeh Saraskanroud, Amir Hossein Yahyavi Zanjani, Niloofar Kargar, Hanieh Ahrabi

Abstract:

Cervical cancer refers to an abnormal growth of cells in the cervix, the lower part of the uterus, which connects to the vagina. Various risk factors, such as human papillomavirus (HPV), a weakened immune system, smoking or breathing in secondhand smoke, reproductive factors, and obesity, play important roles in causing most cervical cancers. When cervical cancer occurs, surgery is often the first treatment option. Other treatments may include chemotherapy and targeted therapy medicines; radiation therapy with high-energy photon beams may also be used, and sometimes combined treatment, including radiation with low-dose chemotherapy, is applied. Intracavitary brachytherapy is an integral part of radiotherapy for locally advanced gynecologic malignancies such as cervical cancer. Different applicator combinations are available for this purpose, two of which are tandem and ovoid and tandem and ring. This study evaluated the dose differences between these two methods in the organs at risk: the rectum, sigmoid, and bladder. Treatment plans were simulated with the Oncentra treatment planning system using tandems, ovoids, and rings of different sizes. CT scan images of 23 patients treated with the HDR-BT Elekta Flexitron system were used for this study. Contouring of the HR-CTV, rectum and bladder was performed for all patients. Then, the doses received by 0.1 and 0.2 cc volumes of the organs at risk were obtained and compared for the two methods: T-Ovoid and T-Ring. Based on the investigations and dose measurements at points A and B and in the volumes specified by the ICRU, comparing tandem and ring with tandem and ovoid, the total dose to the rectum was lower by about 11% and to the bladder by about 7%, while HR-CTV coverage was about 7% better.
Figure 1 shows the decrease in rectum dose in the T-Ring method compared with T-Ovoid; Figure 2 indicates the decrease in bladder dose in the T-Ring method compared with T-Ovoid; and Figure 3 illustrates the HR-CTV coverage in the T-Ring method compared with T-Ovoid.

Keywords: cervical cancer, brachytherapy, rectum, tandem and ovoid, tandem and ring

Procedia PDF Downloads 34
3583 Developing Critical-Process Skills Integrated Assessment Instrument as Alternative Assessment on Electrolyte Solution Matter in Senior High School

Authors: Sri Rejeki Dwi Astuti, Suyanta

Abstract:

The demands placed on assessment in the learning process have been affected by policy changes. Nowadays, assessment emphasizes not only knowledge but also skills and attitude. In practice, however, there are many obstacles to measuring them. This paper describes how to develop an integrated assessment instrument as an alternative assessment to measure critical thinking skills and science process skills on the topic of electrolyte solutions, and describes the instrument's characteristics, such as logical validity and construct validity. The instrument was developed using McIntire's test development model. Data from the development process were acquired through the test development steps and analyzed qualitatively. The initial product was reviewed by three peer reviewers and six expert judges (two subject matter experts, two evaluation experts and two chemistry teachers) to establish logical validity, which was analyzed using Aiken's formula. Construct validity was estimated by exploratory factor analysis. Results showed that the integrated assessment instrument has an Aiken's V of 0.90 and that all items in the instrument are valid according to construct validity.
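Aiken's formula computes V = Σs / (n(c−1)), where s is each rating minus the lowest possible rating, n is the number of raters, and c is the number of rating categories. A minimal sketch with hypothetical ratings (the study's actual expert ratings are not given in the abstract):

```python
# Aiken's V for content validity: V = sum(s) / (n * (c - 1)),
# where s = rating - lowest category, n = raters, c = categories.
def aikens_v(ratings, lo=1, c=5):
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (c - 1))

# Six expert judges rating one item on a 1-5 scale (hypothetical values):
v = aikens_v([5, 5, 4, 5, 5, 4])
print(round(v, 2))  # -> 0.92
```

V ranges from 0 to 1, with values near 1 indicating strong agreement that the item is relevant; each item's V is typically compared against a tabulated critical value for the given n and c.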

Keywords: construct validity, critical thinking skills, integrated assessment instrument, logical validity, science process skills

Procedia PDF Downloads 260
3582 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition leading to irreversible blindness; early diagnosis and appropriate intervention enable patients to retain their sight for longer. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when pressure builds within the eye, damaging the optic nerve and causing deterioration of vision. The disease progresses through different levels of severity, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the method most commonly used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is hard for doctors to reach objective results in the diagnosis and grading process. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma. Using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for doctors' use: pattern-recognition-based computer software that helps doctors make an objective evaluation of their patients. After the development and evaluation of the software, the system is planned to serve doctors in different hospitals.
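The pattern-recognition stage can be sketched as a probabilistic classifier over features extractable from OCT scans. The feature choices (retinal nerve fiber layer thickness, cup-to-disc ratio), the synthetic cohort, and all numbers below are illustrative assumptions, not the paper's actual data or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic cohort: [RNFL thickness (um), cup-to-disc ratio] per eye
healthy = np.column_stack([rng.normal(100, 8, n), rng.normal(0.4, 0.08, n)])
glaucoma = np.column_stack([rng.normal(75, 10, n), rng.normal(0.7, 0.10, n)])
X = np.vstack([healthy, glaucoma])
y = np.array([0] * n + [1] * n)   # 0 = healthy, 1 = glaucoma

clf = LogisticRegression().fit(X, y)

# Probability output supports graded decision support, not just a label
p = clf.predict_proba([[80.0, 0.65]])[0, 1]
print(f"estimated glaucoma probability: {p:.2f}")
```

A probability rather than a hard label lets the software flag borderline early-stage cases for closer clinical review, which matches the objectivity goal described above.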

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 295
3581 The Application of System Approach to Knowledge Management and Human Resource Management Evidence from Tehran Municipality

Authors: Vajhollah Ghorbanizadeh, Seyed Mohsen Asadi, Mirali Seyednaghavi, Davoud Hoseynpour

Abstract:

In the current era, all organizations need knowledge to be able to manage diverse human resources. Creative, dynamic and knowledge-based human resources are an important competitive advantage and the scarcest resource in today's knowledge-based economy. In addition, managers skilled in knowledge management must be aware of the science of human resource management. It is now generally accepted that successful implementation of knowledge management requires dynamic interaction between knowledge management and human resource management; this is also emphasized in the systems approach to knowledge management. Human resource management can complement knowledge management because it aims to empower human resources, the key resource of organizations in the 21st century, in creating, growing and developing the use of other resources. Thus, knowledge is the major capital of every organization, and it is introduced through the process of knowledge management: a systematic approach to creating, receiving, organizing, accessing and using knowledge and learning in the organization. This article aims to define and explain the concepts of knowledge management and human resource management and the importance of these processes and concepts. Literature related to knowledge management and human resource management, as well as related topics, was studied; a theoretical model was then designed, illustrated and provided to explain the factors affecting the relationship between knowledge management and human resource management under a systems approach, and a schematic design is drawn.

Keywords: systemic approach, human resources, knowledge, human resources management, knowledge management

Procedia PDF Downloads 368
3580 Using Presentation as a Means to Develop Communication Skills of Engineering Students

Authors: Urvashi Kaushal

Abstract:

With the entry of multinationals into India, engineering students at Indian universities have the opportunity to work with the best and most innovative industries in the world, but in order to compete in the global job market, they require the added competence of communication skills in English. With workplaces turning global, competence in English can give Indian students the added advantage of beginning their careers in the international market. The present method of teaching English in engineering colleges across Gujarat mostly concentrates on developing writing and reading skills; developing speech becomes a secondary matter, owing to the old practice of lecturing in the classroom and the large size of classes. This paper aims to highlight the importance of improving the speaking skills of engineering students. It also argues that presentations can be used as a viable method to enhance these students' communication skills: presentations force students to plan, prepare, practice and perfect their communication skills, which will enable them to get a foothold in the industry. The paper also discusses one such experiment carried out at the author's institute and the response it received. Such an experimental language-learning approach is bound to have limitations and obstacles; the paper suggests ways to overcome them and strives to develop an interesting means of developing the communication skills of engineering students.

Keywords: engineering, English, presentation, communication skills

Procedia PDF Downloads 437
3579 Evaluation of Microbial Accumulation of Household Wastewater Purified by Advanced Oxidation Process

Authors: Nazlı Çetindağ, Pelin Yılmaz Çetiner, Metin Mert İlgün, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır

Abstract:

Water scarcity is an unavoidable issue impacting an increasing number of individuals daily, a global crisis stemming from swift population growth, urbanization, and excessive resource exploitation. Consequently, solutions that involve the reclamation of wastewater are considered essential. In this context, household wastewater, categorized as greywater, accounts for a significant share of the freshwater used for residential purposes, largely attributable to washing. This type of wastewater comprises diverse constituents, including organic substances, soaps, detergents, solvents, biological components, and inorganic elements such as certain metal ions and particles. The physical characteristics of wastewater vary depending on its source, whether commercial, domestic, or from a hospital setting; the treatment strategy for this type of wastewater therefore necessitates comprehensive investigation and appropriate handling. The advanced oxidation process (AOP) emerges as a promising technique, associated with the generation of reactive hydroxyl radicals that are highly effective in oxidizing organic pollutants. This method takes precedence over others like coagulation, flocculation, sedimentation, and filtration because it avoids undesirable by-products. The current study explored the feasibility of AOP for treating actual household wastewater. To this end, a laboratory-scale device was designed to effectively direct the formed radicals toward organic pollutants, lowering the organic load of the wastewater. The number of microorganisms present in the treated wastewater, in addition to its chemical content, was then analyzed to determine whether the lab-scale device eliminates microbial accumulation with AOP. This is an important parameter, since microbes can indirectly affect human health and machine hygiene. To do this, water samples were taken from treated and untreated conditions and inoculated on general-purpose agar to determine the total plate count. Analysis showed that AOP may be an option for treating household wastewater and lowering microorganism growth.

Keywords: usage of household water, advanced oxidation process, water reuse, modelling

Procedia PDF Downloads 43
3578 Effects of Drying Method and Seed Priming Duration on Coffee Seed and Seedling Quality

Authors: Taju Mohammednur, Tesfaye Megersa, Karta Kaske

Abstract:

Coffee is an economically important cash crop in Ethiopia. However, the conditions under which coffee seeds are dried and processed significantly affect seedling quality and productivity. The objective of this study was to evaluate the effect of pre-sowing treatments and drying methods on the physiological quality of coffee seeds and seedlings. The study included two coffee varieties (74110, 75227), two drying conditions (an under-shade drying room and the open sun), and five durations of seed hydro-priming (6, 8, 18, and 24 hours, and an untreated control). Factorial combinations of the three factors were laid out in a completely randomized design with three replications. Results indicated that the highest germination percentage (91%), emergence rate (90%), and seedling vigor index-I (2236 cm %) were recorded for seeds dried in the under-shade drying room. In contrast, the lowest values of germination percentage, emergence rate, and vigor index were observed for seeds dried in the open sun. There was a significant difference in seed germination based on hydro-priming time, with the highest germination percentage (83%) recorded for seeds soaked for 6 hours, followed by 24 hours (83%). The lowest germination percentage (77%) was recorded for un-soaked seeds. In conclusion, drying seeds under shade is better for coffee seed quality, and hydro-priming improved seedling vigor. However, further investigation into seed priming methods and preservation techniques for primed seeds is necessary to improve coffee seed quality.

Keywords: coffee, germination, seed drying, seed longevity, seed priming

Procedia PDF Downloads 16
3577 Theology of Science and Technology as a Tool for Peace Education

Authors: Jonas Chikelue Ogbuefi

Abstract:

Science and technology have a major impact on societal peace; they offer support to teaching and learning, cut costs, and offer solutions to the current agitations and militancy in Nigeria. Christianity, for instance, not only changed and formed the western world over the past two millennia but still has a substantial role to play in society through liquid ecclesiology. This paper interrogates the impact of the theology of science and technology as a tool for sustaining peace through peace education in Nigeria. The method adopted is a historical and descriptive method of analysis. It was found that a large number of Nigerian citizens lack almost all the basics of a decent standard of living, such as shelter, meaningful employment, and clothing, which is the root cause of the agitations in Nigeria. Based on these findings, the paper contends that the government alone cannot restore peace in Nigeria; its inability to do so calls for all religious actors to be involved. The main thrust and recommendation of this paper is to challenge religious actors to implement the theology of science and technology as a tool for restoring peace, and to network with both the government and the private sector to make funds available to budding and existing entrepreneurs, using science and technology as a tool for peace and economic sustainability. The paper thus views the theology of science and technology as a tool for peace and economic sustainability in Nigeria.

Keywords: theology, science, technology, peace education

Procedia PDF Downloads 79
3576 Structural Health Monitoring-Integrated Structural Reliability Based Decision Making

Authors: Caglayan Hizal, Kutay Yuceturk, Ertugrul Turker Uzun, Hasan Ceylan, Engin Aktas, Gursoy Turan

Abstract:

Monitoring concepts for structural systems have been investigated by researchers for decades, since such tools are quite convenient for planning interventions in structures. Despite considerable development in this regard, the efficient use of monitoring data in reliability assessment and prediction models still needs improvement. More specifically, reliability-based seismic risk assessment of engineering structures may play a crucial role in the post-earthquake decision-making process. After an earthquake, professionals can identify heavily damaged structures based on visual observations. Among the rest, it is hard to identify the ones with minimal signs of damage, even if they have experienced considerable structural degradation. Besides, visual observations are open to human interpretation, which makes the decision process controversial and thus less reliable. In this context, when a continuous monitoring system has been previously installed on the corresponding structure, this decision process can be completed rapidly and with higher confidence by means of the observed data. At this stage, the Structural Health Monitoring (SHM) procedure has an important role, since it makes it possible to estimate the system reliability based on a recursively updated mathematical model. Integrating an SHM procedure into the reliability assessment process therefore comes forward as an important challenge, due to the uncertainties arising in the updated model in case of environmental, material, and earthquake-induced changes. In this context, this study presents a case study on SHM-integrated reliability assessment of continuously monitored, progressively damaged systems. The objective of this study is to get instant feedback on the current state of the structure after an extreme event, such as an earthquake, by involving the observed data rather than visual inspections.
Thus, the decision-making process after such an event can be carried out on a rational basis. In the near future, this could pave the way for the design of self-reporting structures that warn about their condition after an extreme event.

Keywords: condition assessment, vibration-based SHM, reliability analysis, seismic risk assessment

Procedia PDF Downloads 139
3575 Museums: The Roles of Lighting in Design

Authors: Fernanda S. Oliveira

Abstract:

The architectural science of lighting has been mainly concerned with technical aspects and has tended to ignore the psychophysical. There is growing evidence that adopting passive design solutions may contribute to higher satisfaction. This is even more important in countries with higher solar radiation, which should take advantage of favourable daylighting conditions. However, in art museums, the same light that stimulates vision can also cause permanent damage to the exhibits. Visitors want not only to see the objects but also to understand their nature and the artist’s intentions. This paper examines the hypothesis that the more varied and exciting the lighting (and particularly the daylight) in museum rooms, over space and time, the more likely it is that visitors will stay longer, enjoy their experience, and be willing to return. This question is not often considered in museums that privilege artificial lighting, neglecting the various qualities of daylight other than its capacity to illuminate spaces. The findings of this paper show that daylight plays an important role in museum design, affecting how visitors perceive the exhibition space as well as contributing to their overall enjoyment in the museum. Rooms with high luminance means were considered more pleasant (r=.311, p<.05) and cheerful (r=.349, p<.05). Lighting conditions also have a direct effect on the phenomenon of museum fatigue, with overall room quality showing an effect on how tired visitors reported being (r=.421, p<.01). The control and distribution of daylight in museums can therefore help create pleasant conditions for learning, entertainment, and amusement, so that visitors are willing to return.

Keywords: daylight, comfort, museums, luminance, visitor

Procedia PDF Downloads 477
3574 Islamic Credit Risk Management in Murabahah Financing: The Study of Islamic Banking in Malaysia

Authors: Siti Nor Amira Bt. Mohamad, Mohamad Yazis B. Ali Basah, Muhammad Ridhwan B. Ab. Aziz, Khairil Faizal B. Khairi, Mazlynda Bt. Md. Yusuf, Hisham B. Sabri

Abstract:

The understanding of risk and how it arises in Islamic financing is well known in the financial industry through the use of Profit-and-Loss Sharing (PLS), which is present in Islamic financial transactions in order to comply with Shariah rules. The risk in a Murabahah financing contract is the possibility that the counterparty is unable to complete its obligations within the agreed terms; it is therefore called credit or default risk. Credit risk occurs when the client fails to make timely payment after the bank makes complete delivery of the assets. It affects the growth of the bank when the banking business is in no position to take appropriate measures to cover the risk, and the bank may impose a penalty on the outstanding balance. This paper aims to highlight the credit risk determinants and surrounding issues for Islamic banks in Malaysia in terms of Murabahah financing, and how to manage them using proper techniques. Finally, it explores credit risk management concepts that might solve the problems that arise. The study found that credit risk can be managed properly by improving the use of a comprehensive reference checklist on business partners' character and past performance, as well as a comprehensive database. Besides that, credit risk can be prevented by using collateral as security against the risk. We also argue that Shariah guidelines and procedures should be implemented coherently by the banking business, so that the risk is controlled through effective instruments for Islamic modes of financing.

Keywords: Islamic banking, credit risk, Murabahah financing, risk mitigation

Procedia PDF Downloads 449
3573 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

The course of daily human life calls for the need for memories and remembering the time and place of certain events. Recalling memories takes up a substantial amount of time for an individual. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called “neural synchronization.” Sadly, the hippocampus is known to deteriorate with age: proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and neural pathways. For instance, Magnetic Resonance Imaging (MRI) is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain function, a different version of this machine, Functional Magnetic Resonance Imaging (fMRI), is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography (EEG) is another significant way to monitor the human brain.
The EEG is more versatile and cost-efficient than fMRI. An EEG measures the electrical activity generated by the cortical layers of the brain and allows scientists to record brain processes that occur after external stimuli. EEGs have very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, and inspections of neurons and neural pathways have become more intense and detailed as a result.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 77
3572 Corrective Feedback and Uptake Patterns in English Speaking Lessons at Hanoi Law University

Authors: Nhac Thanh Huong

Abstract:

New teaching methods have led to changes in teachers’ roles in the English class, in which teachers’ error correction is an integral part. Language error and corrective feedback have been of interest to many researchers in foreign language teaching. However, the techniques and the effectiveness of teachers’ feedback have been a question of much controversy. This case study was carried out with a view to finding out the patterns of teachers’ corrective feedback and their impact on students’ uptake in English speaking lessons of legal English major students at Hanoi Law University. In order to achieve those aims, the study makes use of classroom observations as the main method of data collection to seek answers to the two following questions: 1. What patterns of corrective feedback occur in English speaking lessons for second-year legal English major students at Hanoi Law University? 2. To what extent does that corrective feedback lead to students’ uptake? The study provided some important findings, among which was a close relationship between corrective feedback and uptake. In particular, recast was the most commonly used feedback type, yet it was the least effective in terms of students’ uptake and repair, while the most successful feedback types, namely meta-linguistic feedback, clarification requests, and elicitation, which led to student-generated repair, were used at a much lower rate by teachers. Furthermore, the study revealed that different types of errors needed different types of feedback, and that the use of feedback depended on the students’ English proficiency level. In light of these findings, a number of pedagogical implications have been drawn in the hope of enhancing the effectiveness of teachers’ corrective feedback on students’ uptake in the foreign language acquisition process.

Keywords: corrective feedback, error, uptake, English speaking lesson

Procedia PDF Downloads 253
3571 Down-Regulated Gene Expression of GKN1 and GKN2 as Diagnostic Markers for Gastric Cancer

Authors: Amer A. Hasan, Mehri Igci, Ersin Borazan, Rozhgar A. Khailany, Emine Bayraktar, Ahmet Arslan

Abstract:

Gastric cancer (GC) has high morbidity and fatality rates in various countries and is still one of the most frequent and deadly diseases. The novel mitogenic and motogenic genes Gastrokine 1 (GKN1) and Gastrokine 2 (GKN2) are highly expressed in the normal stomach epithelium and play an important role in maintaining the integrity and homeostasis of stomach mucosal epithelial cells. Significant losses of copy number and mRNA transcript of GKN1 and GKN2 are frequently observed in all types of gastric cancer. In this study, 47 paired samples, grouped according to the type of gastric cancer and the clinical characteristics of the patients, including gender and average age, were investigated by gene expression analysis and mutation screening using semi-quantitative RT-PCR, SSCP, and nucleotide sequencing techniques. Expression of both the GKN1 and GKN2 genes was significantly reduced (Wilcoxon signed-rank test; p<0.05). The gene screening detected no mutation (no different genotype); it is therefore considered that gene mutations are not the cause of the inactivation of gastrokines. In conclusion, the mRNA expression levels of the GKN1 and GKN2 genes were statistically decreased regardless of the gender, age, or cancer type of the patients. Reduced expression of gastrokine genes seems to occur at the initial steps of cancer development. Further analysis is necessary to establish the relationship between gastric cancer and these candidate diagnostic biomarkers.

Keywords: gastric cancer, diagnostic biomarker, nucleotide sequencing, semi-quantitative RT-PCR

Procedia PDF Downloads 471
3570 Particle Size Dependent Enhancement of Compressive Strength and Carbonation Efficiency in Steel Slag Cementitious Composites

Authors: Jason Ting Jing Cheng, Lee Foo Wei, Yew Ming Kun, Chin Ren Jie, Yip Chun Chieh

Abstract:

The utilization of industrial by-products such as steel slag in cementitious materials not only mitigates environmental impact but also enhances material properties. This study investigates the dual influence of steel slag particle size on the compressive strength and carbonation efficiency of cementitious composites. Through a systematic experimental approach, steel slag particles of varying sizes were incorporated into cement, and the resulting composites were subjected to mechanical and carbonation tests. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) analyses were also conducted. The findings reveal a positive correlation between particle size and compressive strength, attributed to the improved interfacial transition zone and packing density. Conversely, smaller particle sizes exhibited enhanced carbonation efficiency, likely due to the increased surface area facilitating the carbonation reaction. The higher silica and calcium content of the finer particles, confirmed by EDX, contributed to the accelerated carbonation process. This study underscores the importance of particle size optimization in designing sustainable cementitious materials with balanced mechanical performance and carbon sequestration potential. The insights gained from the advanced analytical techniques offer a comprehensive understanding of the mechanisms at play, paving the way for the strategic use of steel slag in eco-friendly construction practices.

Keywords: steel slag, carbonation efficiency, particle size enhancement, compressive strength

Procedia PDF Downloads 56
3569 Nitrogen/Platinum Co-Doped TiO₂ for Enhanced Visible Light Photocatalytic Degradation of Brilliant Black

Authors: Sarre Nzaba, Bulelwa Ntsendwana, Bekkie Mamba, Alex Kuvarega

Abstract:

The elimination of toxic organic compounds from wastewater is currently one of the most important subjects in water pollution control. The discharge of azo dyes such as Brilliant Black (BB) into water bodies has carcinogenic and mutagenic effects on humankind and the ecosystem. Conventional water treatment techniques fail to degrade these dyes completely, thereby posing further problems. Advanced oxidation processes (AOPs) are promising technologies for solving the problem. Anatase-type nitrogen-platinum (N,Pt) co-doped TiO₂ photocatalysts were prepared by a modified sol-gel method using amine-terminated polyamidoamine generation 1 (PG1) as a template and source of nitrogen. SEM/EDX, TEM, XRD, XPS, TGA, FTIR, RS, PL, and UV-Vis were used to characterize the prepared nanomaterials. The synthesized photocatalysts exhibited lower band gap energies compared to commercial TiO₂, revealing a shift of the band gap towards the visible light absorption region. The photocatalytic activity of N,Pt co-doped TiO₂ was measured through the photocatalytic degradation of BB dye. Enhanced photodegradation efficiency of BB was achieved after 180 min of reaction with an initial BB concentration of 50 ppm. This was attributed to the rod-like shape of the materials, the larger surface area, and the enhanced absorption of visible light induced by N,Pt co-doping. The co-doped material also exhibited pseudo-first-order kinetic behaviour, with a half-life and rate constant of 0.37 min and 0.1984 min⁻¹, respectively. Both N-doped TiO₂ and N,Pt co-doped TiO₂ exhibited enhanced photocatalytic performance for the removal of BB from water.
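The pseudo-first-order analysis reported above can be sketched numerically. The following is a minimal example of how an apparent rate constant and half-life are extracted from degradation data; the time points, initial concentration, and rate constant below are hypothetical values for illustration, not the study's measurements.

```python
import numpy as np

# Pseudo-first-order kinetics: ln(C0/C) = k * t
# Hypothetical dye concentrations C (ppm) sampled at times t (min)
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
k_true = 0.0199                      # min^-1, illustrative rate constant
C0 = 50.0                            # initial concentration, ppm
C = C0 * np.exp(-k_true * t)

# The slope of ln(C0/C) versus t is the apparent rate constant
k_fit = np.polyfit(t, np.log(C0 / C), 1)[0]
half_life = np.log(2) / k_fit        # t1/2 = ln 2 / k

print(round(k_fit, 4), round(half_life, 2))
```

The same linear fit applied to experimental concentration readings recovers the rate constant and half-life reported in studies of this kind.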

Keywords: N,Pt co-doped TiO₂, dendrimer, photodegradation, visible-light

Procedia PDF Downloads 165
3568 Surface Roughness Prediction Using Numerical Scheme and Adaptive Control

Authors: Michael K.O. Ayomoh, Khaled A. Abou-El-Hossein., Sameh F.M. Ghobashy

Abstract:

This paper proposes a numerical modelling scheme for surface roughness prediction. The approach is premised on a 3D difference analysis method enhanced with a feedback control loop in which a set of adaptive weights is generated. The surface roughness values utilized in this paper were adapted from [1]. Their experiments were carried out on S55C high-carbon steel. A comparison was then carried out between the proposed technique and those utilized in [1]. The experimental design has three cutting parameters, namely depth of cut, feed rate, and cutting speed, with a twenty-seven-point experimental sample space. The simulation trials conducted using Matlab software fall into two sub-classes: prediction of the surface roughness readings for the non-boundary cutting combinations (NBCC) with the aid of the known surface roughness readings of the boundary cutting combinations (BCC), and the use of the predicted NBCC outputs to recover the surface roughness readings for the BCC. The simulation trial for the NBCC attained total stability in the 7th iteration, i.e., the point where the actual and desired roughness readings are equal, with the error minimized to zero by a set of dynamic weights generated in each subsequent simulation trial. A comparative study among the three methods showed that the proposed difference analysis technique with adaptive weights from feedback control produced much more accurate output than the abductive and regression analysis techniques presented in [1].
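The feedback idea described above, an adaptive weight updated until the predicted roughness matches the desired reading and the error goes to zero, can be illustrated with a minimal sketch. The roughness values, learning rate, and gradient-style update rule below are assumptions for illustration only, not the authors' Matlab implementation.

```python
# Drive the prediction error (desired - weight * predicted) to zero by
# iteratively adjusting a single adaptive weight.
desired = 2.45           # hypothetical measured roughness reading (micrometres)
predicted = 1.80         # hypothetical initial model prediction
weight, rate = 1.0, 0.3  # adaptive weight and learning rate (assumed values)

for iteration in range(1, 20):
    error = desired - weight * predicted
    if abs(error) < 1e-6:
        break            # actual and desired readings are effectively equal
    weight += rate * error * predicted  # gradient-style weight update

print(iteration, round(weight * predicted, 6))
```

With these assumed values the loop reaches a stable state in a handful of iterations, mirroring the convergence behaviour the abstract describes for its dynamic weights.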

Keywords: difference analysis, surface roughness, mesh analysis, feedback control, adaptive weight, boundary element

Procedia PDF Downloads 618
3567 Development and Evaluation of Naringenin Nanosuspension to Improve Antioxidant Potential

Authors: Md. Shadab, Mariyam N. Nashid, Venkata Srikanth Meka, Thiagarajan Madheswaran

Abstract:

Naringenin (NAR) is a naturally occurring plant flavonoid, found predominantly in citrus fruits, that possesses a wide range of pharmacological properties including antioxidant, anti-inflammatory, cholesterol-lowering, and anticarcinogenic activities. However, despite the therapeutic potential of naringenin shown in a number of animal models, its clinical development has been hindered by its low aqueous solubility, slow dissolution rate, and inefficient transport across biological membranes, resulting in low bioavailability. Naringenin nanosuspensions were produced with Tween® 80 as the stabilizer by high-pressure homogenization. The nanosuspensions were characterized with regard to size (photon correlation spectroscopy, PCS), size distribution, charge (zeta potential measurements), morphology, short-term physical stability, dissolution profile, and antioxidant potential. A nanocrystal PCS size of about 500 nm was obtained after 20 homogenization cycles at 1500 bar. Short-term stability was assessed by storage of the nanosuspensions at 4 °C, room temperature, and 40 °C. Results showed that the naringenin nanosuspension was physically unstable, due to large fluctuations in particle size and zeta potential after 30 days. The naringenin nanosuspension demonstrated higher drug dissolution (97.90%) compared to naringenin powder (62.76%) after 120 minutes of testing. The nanosuspension also showed increased antioxidant activity compared to naringenin powder, with percentage DPPH radical scavenging activities of 49.17% and 31.45%, respectively, at the lowest DPPH concentration.
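The DPPH radical scavenging percentages quoted above follow the standard formula scavenging % = (A_control - A_sample) / A_control * 100, computed from absorbance readings. A minimal sketch, with hypothetical absorbance values chosen purely for illustration (they are not the study's raw data):

```python
def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """Percentage of DPPH radicals scavenged, from absorbance readings."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.8          # absorbance of the DPPH solution without antioxidant
a_nanosuspension = 0.40664   # hypothetical absorbance with nanosuspension
a_powder = 0.5484            # hypothetical absorbance with raw powder

print(round(dpph_scavenging(a_control, a_nanosuspension), 2))
print(round(dpph_scavenging(a_control, a_powder), 2))
```

A lower sample absorbance means more radicals were scavenged, which is why the nanosuspension's reading translates into the higher activity percentage.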

Keywords: bioavailability, naringenin, nanosuspension, oral delivery

Procedia PDF Downloads 324
3566 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models

Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen

Abstract:

Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, contributing to economic and social well-being. Due to this, the success, growth, and competitiveness of SMEs are critical. However, many factors undermine this, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages, and poor management. The Fourth Industrial Revolution offers the SME sector new tools and opportunities, such as digital transformation and business model innovation (BMI), to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data, and analytics can significantly improve business efficiencies, value propositions, and customer experiences, and digital transformation can thereby contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in their participation in digital transformation. Extant research lacks conceptual and empirical work on how digital transformation drives BMI and the impact this has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this is impacting their growth, competitiveness, and overall business performance. An empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs, to achieve this purpose. Structural equation modeling is used, since it is a multivariate statistical technique for analysing structural relationships and a suitable method to test the hypotheses in the model. Empirical research is needed to gather more insight into how and whether SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers, and employees at all levels.
The findings will indicate whether digital transformation can indeed impact the growth, competitiveness, and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will exhibit how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant topic in management studies by analysing the impact of digital transformation on BMI in a large number of SMEs that differ distinctly in economic and cultural factors.

Keywords: business models, business model innovation, digital transformation, SMEs

Procedia PDF Downloads 232
3565 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy’s law for application to multiphase flow. Relative permeability and capillary pressures are used for the effective characterisation of large-scale multiphase flow in hydrocarbon recovery, and these parameters are acquired via special core flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers for the evaluation of these parameters. However, core flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy’s law, are satisfied. This makes it imperative to apply core-flooding simulations, in which analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively, and at a relatively fast pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. First, basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable for the computation of relative permeabilities and capillary pressures at steady or unsteady state and in drainage or imbibition processes in the oil and gas industry, when compared to other methods.
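The abstract does not name the correlations that gave the best fit, but one of the standard relative permeability models fitted by core-flooding simulators is the Corey correlation. A minimal sketch follows; the end-point saturations, maximum permeabilities, and exponents are illustrative assumptions, not values from this study.

```python
def corey_krw(sw, swc=0.2, sor=0.15, krw_max=0.4, nw=3.0):
    """Water relative permeability from the Corey correlation."""
    swn = (sw - swc) / (1.0 - swc - sor)   # normalized water saturation
    swn = min(max(swn, 0.0), 1.0)          # clamp to the mobile range [0, 1]
    return krw_max * swn ** nw

def corey_kro(sw, swc=0.2, sor=0.15, kro_max=1.0, no=2.0):
    """Oil relative permeability from the same normalized saturation."""
    swn = (sw - swc) / (1.0 - swc - sor)
    swn = min(max(swn, 0.0), 1.0)
    return kro_max * (1.0 - swn) ** no

# At connate water saturation the water curve is zero and the oil curve maximal
print(corey_krw(0.2), corey_kro(0.2))
```

Fitting such correlations to measured pressure-drop and production data is, in essence, what the simulation workflow described above automates for drainage and imbibition cases.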

Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures

Procedia PDF Downloads 439
3564 Effectiveness of Lowering the Water Table as a Mitigation Measure for Foundation Settlement in Liquefiable Soils Using 1-g Scale Shake Table Test

Authors: Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed

Abstract:

An earthquake is an unpredictable natural disaster. It can induce liquefaction, which causes considerable damage to structures, lifelines, and piping systems because of ground settlement. As a result, there is great concern about how to mitigate such damage. Previous researchers have adopted different ground improvement techniques to reduce the settlement of structures during earthquakes. This study evaluates the effectiveness of lowering the water table as a technique to mitigate foundation settlement in liquefiable soil. Performance is evaluated in terms of foundation settlement and the reduction of excess pore water pressure. In this study, a scaled model was prepared based on a full-scale shake table experiment conducted at the University of California, San Diego (UCSD). The model ground consists of three soil layers having relative densities of 55%, 45%, and 90%, respectively. A shallow foundation rests on an unsaturated crust layer. After preparation of the model ground, the water table was positioned at 45, 40, and 35 cm (from the bottom). Then, the input motions were applied for 10 seconds, with a peak acceleration of 0.25 g and a constant frequency of 2.73 Hz. Based on the experimental results, the effectiveness of lowering the water table in reducing foundation settlement and excess pore water pressure was evident: foundation settlement was reduced from 50 mm to 5 mm. In addition, lowering the water table is a cost-effective way to decrease liquefaction-induced building settlement.

Keywords: foundation settlement, ground water table, liquefaction, shake table test

Procedia PDF Downloads 110
3563 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome a lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demands of the population, taking into account their habits and the risks that these foods may pose. Wheat and its by-products, such as semolina, are strong candidates as food vehicles since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for the analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements, yielding large amounts of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics), such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), to extract analytical information from the corresponding spectra. PCA is well suited to NIR, since it can handle many spectra at a time and can be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the Canonical Variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples.
The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
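The PCA-then-LDA pipeline described above can be sketched end to end. The example below uses synthetic spectra standing in for the NIR data, implements PCA via SVD, and uses a two-class Fisher discriminant (a basic form of LDA); it is a numpy-only illustration of the chemometric idea, not the study's actual software or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 200                                        # number of wavelengths
base = np.sin(np.linspace(0, 6, n_wl))            # shared spectral shape
fiber_band = np.exp(-np.linspace(-3, 3, n_wl) ** 2)  # assumed fiber absorption band

# 30 synthetic "pure" and 30 "fiber-fortified" spectra with small noise
pure = base + 0.05 * rng.standard_normal((30, n_wl))
fortified = base + 0.3 * fiber_band + 0.05 * rng.standard_normal((30, n_wl))
X = np.vstack([pure, fortified])
y = np.array([0] * 30 + [1] * 30)                 # 0 = pure, 1 = fortified

# PCA via SVD on mean-centred spectra: keep 5 latent variables
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

# Two-class Fisher discriminant (basic LDA) on the PCA scores
m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
Sw = np.cov(scores[y == 0], rowvar=False) + np.cov(scores[y == 1], rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)                  # discriminant direction
threshold = w @ (m0 + m1) / 2
pred = (scores @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

With the synthetic fiber band well above the noise level, the discriminant separates the two classes cleanly; real NIR data, as the abstract's 75-95% classification rates show, is less forgiving.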

Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 207