Search results for: fixed effects model
270 Reducing Defects through Organizational Learning within a Housing Association Environment
Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton
Abstract:
Housing Associations (HAs) contribute circa 20% of the UK’s housing supply. HAs are, however, under increasing pressure as a result of funding cuts and rent reductions. Due to the increased pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by undertaking an initial round table discussion with key HA stakeholders as part of an ongoing collaborative research project with the National House Building Council (NHBC) to better understand how house builders and HAs learn from defects to reduce their prevalence. The initial discussion shows that defect information runs through a number of groups, both internal and external to a HA, during both the defects management process and the organizational learning (OL) process. Furthermore, HAs are reliant on capturing and recording defect data as the foundation for the OL process. During the OL process, defect data analysis is the primary enabler to recognizing a need for a change to organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to a HA’s Employer’s Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change’s success. The findings from the HA discussion demonstrate that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential ‘learning from defects model’ for the housebuilding industry as well as describing future work.
Keywords: Defects, new homes, housing associations, organizational learning.
269 The Cloud Systems Used in Education: Properties and Overview
Authors: Agah Tuğrul Korucu, Handan Atun
Abstract:
The diversity and usefulness of the information used in education have increased with the development of technology. Web technologies in particular have made enormous contributions to distance learning systems. Mobile systems, among the most widely used technologies in distance education, have made web technologies much easier to access. Unbound by space and time, individuals have the opportunity to access information on the web. In addition, the storage of educational information and resources, and access to these information and resources, is crucial for both students and teachers. Because of this importance, web technologies that supply easy access to information and resources have been developed and disseminated. Dynamic web technologies, introduced as new technologies that enable sharing and reuse of information, resources or applications via the Internet and turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems, defined by NIST as a model that provides access to demanded information independent of time and space under appropriate circumstances, are one such dynamic web technology. One of the most important advantages of cloud systems is meeting the requirements of users directly on the web regardless of hardware, software, and installation concerns. Hence, this study examines the use of cloud services in education and investigates the services provided by cloud computing. The survey method has been used as the research method. The findings of this research reveal that cloud systems are used in education for activities such as resource sharing, collaborative work, assignment submission and feedback, and project development, and that cloud systems have plenty of significant advantages in terms of facilitating teaching activities and the interaction between teacher, student and environment.
Keywords: Cloud systems, cloud systems in education, distance learning, e-learning, integration of information technologies, online learning environment.
268 Knowledge Transfer among Cross-Functional Teams as a Continual Improvement Process
Authors: Sergio Mauricio Pérez López, Luis Rodrigo Valencia Pérez, Juan Manuel Peña Aguilar, Adelina Morita Alexander
Abstract:
The culture of continuous improvement in organizations is very important as it represents a source of competitive advantage. This article discusses the transfer of knowledge between companies which formed cross-functional teams and used a dynamic model for knowledge creation as a framework. In addition, the article discusses the structure of cognitive assets in companies and the concept of "stickiness" (which is defined as an obstacle to the transfer of knowledge). The purpose of this analysis is to show that an improvement in the attitude of individual members of an organization creates opportunities, and that an exchange of information and knowledge leads to generating continuous improvements in the company as a whole. This article also discusses the importance of creating the proper conditions for sharing tacit knowledge. By narrowing gaps between people, mutual trust can be created and thus contribute to an increase in sharing. The concept of adapting knowledge to new environments will be highlighted, as it is essential for companies to translate and modify information so that such information can fit the context of receiving organizations. Adaptation will ensure that the transfer process is carried out smoothly by preventing "stickiness". When developing the transfer process on cross-functional teams (as opposed to working groups), the team acquires the flexibility and responsiveness necessary to meet objectives. These types of cross-functional teams also generate synergy due to the array of different work backgrounds of their individuals. When synergy is established, a culture of continuous improvement is created.
Keywords: Knowledge transfer, continuous improvement, teamwork, cognitive assets.
267 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.
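A minimal sketch of the screening-and-benchmarking step described in this abstract, assuming hypothetical column names and synthetic data (the actual disclosure data set and model form are not reproduced here): Tukey-fence outlier removal on energy use intensity, followed by a simple regression on building age and size.

```python
# Illustrative only: column names and values are assumptions, not the study's data.
import pandas as pd
from sklearn.linear_model import LinearRegression

def screen_outliers(df, col="eui_kwh_per_m2", k=1.5):
    """Drop rows whose energy use intensity lies outside the Tukey fences."""
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    return df[(df[col] >= q1 - k * iqr) & (df[col] <= q3 + k * iqr)]

def fit_benchmark(df):
    """Regress energy use intensity on age and floor area to estimate consumption."""
    X = df[["building_age_years", "floor_area_m2"]]
    y = df["eui_kwh_per_m2"]
    return LinearRegression().fit(X, y)

if __name__ == "__main__":
    # Synthetic stand-in for the mandatory disclosure data set.
    raw = pd.DataFrame({
        "building_age_years": [10, 25, 40, 55, 15, 30],
        "floor_area_m2": [5000, 12000, 8000, 20000, 6500, 9000],
        "eui_kwh_per_m2": [180, 220, 260, 300, 900, 240],  # 900 is an outlier
    })
    clean = screen_outliers(raw)
    benchmark = fit_benchmark(clean)
    print(benchmark.coef_, benchmark.intercept_)
```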
266 Control of Vibrations in Flexible Smart Structures using Fast Output Sampling Feedback Technique
Authors: T.C. Manjunath, B. Bandyopadhyay
Abstract:
This paper features the modeling and design of a Fast Output Sampling (FOS) Feedback control technique for the Active Vibration Control (AVC) of a smart flexible aluminium cantilever beam for a Single Input Single Output (SISO) case. Controllers are designed for the beam by bonding patches of piezoelectric layer as sensor / actuator to the master structure at different locations along the length of the beam by retaining the first 2 dominant vibratory modes. The entire structure is modeled in state space form using the concept of piezoelectric theory, Euler-Bernoulli beam theory, the Finite Element Method (FEM) and state space techniques by dividing the structure into 3, 4, 5 finite elements, thus giving rise to three types of systems, viz., system 1 (beam divided into 3 finite elements), system 2 (4 finite elements), system 3 (5 finite elements). The effect of placing the sensor / actuator at various locations along the length of the beam for all the 3 types of systems considered is observed and conclusions are drawn for the best performance and for the smallest magnitude of the control input required to control the vibrations of the beam. Simulations are performed in MATLAB. The open loop responses, closed loop responses and the tip displacements with and without the controller are obtained and the performance of the proposed smart system is evaluated for vibration control.
Keywords: Smart structure, Finite element method, State space model, Euler-Bernoulli theory, SISO model, Fast output sampling, Vibration control, LMI.
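A hedged sketch of the state-space representation mentioned above, using a two-mode modal stand-in for the cantilever beam rather than the paper's FEM-derived matrices; only the open-loop response is simulated here (the FOS gain design itself is omitted), and all numerical values are assumptions.

```python
# Illustrative two-mode state-space model: x' = Ax + Bu, y = Cx.
import numpy as np
from scipy.signal import StateSpace, lsim

w = np.array([20.0, 125.0])     # assumed modal frequencies, rad/s
zeta = np.array([0.01, 0.01])   # assumed modal damping ratios

A = np.zeros((4, 4))
B = np.zeros((4, 1))
C = np.zeros((1, 4))
for i in range(2):
    A[2*i:2*i+2, 2*i:2*i+2] = [[0.0, 1.0],
                               [-w[i]**2, -2*zeta[i]*w[i]]]
    B[2*i+1, 0] = 1.0   # piezo actuator influence (assumed unit gain)
    C[0, 2*i] = 1.0     # sensor output as sum of modal displacements

sys = StateSpace(A, B, C, np.zeros((1, 1)))
t = np.linspace(0, 2, 2000)
u = np.where(t < 0.01, 1.0, 0.0)      # short impulse-like input
tout, y, x = lsim(sys, U=u, T=t)
print("peak open-loop response:", np.max(np.abs(y)))
```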
265 The Micro Ecosystem Restoration Mechanism Applied for Feasible Research of Lakes Eutrophication Enhancement
Authors: Ching-Tsan Tsai, Sih-Rong Chen, Chi-Hung Hsieh
Abstract:
The technique of inducing micro ecosystem restoration is one of the aquatic ecology engineering methods used to restore polluted water. A batch scale study, a pilot plant study, and a field study were carried out to observe the eutrophication using the Inducing Ecology Restorative Symbiosis Agent (IERSA), consisting mainly of products degraded by lactobacillus, saccharomycete, and phycomycete. The results obtained from the experiments of the batch scale and pilot plant studies allowed us to develop the parameters for the field study. A pond, 5 m from the outlet of a lake, with an area of 500 m2, a depth of 0.6-1.2 m, and containing about 500 tons of water, was selected as a model. After the treatment with 10 mg IERSA/L water twice a week for 70 days, the micro restoration mechanisms consisted of three stages (i.e., restoration, impact maintenance, and ecology recovery experiment after impact). The COD, TN, TKN, and chlorophyll a were reduced significantly in the first week. Although unexpected heavy rain and contamination from the sewage system might have slowed the ecological restoration, the self-cleaning function continued and chlorophyll a was reduced by 50% in one month. In the 4th week, amoeba, paramecium, rotifer, and red wriggler worms reappeared, and the number of fish fry reached up to 1000 fish fry/m3. Those results proved that the induced restorative mechanism can be applied to improve eutrophication and to control the growth of algae in lakes by achieving self-cleaning through the induction of and competition among microbes. The growth of fish also reached an excellent result due to the improvement of water quality.
Keywords: Ecosystem restoration, eutrophication, lake.
264 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community
Authors: Mohamed Ghorab
Abstract:
Locally generated and distributed systems for thermal and electrical energy are envisioned in the near future to reduce the transmission losses of centralized systems. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and incorporated in the energy distribution between the hubs. The energy generated from each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there are interaction and energy exchange between the hubs. Network energy system and CO2 optimization among six different hubs representing a Canadian community are investigated in this study. Three different scenarios of technology systems are studied to meet both thermal and electrical demand loads for the six hubs. The conventional system is used as the first technology system and the reference case study. The conventional system includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second technology system includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario has an integrated system of CHP and Organic Rankine Cycle (ORC), where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, followed by scenario 2 (CHP system) with a value of 9.3%.
Keywords: Distributed energy resources, network energy system, optimization, microgeneration system.
263 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach
Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan
Abstract:
Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers and policy makers are making frantic efforts to bridge rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Also, different metafrontier models have been used in estimating productivity performances and their drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performances of rice farmers and their determining factors in GSZ, FSTZ and CSZ. The study used both primary and secondary data. Farmers in CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amount, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield since they had the lowest TGR. It is recommended that the government, through the Ministry of Food and Agriculture, development partners and individual private companies, promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued.
Keywords: Efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach.
262 Food Security in the Middle East and North Africa
Authors: Sara D. Garduño-Diaz, Philippe Y. Garduño-Diaz
Abstract:
To date, one of the few comprehensive indicators for the measurement of food security is the Global Food Security Index (GFSI). This index is a dynamic quantitative and qualitative benchmarking model, constructed from 28 unique indicators, that measures drivers of food security across both developing and developed countries. Whereas the GFSI has been calculated across a set of 109 countries, in this paper we aim to present and compare, for the Middle East and North Africa (MENA), 1) the Food Security Index scores achieved and 2) the data available on affordability, availability, and quality of food. The data for this work were taken from the latest available report published by the creators of the GFSI, which in turn used information from national and international statistical sources. MENA countries rank from place 17/109 (Israel, although with recent political turmoil this is likely to have changed) to place 91/109 (Yemen), with household expenditure spent on food ranging from 15.5% (Israel) to 60% (Egypt). Lower spending on food as a share of household consumption in most countries and better food safety net programs in the MENA have contributed to a notable increase in food affordability. The region has also, however, experienced a decline in food availability, owing to more limited food supplies and higher volatility of agricultural production. In terms of food quality and safety, the MENA has the top-ranking country (Israel). The most frequent challenges faced by the countries of the MENA include public expenditure on agricultural research and development as well as volatility of agricultural production. Food security is a complex phenomenon that interacts with many other indicators of a country’s wellbeing; in the MENA it is slowly but markedly improving.
Keywords: Diet, food insecurity, global food security index, nutrition, sustainability.
261 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller height and longer chord length, which might lead to an increase in secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally by using the commercial software ANSYS CFX. The effects of the end-wall on the flow field are calculated based on RANS simulation using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulation. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and to destroy the passage vortex in terms of loss reduction. It is observed that, for the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. A fence in the blade passage helps in reducing the underturning by 70 in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure side leg of the horseshoe vortex and helps in breaking the passage vortex. Computations are carried out for different fence heights whose curvature is different from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: Boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow.
260 Advanced Stochastic Models for Partially Developed Speckle
Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije
Abstract:
Speckled images arise when coherent microwave, optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise is complicated by the nature of the noise and is not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model where an underlying Poisson point process modulates a Gram-Charlier series of Laguerre weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound.
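A small Monte Carlo sketch of the limiting behaviour noted in the abstract: summing randomly phased contributions from a Poisson number of scatterers per resolution cell and checking that, as the mean number of scatterers grows, the intensity statistics approach those of an exponential distribution (for which the standard-deviation-to-mean ratio is exactly 1). All parameters are illustrative, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_scatterers, n_cells):
    """Sum unit-amplitude random phasors from a Poisson number of scatterers per cell."""
    counts = rng.poisson(mean_scatterers, n_cells)
    intensity = np.empty(n_cells)
    for i, n in enumerate(counts):
        phases = rng.uniform(0.0, 2.0 * np.pi, n)
        intensity[i] = np.abs(np.sum(np.exp(1j * phases))) ** 2
    return intensity

for n_bar in (2, 5, 50):
    I = speckle_intensity(n_bar, n_cells=20_000)
    # Ratio tends to 1 as speckle becomes fully developed (exponential intensity).
    print(n_bar, np.std(I) / np.mean(I))
```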
259 Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V
Authors: Mukul Shukla, Rasheedat M. Mahamood, Esther T. Akinlabi, Sisa. Pityana
Abstract:
Laser Metal Deposition (LMD) is an additive manufacturing process with capabilities that include: producing a new part directly from a 3-Dimensional Computer Aided Design (3D CAD) model, building a new part on an existing old component, and repairing existing high-value component parts that would have been discarded in the past. With all these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is yet to be fully understood, probably because of the high interaction between the processing parameters; studying many parameters at the same time makes it further complex to understand. In this study, the effect of laser power and powder flow rate on the physical properties (deposition height and deposition width), metallurgical property (microstructure) and mechanical property (microhardness) of laser-deposited Ti6Al4V, the most widely used aerospace alloy, is studied. Also, because Ti6Al4V is very expensive, and LMD is capable of reducing the buy-to-fly ratio of aerospace parts, the material utilization efficiency is also studied. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW, powder flow rates of 2.88 g/min and 5.67 g/min, and keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s, respectively. The deposition height and width are found to increase with increase in laser power and increase in powder flow rate. Material utilization is favoured by higher power, while higher powder flow rate reduces material utilization. The results are presented and fully discussed.
Keywords: Laser Metal Deposition, Material Efficiency, Microstructure, Ti6Al4V.
258 Autonomous Robots' Visual Perception in Underground Terrains Using Statistical Region Merging
Authors: Omowunmi E. Isafiade, Isaac O. Osunmakinde, Antoine B. Bagula
Abstract:
Robots' visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems or devices that produce a high level of information accuracy for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robots' perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames, with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the XBOX Kinect 3D sensors. The Kinect sensors produce streams of red, green and blue (RGB) and depth images of 640 x 480 resolution at 30 frames per second. Integrating the depth information with drivability gives a strong cue to the analysis, which detects 3D results augmenting drivable and non-drivable regions in 2D. The results of the 2D and 3D experiments with different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
Keywords: Drivable Region Detection, Kinect Sensor, Robots' Perception, SRM, Underground Terrains.
257 Green Lean TQM Human Resource Management Practices in Malaysian Automotive Companies
Authors: Noor Azlina Mohd Salleh, Salmiah Kasolang, Ahmed Jaffar
Abstract:
The Green Lean Total Quality Management (LTQM) Human Resource Management (HRM) System is a system that comprises HRM in Environmental Management System (EMS) practices integrated with TQM and Lean Manufacturing (LM) principles. HRM is essential especially in dealing with low motivation and less productive employees. The ultimate goal of this system is to focus on achieving total human resource development that is motivated and capable of optimizing their creativity to be a part of a Green and Lean TQM organization. A survey questionnaire was developed and distributed to 30 highly active automotive vendors in Malaysia and analyzed with Minitab v16 and SPSS v17. It was found that companies practicing Green LTQM HRM have generated more revenue and have R&D capability. However, years since company establishment do not affect the openness of the company to adopt new initiatives that can help to improve the effectiveness of operations. The importance of training, communication and rewards for employees was also identified. The Green LTQM HRM practices framework model established in this study will hopefully give preliminary insight, especially to companies that are still looking for a system that can improve their productivity through managing human resources. This is a preliminary study that combined four award practices, ISO/TS16949, Toyota Production System SAEJ4000, MAJAICO Lean Production System and EMS, focusing on highly active companies that have been involved in the MAJAICO Program and the Proton Vendor Development Program. A future study can be conducted to determine the status in other industries, as well as case studies pertaining to this system.
Keywords: Automotive Industry, Lean Manufacturing, Operational Engineering Management, Total Quality Management, Environmental Management System.
256 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises
Authors: Jiří F. Urbánek, David Král
Abstract:
Small and Middle Enterprises (SME) have a specific mission, characteristics, and behavior in global business competitive environments. They must respect policy, rules, requirements and standards in all their inherent and outer processes of supply-customer chains and networks. The paper's aim and purpose are to introduce computational assistance, which enables the use of the prevailing MS Office environment (SmartArt, etc.) for mathematical models, using the DYVELOP (Dynamic Vector Logistics of Processes) method. For the SME's global environment, it provides the capability and benefit to achieve its commitment regarding the effectiveness of the quality management system in meeting customer requirements, and also the continual improvement of the organization's and SME's overall process performance and efficiency, as well as its societal security via continual planning improvement. The DYVELOP model's maps, the Blazons, are able to express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cycling cases and their phases. The Blazons need a live PowerPoint presentation for better comprehension of this paper's mission of added-value analysis. The crisis management of SMEs is obliged to use the cycles for successful coping with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and its sustainable development advanced possibilities.
Keywords: Blazons, computational assistance, DYVELOP method, small and middle enterprises.
255 Automated Fact-Checking By Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study presents a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive and authoritative data; 2) developing a search function to automatically select relevant, new and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that: 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring the knowledge graph in Wikidata to dynamically augment the representations of claims and references without introducing too much noise; II) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: Fact checking, claim verification, Deep Learning, Natural Language Processing.
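A hedged sketch of the attention step named in the abstract (letting a claim representation attend over reference/context tokens), not the authors' SAC implementation; dimensions, pooling choice and the binary verdict head are all assumptions.

```python
import torch
import torch.nn as nn

class ClaimReferenceAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)   # true / false verdict (assumed)

    def forward(self, claim_tokens, reference_tokens):
        # Claim tokens act as queries; reference tokens as keys and values,
        # so the claim representation is augmented with relevant evidence.
        fused, _ = self.attn(claim_tokens, reference_tokens, reference_tokens)
        pooled = fused.mean(dim=1)            # simple mean pooling
        return self.classifier(pooled)

if __name__ == "__main__":
    model = ClaimReferenceAttention()
    claim = torch.randn(8, 20, 256)           # batch of 8 claims, 20 tokens each
    reference = torch.randn(8, 120, 256)      # 120 reference tokens each
    print(model(claim, reference).shape)      # torch.Size([8, 2])
```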
254 Influence of the Seat Arrangement in Public Reading Spaces on Individual Subjective Perceptions
Authors: Jo-Han Chang, Chung-Jung Wu
Abstract:
This study involves a design proposal. The objective is to create a seat arrangement model for public reading spaces that enables free arrangement without disturbing the users. Through a subjective perception scale, this study explored whether distance between seats and direction of seats influence individual subjective perceptions in a public reading space. This study also involves analysis of user subjective perceptions when reading in settings with 3 seat directions and 5 distances between seats. The results may be applied to public chair design. This study investigated (a) whether different directions of seats and distances between seats influence individual subjective perceptions and (b) the acceptable personal space between 2 strangers in a public reading space. The results are as follows: (a) the directions of seats and distances between seats influenced individual subjective perceptions; (b) subjective evaluation scores were higher for back-to-back seat directions with Distances A (10 cm) and B (62 cm) compared with face-to-face and side-by-side seat directions; however, when the seat distance exceeded 114 cm (Distance C), no difference existed among the directions of seats; (c) regarding reading in public spaces, when the distance between seats is only 10 cm, we recommend arranging the seats in a back-to-back fashion to increase user comfort, and face-to-face and side-by-side seat directions should be avoided. When the seat arrangement is limited to a face-to-face design, the distance between seats should be increased to at least 62 cm. Moreover, the distance between seats should be increased to at least 114 cm for side-by-side seats to elevate user comfort.
Keywords: Individual Subjective Perceptions, Personal Space, Seat Arrangement.
253 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based On Li-ion Battery and Solar Energy Supply
Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan
Abstract:
Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is saving energy as much as possible. The energy consumption is minimized by switching the node from active mode to sleep mode and by using routing protocols with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible. An erroneous energy model may render a ZigBee sensor network useless before its batteries are changed.
In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses CC2530 for controlling the sensor, carrying out routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution to extend node lifetime is implemented. Finally, a long-term sensor network test is used to exhibit the functionality of the solar powered system.
Keywords: ZigBee, Li-ion battery, solar panel, CC2530.
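A back-of-envelope sketch of the constant-current/constant-voltage charge profile mentioned in the abstract for a nominal 1200 mAh cell; the cell parameters (open-circuit-voltage curve, internal resistance, cut-off current) are illustrative assumptions, not measurements from the node.

```python
CAPACITY_AH = 1.2
CC_CURRENT_A = 0.6          # assumed 0.5 C constant-current phase
CV_VOLTAGE = 4.2            # constant-voltage set point
TERMINATION_A = 0.06        # stop when current falls to C/20
R_INTERNAL = 0.15           # ohm, assumed internal resistance

def ocv(soc):
    """Very rough open-circuit voltage model as a function of state of charge."""
    return 3.0 + 1.3 * soc

def simulate(dt=1.0):
    soc, t = 0.0, 0.0
    while True:
        v_oc = ocv(soc)
        i = CC_CURRENT_A
        if v_oc + i * R_INTERNAL >= CV_VOLTAGE:          # switch to CV phase
            i = max((CV_VOLTAGE - v_oc) / R_INTERNAL, 0.0)
        if i <= TERMINATION_A and soc > 0.5:             # end-of-charge cut-off
            break
        soc = min(soc + i * dt / 3600.0 / CAPACITY_AH, 1.0)
        t += dt
    return t / 3600.0, soc

hours, final_soc = simulate()
print(f"charge time ~ {hours:.2f} h, final SoC ~ {final_soc:.2f}")
```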
252 The Study of Implications on Modern Businesses Performances by Digital Communities: Case of Data Leak
Authors: Asim Majeed, Anwar Ul Haq, Mike Lloyd-Williams, Arshad Jamal, Usman Butt
Abstract:
This study aims to investigate the impact of the data leak of M&S customers on digital communities. Modern businesses are using digital communities as an important public relations tool for marketing purposes. This form of communication helps companies to build better relationships with their customers, and it also acts as another source of information. The communication between the customers and the organizations is not regulated, so users may post positive and negative comments. There are new platforms being developed on a daily basis, and it is very crucial for businesses not only to get themselves familiar with those but also to know how to reach their existing and prospective consumers. The driving force of marketing and communication in modern businesses is the digital communities, and these are continuously increasing and developing. This phenomenon is changing the way marketing is conducted. The current research discusses the implications for M&S business performance after the data was exploited on digital communities; users contacted M&S and raised security concerns. M&S closed down its website for a few hours to try to resolve the issue. The next day M&S made a public apology about this incident. This information proliferated on various digital communities, and it has negatively impacted the M&S brand name, sales and customers. The content analysis approach is used to collect qualitative data from 100 digital bloggers, including social media communities such as Facebook and Twitter. The results and findings provide useful new insights into the nature and form of security concerns of digital users. The findings have theoretical and practical implications. This research showcases a large corporation utilizing various digital community platforms and can serve as a model for future organizations.
Keywords: Digital, communities, performance, dissemination, implications, data, exploitation.
251 Analysis of Vortex-Induced Vibration Characteristics for a Three-Dimensional Flexible Tube
Authors: Zhipeng Feng, Huanhuan Qi, Pingchuan Shen, Fenggang Zang, Yixiong Zhang
Abstract:
Numerical simulations of the vortex-induced vibration of a three-dimensional flexible tube under uniform turbulent flow are performed at a Reynolds number of 1.35×104. In order to achieve the vortex-induced vibration, the three-dimensional unsteady, viscous, incompressible Navier-Stokes equations and the LES turbulence model are solved with the finite volume approach, the tube is discretized according to finite element theory, and its dynamic equilibrium equations are solved by the Newmark method. The fluid-tube interaction is realized by utilizing the diffusion-based smooth dynamic mesh method. Considering the vortex-induced vibration system, the trends of the lift coefficient, drag coefficient, displacement, vortex shedding frequency, and phase difference angle of the tube are analyzed under different frequency ratios. The nonlinear phenomena of lock-in and phase-switch are captured successfully. Meanwhile, the limit cycle and bifurcation of the lift coefficient and displacement are analyzed using trajectories, phase portraits, and Poincaré sections. The results reveal that when the drag coefficient reaches its minimum value, the transverse amplitude reaches its maximum, and the “lock-in” begins simultaneously. In the range of lock-in, the amplitude decreases gradually with increasing frequency ratio. When the lift coefficient reaches its minimum value, the phase difference undergoes a sudden change from the “out-of-phase” to the “in-phase” mode.
Keywords: Vortex induced vibration, limit cycle, CFD, FEM.
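A minimal Newmark-beta sketch (average acceleration, beta = 1/4, gamma = 1/2) of the time-integration step named above, applied to a single-degree-of-freedom stand-in for the tube's cross-flow motion driven by a sinusoidal lift force; the structural parameters and forcing are illustrative assumptions, not the paper's FEM model.

```python
import numpy as np

m, c, k = 1.0, 0.05, 400.0      # mass, damping, stiffness (assumed)
f_shed = 3.0                    # assumed vortex shedding frequency, Hz
F0 = 2.0                        # lift-force amplitude (assumed)

beta, gamma = 0.25, 0.5
dt, n = 1e-3, 20000
t = np.arange(n) * dt
F = F0 * np.sin(2 * np.pi * f_shed * t)

u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
a[0] = (F[0] - c * v[0] - k * u[0]) / m
k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)

for i in range(n - 1):
    # Effective load assembled from the current state (standard Newmark terms).
    p_eff = (F[i+1]
             + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1/(2*beta) - 1) * a[i])
             + c * (gamma / (beta * dt) * u[i] + (gamma/beta - 1) * v[i]
                    + dt * (gamma/(2*beta) - 1) * a[i]))
    u[i+1] = p_eff / k_eff
    a[i+1] = ((u[i+1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
              - (1/(2*beta) - 1) * a[i])
    v[i+1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i+1])

print("steady-state amplitude ~", np.max(np.abs(u[n//2:])))
```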
250 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) in representing and managing uncertainty and conflict for fast change detection and for effectively dealing with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed to combine, so computational efficiency could be improved. A cumulative sum test is then applied on the ratio of pignistic probability to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
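A hedged sketch of two ingredients named above: the Kullback-Leibler divergence between an estimated current distribution and the pre-/post-change distributions, and a CUSUM-style test on the resulting score. It simplifies the paper's scheme (no mass functions or pignistic probabilities), and the synthetic stream, window size and threshold are assumptions for illustration only.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def cusum(scores, threshold=2.0):
    """One-sided CUSUM over per-window scores; returns first alarm index or -1."""
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + x)
        if s > threshold:
            return i
    return -1

# Synthetic stream: Gaussian mean shifts from 0 to 1 halfway through.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 500)])
bins = np.linspace(-4, 5, 40)
pre = np.histogram(rng.normal(0, 1, 5000), bins=bins)[0]
post = np.histogram(rng.normal(1, 1, 5000), bins=bins)[0]

window, scores = 50, []
for start in range(0, len(data) - window, window):
    hist = np.histogram(data[start:start + window], bins=bins)[0]
    # Positive when the window looks closer to the post-change distribution.
    scores.append(kl_divergence(hist, pre) - kl_divergence(hist, post))

print("alarm at window index:", cusum(scores))
```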
249 Conciliation Bodies as an Effective Tool for the Enforcement of Air Passenger Rights: Examination of an Exemplary Model in Germany
Authors: C. Hipp
Abstract:
The EU Regulation (EC) No 261/2004, under which air passengers can claim compensation in the event of denied boarding, cancellation or long delay of flights, has to be regarded as substantial progress for consumer protection in the field of air transport since it went into force in February 2005. Nevertheless, different reviews of its effective functioning demonstrate that most passengers affected by service disruptions do not enforce their complaints and claims towards the airline. The main cause of this is not only the unclear legal situation, due to the fact that the regulation itself suffers from many undetermined terms and loopholes; it is also attributable to the strategy of the airlines, which do not handle the complaints of the passengers or exclude their duty to compensate them. Economically contemplated, reasons like the long duration of a trial and the cost risk in relation to the amount of compensation make it comprehensible that passengers are deterred from enforcing their rights by filing a lawsuit. The paper focusses on alternative dispute resolution, namely the recently established conciliation bodies which deal with air passenger rights. In this paper, the Conciliation Body for Public Transport in Germany (Schlichtungsstelle für den öffentlichen Personenverkehr – SÖP) is examined as a successful example of an independent consumer arbitration service. It was founded in 2009 and has dealt with complaints in the field of air passenger rights since November 2013. In the current situation, one has to admit that, due to its structure and operation, it meets the needs of the airlines by giving them an efficient tool for their customer relations management on the one hand, and on the other hand it contributes effectively to the enforcement of air passenger rights.
Keywords: Air passenger rights, alternative dispute resolution (ADR), consumer protection, EU law regulation (EC) No 261/2004.
248 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel
Abstract:
In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative density function of the probability density function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation, allowing an accurate quantification of the probability of false detection. Non visual quantification and misclassification in medical ultrasound speckled images is relatively new and is of interest to clinicians.
Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.
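A sketch of the arithmetic-to-geometric mean ratio field described above, computed here over a square window rather than the paper's circular neighborhood for brevity, with a quantile threshold standing in for the likelihood-based threshold; the synthetic scene and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def am_gm_ratio(image, size=7, eps=1e-9):
    """Arithmetic mean / geometric mean over a local window (>= 1 everywhere)."""
    img = np.asarray(image, dtype=float) + eps
    arithmetic = uniform_filter(img, size=size)
    geometric = np.exp(uniform_filter(np.log(img), size=size))
    return arithmetic / geometric

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic speckled image: two regions with different means, Gamma-distributed noise.
    scene = np.ones((128, 128)); scene[:, 64:] = 4.0
    speckled = scene * rng.gamma(shape=4.0, scale=0.25, size=scene.shape)
    ratio = am_gm_ratio(speckled)
    edges = ratio > np.quantile(ratio, 0.95)   # illustrative threshold, not the LRFM one
    print("flagged edge pixels:", int(edges.sum()))
```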
247 Eco-Roof Systems in Subtropical Climates for Sustainable Development and Mitigation of Climate Change
Authors: M. O’Driscoll, M. Anwar, M. G. Rasul
Abstract:
The benefits of eco-roofs are quite well known; however, there remains very little research conducted on the implementation of eco-roofs in subtropical climates such as Australia’s. There are many challenges facing Australia as it moves into the future, and climate change is proving to be one of the leading challenges. In order to move forward with the mitigation of climate change, the impacts of rapid urbanization need to be offset. Eco-roofs are one way to achieve this; this study presents the energy savings and environmental benefits of the implementation of eco-roofs in subtropical climates. An experimental set-up was installed at the Rockhampton campus of Central Queensland University, where two shipping containers were converted into small offices, one with an eco-roof and one without. These were used for temperature, humidity and energy consumption data collection. In addition, a computational model was developed using Design Builder software (state-of-the-art building energy simulation software) for simulating the energy consumption of the shipping containers and environmental parameters; this was done to allow comparison between simulated and real-world data. This study found that eco-roofs are very effective in subtropical climates and provide energy savings of about 13%, which agrees well with the simulated results.
Keywords: Climate Change, Eco/Green roof, Energy savings, Subtropical climate.
246 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions
Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang
Abstract:
Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated, to the authors’ best knowledge. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. The slip flow model, implemented through Ansys Fluent User Defined Functions (UDF), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5Umf and 2Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and its hydrodynamics. Furthermore, they also show that the existing Gorosko correlation that predicts bed expansion is not applicable under reduced pressure conditions.
Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
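A back-of-envelope sketch of how the operating pressures listed in the abstract shift the Knudsen number, and hence the flow regime that motivates the slip flow model; the gas properties and characteristic length are illustrative assumptions, not the study's values.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
T = 293.0                 # gas temperature, K (assumed)
D_MOLECULE = 3.7e-10      # effective molecular diameter of air, m (assumed)
L_CHAR = 100e-6           # characteristic length ~ particle diameter, m (assumed)

def knudsen(pressure_pa):
    """Kn = mean free path / characteristic length."""
    mean_free_path = K_B * T / (math.sqrt(2) * math.pi * D_MOLECULE**2 * pressure_pa)
    return mean_free_path / L_CHAR

def regime(kn):
    if kn < 1e-3:
        return "continuum"
    if kn < 0.1:
        return "slip flow"
    if kn < 10:
        return "transition"
    return "free molecular"

for p_bar in (1.01, 0.5, 0.25, 0.1, 0.03):
    kn = knudsen(p_bar * 1e5)
    print(f"{p_bar:5.2f} bar  Kn = {kn:.2e}  ({regime(kn)})")
```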
245 Fuzzy Control of Thermally Isolated Greenhouse Building by Utilizing Underground Heat Exchanger and Outside Weather Conditions
Authors: Raghad Alhusari, Farag Omar, Moustafa Fadel
Abstract:
A traditional greenhouse is a metal-frame agricultural building used for cultivating plants in a controlled environment isolated from external climatic changes. Using greenhouses in agriculture is an efficient way to reduce water consumption, as agriculture is considered the biggest water consumer worldwide. Controlling the greenhouse environment yields better plant productivity but demands an increase in electric power. Although various control approaches have been used towards greenhouse automation, most of them are applied to traditional greenhouses with ventilation fans and/or evaporation cooling systems. Such approaches still demand high energy and water consumption. The aim of this research is to develop a fuzzy control system that minimizes water and energy consumption by utilizing outside weather conditions and an underground heat exchanger to maintain the optimum climate of the greenhouse. The proposed control system is implemented on an experimental model of a thermally isolated greenhouse structure with dimensions of 6x5x2.8 meters. It uses fans for extracting heat from the ground heat exchanger system, motors for automatic opening/closing of the greenhouse windows, and LEDs as the lighting system. The controller is also integrated with environmental condition sensors. It was found that using an air-to-air horizontal ground heat exchanger with 90 mm diameter and 2 mm thickness, placed 2.5 m below the ground surface, results in decreasing the greenhouse temperature by 3.28 ˚C, which saves around 3 kW of consumed energy. It also eliminated the water consumption needed in the evaporation cooling systems that are traditionally used for cooling the greenhouse environment.
Keywords: Automation, earth-to-air heat exchangers, fuzzy control, greenhouse, sustainable buildings.
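A minimal hand-rolled fuzzy-control sketch in the spirit of the system described above, where the inside-temperature error drives the heat-exchanger fan duty; the membership functions, rules and setpoint are illustrative assumptions, not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_duty(inside_temp, setpoint=26.0):
    error = inside_temp - setpoint            # positive when too warm
    near_setpoint = tri(error, -5, 0, 5)
    warm = tri(error, 0, 5, 15)
    hot = tri(error, 5, 15, 30)
    # Mamdani-style rules with weighted-average defuzzification:
    #   near setpoint -> fan 10%, warm -> fan 50%, hot -> fan 100%.
    weights = [near_setpoint, warm, hot]
    outputs = [0.10, 0.50, 1.00]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

for t in (25.0, 29.0, 34.0, 40.0):
    print(f"inside {t:.0f} C -> fan duty {fan_duty(t):.2f}")
```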
244 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers’ needs in this strong market. For those who are looking to change their service providers, understanding their needs is especially important. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry. Machine learning can be used to accomplish this. Churn prediction has become a very important topic in terms of machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers’ churn based on their past service usage history. With this objective, the study makes use of feature selection, normalization, and feature engineering. Then, the study compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Evaluation of the performance was conducted by using the F1 score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed in this study, achieving a 99% F1-score and 99% AUC, while all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
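A sketch of the pipeline described above (normalization, feature selection, Gradient Boosting, F1 and ROC-AUC evaluation) using scikit-learn. The Orange telecom dataset is not bundled here, so a synthetic stand-in is generated; metric values will not match the reported 99% figures.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced stand-in for the churn data (85% stay / 15% churn).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                 # normalization
    ("select", SelectKBest(f_classif, k=15)),    # feature selection
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipeline.fit(X_train, y_train)

pred = pipeline.predict(X_test)
proba = pipeline.predict_proba(X_test)[:, 1]
print("F1:", round(f1_score(y_test, pred), 3),
      "AUC:", round(roc_auc_score(y_test, proba), 3))
```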
243 Harrison’s Stolen: Addressing Aboriginal and Indigenous Islanders Human Rights
Authors: M. Shukry
Abstract:
According to the United Nations Declaration of Human Rights in 1948, every human being is entitled to rights in life that should be respected by others and protected by the state and community. Such rights are inherent regardless of colour, ethnicity, gender, religion or otherwise, and it is expected that all humans alike have the right to live without discrimination of any sort. However, that has not been the case with Aborigines in Australia. Over a long period of time, the governments of the States and the Territories and the Australian Commonwealth denied the Aboriginal and Indigenous inhabitants of the Torres Strait Islands such rights. Past Australian governments set policies and laws that enabled them to forcefully remove Indigenous children from their parents, which resulted in creating lost generations living the trauma of the loss of cultural identity, alienation and even their own selfhood. Intending to reduce the population of natives and their Aboriginal culture while, on the other hand, assimilating them into mainstream society, they gave themselves the right to remove them from their families with no hope of return. That practice has led to tragic consequences due to the trauma that has affected those children, an experience that is depicted by Jane Harrison in her play Stolen. The drama is the outcome of a six-year project on lost children and was first performed in 1997 in Melbourne. Only five actors appear on the stage, playing all the different characters, whether the main protagonists or the remaining cast, present or non-present as voices. The play outlines the lives of five children who have been taken from their parents at an early age, entailing a disastrous negative impact that differs from one to the other. Unknown to each other, what connects them is being put in a children’s home. The purpose of this paper is to analyse the play’s text in light of the 1948 Declaration of Human Rights, using it as a lens that reflects the atrocities practiced against the Aborigines. It highlights how such practices formed an outrageous violation of those natives’ rights as human beings. Harrison’s dramatic technique in conveying the children’s experiences is a non-linear structure, fluctuating between past and present, linked together within each of the five characters, reflecting their suffering and pain to create an emotional link between them and the audience. Her dramatic handling of the issue by fusing tragedy with humour as well as symbolism is a successful technique in revealing the traumatic memory of those children and their present life. The play has made a difference in commencing to address the problem of the right of all children to be with their families, which renders the real meaning of having a home and an identity as people.
Keywords: Aboriginal, audience, Australia, children, culture, drama, home, human rights, identity, indigenous, Jane Harrison, memory, scenic effects, setting, stage, stage directions, Stolen, trauma.
242 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver and Pancreatic Grafts
Authors: C. S. Mammas, A. Lazaris, A. S. Mamma-Graham, G. Kostopanagiotou, C. Lemonidou, J. Mantas, E. Patsouris
Abstract:
Introduction: The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects when human factors engineering is applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the remote evaluation of the grafts. It has been estimated that even in well-organized transplant systems an average of 8% to 14% of the grafts (G) that arrive at the recipient hospitals may be considered diseased, injured, damaged or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts in Organ Transplant (OT), and may lead to a change in their management. Such a method will reduce the possibility that a diseased G will arrive at the recipient hospital for implantation. Aim: Ergonomics of Digital Microscopy (DM) based on virtual slides, on Telemedicine Systems (TS), for Tele-Pathological Evaluation (TPE) of the grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of Renal Graft (RG), Liver Graft (LG) and Pancreatic Graft (PG) tissues is analyzed. In fact, this corresponded to the ergonomics of digital microscopy for TPE in OT by applying a Virtual Slide (VS) system for graft tissue image capture, for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included: a. development of an OTE-TS similar Experimental Telemedicine System (Exp.-TS); b. simulation of the integration of TS with the VS-based microscopic TPE of RG, LG and PG applying DM. Simulation of the DM-based TPE was performed by 2 specialists on a total of 238 human Renal Graft (RG), 172 Liver Graft (LG) and 108 Pancreatic Graft (PG) digital microscopic tissue images for inflammatory and neoplastic lesions on the electronic spaces of the four TS used. Results: Statistical analysis of the specialists’ answers about the ability to diagnose accurately the diseased RG, LG and PG tissues on the electronic space among the four TS (A, B, C, D) showed that DM on TS for TPE in OT is elaborated perfectly on the electronic space of a desktop, followed by the electronic space of the applied Exp.-TS. Tablet and mobile-phone electronic spaces seem significantly risky for the application of DM in OT (p<.001). Conclusion: To make the largest reduction in errors and adverse events referring to the quality of the grafts, it will take application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital and for post-grafting and pre-transplant planning.
Keywords: Organ Transplantation, Tele-Pathology, Digital Microscopy, Virtual Slides.
241 Measurements of MRI R2* Relaxation Rate in Liver and Muscle: Animal Model
Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao
Abstract:
This study aimed to measure the effective transverse relaxation rates (R2*) in the liver and muscle of normal New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used in various hepatic diseases involving iron overload, by quantifying the iron content in the liver. The R2* relaxation rate is defined as the reciprocal of the T2* relaxation time and mainly depends on the constituents of the tissue. Different tissues have different R2* relaxation rates. The signal intensity decay in magnetic resonance imaging (MRI) may be characterized by R2* relaxation rates. In this study, a 1.5T GE Signa HDxt whole-body MR scanner equipped with an 8-channel high-resolution knee coil was used to observe R2* values in the NZW rabbits’ liver and muscle. Eight healthy NZW rabbits weighing 2 ~ 2.5 kg were recruited. After anesthesia using a Zoletil 50 and Rompun 2% mixture, the abdomen of the rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using the fast spoiled gradient echo (FSPGR) pulse sequence. Afterwards, multi-planar fast gradient echo (MFGR) scans were performed with 8 different echo times (TEs) to acquire images for R2* measurements. Regions of interest (ROIs) in the liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s-1 and 37.4 ± 9.5 s-1, respectively. This implies that the iron concentration of the liver is higher than that of muscle. In conclusion, the more iron a tissue contains, the higher its R2*. The correlations between R2* and iron content in NZW rabbits might be valuable for further exploration.
Keywords: Liver, MRI, multi-planar fast gradient echo, muscle, R2* relaxation rate.
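A minimal sketch of the R2* estimation step described above, assuming a monoexponential decay SI = SI0 * exp(-R2* * TE) and fitting ln(SI) against TE by linear least squares (the negative slope is R2*, and T2* = 1/R2*); the echo times and signal values below are synthetic, not the rabbit data.

```python
import numpy as np

te_ms = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])   # echo times, ms
true_r2s = 45.0                                                   # s^-1, illustrative
si = 1000.0 * np.exp(-true_r2s * te_ms / 1000.0)                  # SI = SI0 * exp(-R2* TE)
si_noisy = si + np.random.default_rng(0).normal(0, 5, si.size)    # add measurement noise

slope, intercept = np.polyfit(te_ms / 1000.0, np.log(si_noisy), 1)
r2_star = -slope
print(f"estimated R2* = {r2_star:.1f} s^-1, T2* = {1000.0 / r2_star:.1f} ms")
```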