Search results for: Material efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4239

189 Increasing Power Transfer Capacity of Distribution Networks Using Direct Current Feeders

Authors: Akim Borbuev, Francisco de León

Abstract:

Economic and population growth in densely-populated urban areas introduce major challenges to distribution system operators, planners, and designers. To supply added loads, utilities are frequently forced to invest in new distribution feeders. However, this is becoming increasingly challenging due to space limitations and rising installation costs in urban settings. This paper proposes the conversion of critical alternating current (ac) distribution feeders into direct current (dc) feeders to increase the power transfer capacity by a factor as high as four. Current trends suggest that the return of dc transmission, distribution, and utilization is inevitable. Since a total system-level transformation to dc operation is not possible in a short period of time, due to the huge investments needed and utility unreadiness, this paper recommends that feeders expected to exceed their limits in the near future be converted to dc. The increase in power transfer capacity is achieved through several key differences between ac and dc power transmission systems. First, it is shown that underground cables can be operated at a higher dc voltage than the ac voltage for the same dielectric stress in the insulation. Second, cable sheath losses, caused by induced voltages that yield circulating currents and can be as high as the phase conductor losses under ac operation, are not present under dc. Finally, skin and proximity effects in conductors and sheaths do not exist in dc cables. The paper demonstrates that, in addition to the increased power transfer capacity, utilities substituting ac feeders with dc feeders could benefit from significantly lower costs and reduced losses. Installing dc feeders is less expensive than installing new ac feeders even when new trenches are not needed. Case studies using the IEEE 342-Node Low Voltage Networked Test System quantify the technical and economic benefits of dc feeders.
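
As a rough illustration of how the ac-to-dc arguments above compound, the sketch below compares three-phase ac and bipolar dc transfer on the same cable circuit; all voltages, ampacities and uplift factors are illustrative assumptions, not values from the paper.

```python
# Toy comparison of power transfer when an underground cable circuit is
# re-purposed from three-phase ac to bipolar dc. All numbers are assumptions.
import math

V_ac_ll = 13.8e3          # ac line-to-line voltage (V, rms) - assumed
I_ac    = 400.0           # ac ampacity per conductor (A) - assumed
pf      = 0.9             # assumed power factor

# Equal dielectric stress permits a higher pole-to-ground dc voltage; the exact
# multiple depends on the insulation study, here sqrt(2) is used as a placeholder.
V_dc_pole = math.sqrt(2) * V_ac_ll / math.sqrt(3)

# Without sheath circulating currents and skin/proximity effects, the dc current
# rating of the same conductor is higher; a modest 20% gain is assumed here.
I_dc = 1.2 * I_ac

P_ac = math.sqrt(3) * V_ac_ll * I_ac * pf   # three-phase ac transfer
P_dc = 2 * V_dc_pole * I_dc                 # bipolar dc, two poles

print(f"P_ac = {P_ac/1e6:.1f} MW, P_dc = {P_dc/1e6:.1f} MW, ratio = {P_dc/P_ac:.2f}")
```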

Keywords: Dc power systems, distribution feeders, distribution networks, energy efficiency, power transfer capacity.

188 Data Projects for “Social Good”: Challenges and Opportunities

Authors: Mikel Niño, Roberto V. Zicari, Todor Ivanov, Kim Hee, Naveed Mushtaq, Marten Rosselli, Concha Sánchez-Ocaña, Karsten Tolle, José Miguel Blanco, Arantza Illarramendi, Jörg Besier, Harry Underwood

Abstract:

One of the application fields for data analysis techniques and technologies gaining momentum is the area of social good or “common good”, covering cases related to humanitarian crises, global health care, or ecology and environmental issues, among others. The promotion of data-driven projects in this field aims at increasing the efficacy and efficiency of social initiatives, improving the way these actions help humanity in general and people in need in particular. This application field, however, poses its own barriers and challenges when developing data-driven projects, which lag behind those in other scenarios. These challenges derive from aspects such as the scope and scale of the social issue to solve, cultural and political barriers, the skills of the main stakeholders and the technological resources available, the motivation to be engaged in such projects, or the ethical and legal issues related to sensitive data. This paper analyzes the application of data projects in the field of social good, reviewing its current state and noteworthy initiatives, and presenting a framework covering the key aspects to analyze in such projects. The goal is to provide guidelines to understand the main challenges and opportunities for this type of data project, as well as to identify the main differential issues compared to “classical” data projects in general. A case study is presented on the initial steps and stakeholder analysis of a data project for the inclusion of refugees in the city of Frankfurt, Germany, in order to empirically confront the framework with a real example.

Keywords: Data-driven projects, humanitarian operations, personal and sensitive data, social good, stakeholder analysis.

187 Exploring Management of the Fuzzy Front End of Innovation in a Product Driven Startup Company

Authors: Dmitry K. Shaytan, Georgy D. Laptev

Abstract:

In our research, we aimed to test a managerial approach for the fuzzy front end (FFE) of innovation by creating a controlled experiment/business case in breakthrough innovation development. The experiment was in the sport industry and covered all aspects of the customer discovery stage, from ideation to prototyping followed by patent application. In the paper, we describe and analyze the milestones, tasks, management challenges, and decisions made to create the breakthrough innovation, and we evaluate the overall managerial efficiency at the considered FFE stage. We set the managerial outcome of the FFE stage as a valid product concept in hand. We introduce the hypothetical construct “Q-factor”, which helps us in the experiment to distinguish the quality of FFE outcomes. The experiment simulated the FFE of innovation for the entrepreneur and placed on his shoulders the responsibility for delivering a valid product concept. While developing the managerial approach to reach this outcome, a decision was made to look at the product concept from the cognitive psychology and cognitive science point of view. This view helped us to develop the profile of a person whose projection (mental representation) of a new product could optimize FFE activities for a manager or entrepreneur. In the experiment, this profile was tested to develop a breakthrough innovation for swimmers. Following the managerial approach, a product concept was created to help swimmers feel/sense the water. A working prototype was developed to estimate the validity of the product concept and its value-added effect for customers. Based on feedback from coaches and swimmers, there was a strong positive effect that provided high value for customers and, for the experiment, a valid product concept developed by the proposed managerial approach for the FFE. In the conclusions, a managerial approach derived from the experiment is suggested.

Keywords: Concept development, concept testing, customer discovery, entrepreneurship, entrepreneurial management, idea generation, idea screening, startup management.

186 Homogenization of Cocoa Beans Fermentation to Upgrade Quality Using an Original Improved Fermenter

Authors: Aka S. Koffi, N’Goran Yao, Philippe Bastide, Denis Bruneau, Diby Kadjo

Abstract:

Cocoa beans (Theobroma cacao L.) are the main component for chocolate manufacturing. The beans must first be correctly fermented. The traditional process for performing the first fermentation (lactic fermentation) often consists of confining cacao beans using banana leaves or a fermentation basket, both of which lead to poor thermal insulation of the product and an inability to mix it. The box fermenter reduces this loss by using wood of large thickness (e > 3 cm), but mixing to homogenize the product is still hard to perform. Automatic fermenters are not cost-effective for most producers. Heat (T > 45 °C) and acidity produced during fermentation by the microbial activity of yeasts and bacteria enable the emergence of the potential flavor and taste of the future chocolate. In this study, a cylindro-rotative fermenter (FCR-V1) was built, and coconut fibers were used in its structure to confine heat. An axis of rotation (360°) was integrated to facilitate the turning and homogenization of beans in the fermenter. This axis allows the fermenter to be placed vertically during the anaerobic alcoholic phase of fermentation and horizontally during the acetic phase, to take advantage of the mid-height filling. For air circulation during turning in the acetic phase, two woven rattan grids were made, one for the top and one for the bottom of the fermenter. In order to reduce air flow during the acetic phase, two airtight covers are placed over the grids. The efficiency of turning by this kind of rotation, coupled with the homogenization of temperature provided by the horizontal position of the fermenter in the acetic phase, contributes to a good proportion of well-fermented beans (83.23%). In addition, the beans' pH values ranged between 4.5 and 5.5. These values are ideal for enzymatic activity in the production of the aromatic compounds inside the beans. The regularity of mass loss throughout fermentation makes it possible to predict the drying surface corresponding to the amount being fermented.

Keywords: Cocoa fermentation, fermenter, microbial activity, temperature, turning.

185 Cluster Based Energy Efficient and Fault Tolerant n-Coverage in Wireless Sensor Network

Authors: D. Satish Kumar, N. Nagarajan

Abstract:

Coverage conservation and extending the network lifetime are the primary issues in wireless sensor networks. Due to the large variety of applications, coverage is subject to a wide range of interpretations. Some applications require that each point in the area be observed by only one sensor, while other applications may require that each point be covered by at least n sensors (n > 1) to achieve fault tolerance. Sensor scheduling activities in existing Transparent and non-Transparent relay mode (T-NT) Mobile Multi-Hop relay networks fail to guarantee area coverage with minimal energy consumption and fault tolerance. To overcome these issues, a Cluster-based Energy Competent n-coverage scheme (CEC n-coverage scheme) is proposed to ensure full coverage of a monitored area while saving energy. The CEC n-coverage scheme uses a novel sensor scheduling scheme based on the n-density and the remaining energy of each sensor to determine the state of all the deployed sensors, either active or sleep, as well as the state durations. Hence, it is attractive to trigger a minimum number of sensors that are able to ensure the coverage area and to turn off some redundant sensors to save energy and thereby extend the network lifetime. In addition, a minimum number of active sensors is determined based on the required degree of coverage and its level. A variety of numerical parameters are computed using the ns-2 simulator for the existing (T-NT) Mobile Multi-Hop relay networks and for the CEC n-coverage scheme. Simulation results showed that the CEC n-coverage scheme in a wireless sensor network provides better performance in terms of energy efficiency, fault tolerance (a 6.61% reduction in terms of seconds), and the percentage of active sensors needed to guarantee area coverage, compared to the existing algorithm.
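
A minimal sketch of the kind of energy-aware n-coverage scheduling described above is given below; it is a generic greedy heuristic under assumed inputs, not the authors' CEC algorithm.

```python
# Generic sketch (not the authors' scheme): keep every monitored point covered
# by at least n active sensors, preferring sensors with the most remaining
# energy so low-energy nodes can sleep. All names and values are illustrative.
from math import hypot

def schedule(sensors, points, n, sensing_range):
    """sensors: list of dicts {'id', 'x', 'y', 'energy'}; points: list of (x, y)."""
    active = []
    coverage = {p: 0 for p in points}
    # Consider the most energetic sensors first.
    for s in sorted(sensors, key=lambda s: s['energy'], reverse=True):
        covered = [p for p in points
                   if hypot(p[0] - s['x'], p[1] - s['y']) <= sensing_range]
        # Activate only if this sensor helps some point still below n-coverage.
        if any(coverage[p] < n for p in covered):
            active.append(s['id'])
            for p in covered:
                coverage[p] += 1
        if all(c >= n for c in coverage.values()):
            break
    return active  # the remaining sensors sleep this round

# Example: one monitored point, 2-coverage required
sensors = [{'id': i, 'x': i, 'y': 0, 'energy': 10 - i} for i in range(5)]
print(schedule(sensors, [(0, 0)], n=2, sensing_range=3))   # e.g. [0, 1]
```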

Keywords: Wireless sensor network, mobile multi-hop relay networks, n-coverage, cluster-based energy competent, transparent and non-transparent relay modes, fault tolerant, sensor scheduling.

184 Tribological Aspects of Advanced Roll Material in Cold Rolling of Stainless Steel

Authors: Mohammed Tahir, Jonas Lagergren

Abstract:

Vancron 40, a nitrided powder metallurgical tool steel, is used in cold work applications where the predominant failure mechanisms are adhesive wear or galling. Typical applications of Vancron 40 include, among others, fine blanking, cold extrusion, deep drawing and cold work rolls for cluster mills. The positive results of Vancron 40 for cold work rolls for cluster mills and as a tool for some severe metal forming processes make it competitive compared to other types of work rolls that require higher precision, among others in cold rolling of thin stainless steel, which requires high surface finish quality. In this project, three roll materials for cold rolling of stainless steel strip were examined: Vancron 40, Narva 12B (a high-carbon, high-chromium tool steel alloyed with tungsten) and Supra 3 (a chromium-molybdenum-tungsten-vanadium alloyed high-speed steel). The purpose of this project was to study the depth profiles of the ironed stainless steel strips, the emergence of galling, and the performance of the lubricants used by the steel industry. Laboratory experiments were conducted to examine scratching of the strip, galling, and surface roughness of the roll materials under severe tribological conditions. The critical sliding length for the onset of galling was estimated for stainless steel with four different lubricants. The laboratory experiments resulted in a performance evaluation of the rolls' resistance to adhesive wear under severe conditions at low and high reductions. Vancron 40 in combination with a cold rolling lubricant gave good surface quality, prevented galling of the metal surfaces, and provided good bearing capacity.

Keywords: Adhesive wear, Cold rolling, Lubricant, Stainless steel, Surface finish, Vancron 40.

183 The Guideline of Overall Competitive Advantage Promotion with Key Success Paths

Authors: M. F. Wu, F. T. Cheng, C. S. Wu, M. C. Tan

Abstract:

It is a critical time to upgrade technology and increase value added by developing manufacturing skills and management strategies that will highly satisfy customer needs in the global precision machinery market. In recent years, on the supply side, precision machinery manufacturers in each country have been facing price-reduction pressure from the demand side, which pushes high-end precision machinery manufacturers to adopt a low-cost and high-quality strategy to recapture the market. Because of this global market trend, manufacturers must adopt price-reduction strategies and upgrade the technology of low-end machinery for differentiation in order to consolidate the market. By using six key success factors (KSFs), namely customer perceived value, customer satisfaction, customer service, product design, product effectiveness and machine structure quality, as causal conditions, this research explores their impact on the competitive advantage of the enterprise, such as overall profitability and product pricing power. The research uses the key success paths (KSPs) approach and fsQCA software to explore various combinations of causal relationships, so as to fully understand the performance level of KSFs and business objectives needed to achieve competitive advantage. In this study, the combinations of causal relationships are called key success paths (KSPs). The key success paths guide the enterprise to achieve specific business outcomes. The findings of this study indicate that there are thirteen KSPs to achieve overall profitability, sixteen KSPs to achieve product pricing power, and seventeen KSPs to achieve both overall profitability and pricing power of the enterprise. The KSPs provide directions for resource integration and allocation, and improve the utilization efficiency of limited resources to realize the continuing vision of the enterprise.

Keywords: Precision Machinery Industry, Key Success Factors (KSFs), Key Success Paths (KSPs), Overall Profitability, Product Pricing Power, Competitive Advantages.

182 The Study of Tourists’ Behavior in Water Usage in Hotel Business: Case Study of Phuket Province, Thailand

Authors: A. Pensiri, K. Nantaporn, P. Parichut

Abstract:

Tourism is very important to the economy of many countries due to its large contribution in the areas of employment and income generation. However, the rapid growth of tourism also makes it one of the major water users, and it can therefore have a significant and detrimental impact on the environment. Guest behavior in water usage can be used to manage water in hotels for sustainable water resources management. This research presents a study of hotel guest water usage behavior at two hotels, namely Hotel A (located in Kathu district) and Hotel B (located in Muang district) in Phuket Province, Thailand, as case studies. Primary and secondary data were collected from the hotel managers through interviews and questionnaires. The water flow rate was measured in situ from each water supply device in the standard room type at each hotel, including hand-washing faucets, bathroom faucets, the shower and the toilet flush. For the interviews, the majority of respondents (n = 204 for Hotel A and n = 244 for Hotel B) were aged between 21 and 30 years (53% for Hotel A and 65% for Hotel B), and the majority were foreign (78% in Hotel A and 92% in Hotel B), from America, France and Austria, visiting for purposes of tourism (63% in Hotel A and 55% in Hotel B). The data showed that water consumption ranged from 188 litres to 507 litres, and from 383 litres to 415 litres, per overnight guest in Hotel A and Hotel B, respectively. These figures exceed the water efficiency benchmark set for tropical regions by the International Tourism Partnership (ITP). It is recommended that guest water saving initiatives be implemented at the hotels. Moreover, the results showed that guests have high satisfaction with the hotels: front office service received the top average scores of 4.35 in Hotel A and 4.20 in Hotel B, while luxury decoration and room cleanliness received the second-highest satisfaction scores from guests in Hotel A and Hotel B, respectively. On the basis of this information, the findings can be very useful for improving customer service satisfaction and for paying attention to this particular aspect for better hotel management.

Keywords: Hotel, tourism, Phuket, water usage.

181 Challenges of Irrigation Water Supply in Croplands of Arid Regions and their Environmental Consequences – A Case Study in the Dez and Moghan Command Areas of Iran

Authors: Lobat Taghavi, Najaf Hedayat

Abstract:

Renewable water resources are crucial production variables in arid and semi-arid regions where intensive agriculture is practiced to meet the ever-increasing demand for food and fiber. This is crucial for the Dez and Moghan command areas, where water delivery problems and adverse environmental issues are widespread. This paper aims to identify major problem areas using on-farm surveys of 200 farmers, agricultural extensionists and water suppliers, complemented by secondary data and field observations during the 2010-2011 cultivating season. The SPSS package was used to analyze and synthesize the data. Results indicated inappropriate canal operations in both schemes, though there was no unanimity about the underlying causes. Inequitable and inflexible distribution was found to be rooted in deficient hydraulic structures, particularly in the main and secondary canals. The inadequacy and inflexibility of the water scheduling regime were the underlying causes of recurring pest and disease spread, which often led to declines in crop yield and quality. Although these were not disputed, the water suppliers were not prepared to link them to deficiencies in the operation of the main and secondary canals. They rather attributed them to the prevailing salinity, alkalinity, water table fluctuations and leaching of valuable agro-chemical inputs from the plants' root zone, with far-reaching consequences. Examples of these include the pollution of ground and surface water resources due to over-irrigation at the farm level, which falls under the growers' own responsibility. Poor irrigation efficiency and adverse environmental problems were attributed to deficient and outdated farming practices that were in turn rooted in poor extension programs and irrational water charges.

Keywords: water delivery, inequity, inflexibility, conflicts, environmental impact, Dez and Moghan

180 Optimizing Organizational Performance: The Critical Role of Headcount Budgeting in Strategic Alignment and Financial Stability

Authors: Shobhit Mittal

Abstract:

Headcount budgeting stands as a pivotal element in organizational financial management, extending beyond traditional budgeting to encompass strategic resource allocation for workforce-related expenses. This process is integral to maintaining financial stability and fostering a productive workforce, requiring a comprehensive analysis of factors such as market trends, business growth projections, and evolving workforce skill requirements. It demands a collaborative approach, primarily involving Human Resources (HR) and finance departments, to align workforce planning with an organization's financial capabilities and strategic objectives. The dynamic nature of headcount budgeting necessitates continuous monitoring and adjustment in response to economic fluctuations, business strategy shifts, technological advancements, and market dynamics. Its significance in talent management is also highlighted, aligning financial planning with talent acquisition and retention strategies to ensure a competitive edge in the market. The consequences of incorrect headcount budgeting are explored, showing how it can lead to financial strain, operational inefficiencies, and hindered strategic objectives. Examining case studies like IBM's strategic workforce rebalancing and Microsoft's shift for long-term success, the importance of aligning headcount budgeting with organizational goals is underscored. These examples illustrate that effective headcount budgeting transcends its role as a financial tool, emerging as a strategic element crucial for an organization's success. This necessitates continuous refinement and adaptation to align with evolving business goals and market conditions, highlighting its role as a key driver in organizational success and sustainability.

Keywords: Strategic planning, fiscal budget, headcount planning, resource allocation, financial management, decision-making, operational efficiency, risk management, headcount budget.

179 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: A. El Harraj, N. Raissouni

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most commonly used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
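
For readers who want to experiment with a comparable pipeline, the hedged OpenCV sketch below chains CLAHE, a GMM-based background subtractor and morphological clean-up; it uses OpenCV's stock MOG2 model rather than the authors' exact per-channel K = 5 formulation, and the file name and parameter values are assumptions.

```python
# Illustrative pipeline in the spirit of the paper: CLAHE for illumination
# robustness, a GMM-based background subtractor, then morphological clean-up.
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
# OpenCV's MOG2 models each pixel as a mixture of Gaussians (here up to 5).
bg_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                            detectShadows=False)
bg_sub.setNMixtures(5)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("scene.avi")            # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    equalized = clahe.apply(gray)              # mitigate illumination changes
    mask = bg_sub.apply(equalized)             # foreground mask from the GMM
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erode noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
```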

Keywords: Video surveillance, background subtraction, Contrast Limited Adaptive Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.

178 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overheads, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: it is a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials. This allows NTRU to achieve high speeds with the use of minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on factorization or the discrete logarithm problem, meaning that even with substantial computational resources and time, an adversary should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a present-day demand for applications that need security enforcement techniques and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) or enterprise grids are proven approaches for developing high-end computing systems. By utilizing them, one can improve the performance of NTRU through parallel execution. In this paper, we propose and develop an application for NTRU using the enterprise grid middleware called Alchemi. An analysis and comparison of its performance for various text files is presented.

Keywords: Alchemi, GridNtru, Ntru, PKCS.

177 Similitude for Thermal Scale-up of a Multiphase Thermolysis Reactor in the Cu-Cl Cycle of Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

The thermochemical copper-chlorine (Cu-Cl) cycle is considered a sustainable and efficient technology for hydrogen production when linked with clean-energy systems such as nuclear reactors or solar thermal plants. In the Cu-Cl cycle, water is decomposed thermally into hydrogen and oxygen through a series of intermediate reactions. This paper investigates the thermal scale-up analysis of the three-phase oxygen production reactor in the Cu-Cl cycle, where the reaction is endothermic and the temperature is about 530 °C. The paper focuses on examining the size and number of oxygen reactors required to provide enough heat input for different rates of hydrogen production. The type of multiphase reactor used in this paper is the continuous stirred tank reactor (CSTR), heated by a half-pipe jacket. The thermal resistance of each section in the jacketed reactor system is studied to examine its effect on the heat balance of the reactor. It is found that the dominant contribution to the system thermal resistance is from the reactor wall. In the analysis, the Cu-Cl cycle is assumed to be driven by a nuclear reactor, and two types of nuclear reactors are examined as the heat source for the oxygen reactor: the CANDU Super Critical Water Reactor (CANDU-SCWR) and the High Temperature Gas Reactor (HTGR). It is concluded that a heat transfer rate 3-4 times higher has to be provided for the CANDU-SCWR than for the HTGR. The effect of the reactor aspect ratio is also examined, and it is found that increasing the aspect ratio decreases the number of reactors, while the rate of decrease in the number of reactors diminishes as the aspect ratio increases. Finally, a comparison between the results of the heat balance and existing results of the mass balance is performed, and it is found that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
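
The series-resistance reasoning for the jacketed wall can be illustrated with the textbook-style sketch below; the film coefficients, wall thickness and area are placeholder assumptions, not the paper's values.

```python
# Textbook-style series thermal resistances for a jacketed vessel wall
# (jacket-side film, wall conduction, process-side film). Values are placeholders.
h_jacket = 2000.0    # W/m2K, jacket-side film coefficient (assumed)
h_inner  = 800.0     # W/m2K, process-side film coefficient (assumed)
k_wall   = 16.0      # W/mK, stainless-steel wall conductivity
t_wall   = 0.02      # m, wall thickness (assumed)
A        = 5.0       # m2, heat transfer area (assumed)

R_jacket = 1 / (h_jacket * A)
R_wall   = t_wall / (k_wall * A)
R_inner  = 1 / (h_inner * A)
R_total  = R_jacket + R_wall + R_inner

for name, R in [("jacket film", R_jacket), ("wall", R_wall), ("inner film", R_inner)]:
    print(f"{name}: {R / R_total:.0%} of total resistance")
```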

Keywords: Clean energy, Cu-Cl cycle, heat transfer, sustainable energy.

176 Modeling Decentralized Source-Separation Systems for Urban Waste Management

Authors: Bernard J.H. Ng, Apostolos Giannis, Victor Chang, Rainer Stegmann, Jing-Yuan Wang

Abstract:

Decentralized eco-sanitation systems are a promising and sustainable alternative to the century-old centralized conventional sanitation system. The decentralized concept relies on an environmentally and economically sound management of water, nutrient and energy fluxes. Source-separation systems for urban waste management collect different solid waste and wastewater streams separately to facilitate the recovery of valuable resources from wastewater (energy, nutrients). A resource recovery centre serving 20,000 people acts as the functional unit for the treatment of urban waste of a high-density population community, like Singapore. The decentralized system includes urine treatment, faeces and food waste co-digestion, and treatment of horticultural waste and the organic fraction of municipal solid waste in composting plants. A design model is developed to estimate the inputs and outputs in terms of materials and energy. The inputs of urine (yellow water, YW) and faeces (brown water, BW) are calculated by considering the daily mean production of urine and faeces by humans and the water consumption of the no-mix vacuum toilet (0.2 and 1 L flushing water for urine and faeces, respectively). The food waste (FW) production is estimated to be 150 g wet weight/person/day. The YW is collected and discharged by gravity into a tank. It was found that two days are required for urine hydrolysis and struvite precipitation. The maximum nitrogen (N) and phosphorus (P) recoveries are 150-266 kg/day and 20-70 kg/day, respectively. In contrast, BW and FW are mixed for co-digestion in a thermophilic acidification tank, and later a decentralized/centralized methanogenic reactor is used for biogas production. It is determined that 6.16-15.67 m3/h of methane is produced, which is equivalent to 0.07-0.19 kWh/ca/day. The digestion residues are treated with horticultural waste and the organic fraction of municipal waste in co-composting plants.
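
The reported per-capita energy figures can be sanity-checked with the short calculation below, which assumes a methane lower heating value of roughly 10 kWh/m3.

```python
# Back-of-envelope check of the reported per-capita energy figures, assuming a
# methane lower heating value of ~10 kWh/m3 (assumption) and the 20,000-person
# resource recovery centre from the abstract.
LHV_CH4 = 10.0        # kWh per m3 of methane (approximate)
people  = 20_000

for q in (6.16, 15.67):                      # reported CH4 production, m3/h
    per_capita = q * 24 * LHV_CH4 / people   # kWh per person per day
    print(f"{q:5.2f} m3/h  ->  {per_capita:.2f} kWh/ca/day")
# Prints roughly 0.07 and 0.19 kWh/ca/day, matching the abstract.
```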

Keywords: Decentralization, ecological sanitation, material flow analysis, source-separation

175 A Preliminary X-Ray Study on Human-Hair Microstructures for a Health-State Indicator

Authors: Phannee Saengkaew, Weerasak Ussawawongaraya, Sasiphan Khaweerat, Supagorn Rugmai, Sirisart Ouajai, Jiraporn Luengviriya, Sakuntam Sanorpim, Manop Tirarattanasompot, Somboon Rhianphumikarakit

Abstract:

We present a preliminary x-ray study of human-hair microstructures as a health-state indicator, in particular for cancer cases. As an uncomplicated and low-cost x-ray technique, the human-hair microstructure was analyzed by wide-angle x-ray diffraction (XRD) and small-angle x-ray scattering (SAXS). The XRD measurements exhibited simple reflections at d-spacings of 28 Å, 9.4 Å and 4.4 Å, corresponding to the periodic distance of the protein matrix of the human-hair macrofibrils and to the diameter and repeat spacing of the polypeptide alpha helices of the protofibrils of the human-hair microfibrils, respectively. Compared to the normal cases, the unhealthy cases, including the breast- and ovarian-cancer cases, showed higher normalized ratios of the x-ray diffraction peaks at 9.4 Å and 4.4 Å. This likely resulted from varied distributions of microstructures caused by a molecular alteration. In the elemental analysis by x-ray fluorescence (XRF), the normalized quantitative ratios of zinc (Zn)/calcium (Ca) and iron (Fe)/calcium (Ca) were determined. Analogously, both the Zn/Ca and Fe/Ca ratios of the unhealthy cases were higher than those of the normal cases. Combining the structural analysis by XRD measurements with the elemental analysis by XRF measurements showed that the modified fibrous microstructures of the hair samples were related to their altered elemental compositions. Therefore, these microstructural and elemental analyses of hair samples will be beneficial in association with the diagnosis of cancer and genetic diseases. Such a method could lower the risk of such diseases through earlier diagnosis. However, a high-intensity x-ray source, a high-resolution x-ray detector, and more hair samples are needed to develop this x-ray technique further, and its efficiency would be enhanced by including skin and fingernail samples alongside the human-hair analysis.
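
To relate the quoted d-spacings to measurable diffraction angles, the Bragg's-law check below assumes a Cu K-alpha source; the actual instrument settings are not stated in the abstract.

```python
# Bragg's law (n * lambda = 2 d sin(theta)) applied to the reported d-spacings,
# assuming a Cu K-alpha wavelength of 1.5406 Å (assumption).
import math

wavelength = 1.5406                      # Å, Cu K-alpha (assumed)
for d in (28.0, 9.4, 4.4):               # d-spacings reported above, in Å
    theta = math.degrees(math.asin(wavelength / (2 * d)))   # first-order reflection
    print(f"d = {d:5.1f} Å  ->  2-theta = {2 * theta:5.1f} deg")
```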

Keywords: Human-hair analysis, XRD, SAXS, breast cancer, health-state indicator

174 High Quality Colored Wind Chimes by Anodization on Aluminum Alloy

Authors: Chia-Chih Wei, Yun-Qi Li, Ssu-Ying Chen, Hsuan-Jung Chen, Hsi-Wen Yang, Chih-Yuan Chen, Chien-Chon Chen

Abstract:

In this paper, we used a high-quality anodization technique to make a colored wind chime with a nano-tube-structured anodic film, controlling the length-to-diameter ratio of the aluminum rod and controlling the oxide film structure on the surface of the rod by an anodizing method. The experiments used hard anodization to grow an anodic film of controllable thickness on an aluminum alloy surface. The hard anodization film has high hardness, high insulation, high-temperature resistance, good corrosion resistance, color, and mass-production properties, which can be further applied to transportation, electronic products, biomedical fields, or energy industry applications. This study also provides in-depth research and a detailed discussion of the related process of hard anodizing of aluminum alloy surfaces, including pre-anodization, anodization, and post-anodization. The experimental parameters of anodization include using a mixed acid solution of sulfuric acid and oxalic acid as the anodization electrolyte and controlling the temperature, time, current density, and final voltage to obtain the anodic film. In the experimental results, the properties of the anodic film, including thickness, hardness, insulation, and corrosion characteristics, as well as the microstructure of the anodic film, were measured, and the hard anodization efficiency was calculated. Thereby, different sound transmission speeds can be obtained in the aluminum rod, and different tones can be produced by it. Another feature of the present work is the use of the anodizing method together with dyeing, laser engraving patterning and electrophoresis to make good-quality colored aluminum wind chimes.

Keywords: Anodization, aluminum, wind chime, nano-tube.

173 Single Ion Transport with a Single-Layer Graphene Nanopore

Authors: Vishal V. R. Nandigana, Mohammad Heiranian, Narayana R. Aluru

Abstract:

Graphene has found tremendous applications in water desalination, DNA sequencing and energy storage. Multiple nanopores are etched to create openings for water desalination and energy storage applications. The nanopores created are of the order of 3-5 nm, allowing multiple ions to transport through the pore. In this paper, we present for the first time a molecular dynamics study of single ion transport, where only one ion passes through the graphene nanopore. The diameter of the graphene nanopore is of the same order as the hydration layers formed around each ion. A phenomenon analogous to single electron transport, here resulting from ionic transport, is observed for the first time. The current-voltage characteristics of such a device are similar to single electron transport in quantum dots. The current is blocked until a critical voltage, as the ions are trapped inside a hydration shell. The trapped ions face a high energy barrier compared to the applied input electrical voltage, preventing the ion from breaking free of the hydration shell. This region is called the “Coulomb blockade region”. In this region, we observe zero transport of ions inside the nanopore. However, when the electrical voltage is beyond the critical voltage, the ion has sufficient energy to break free from the energy barrier created by the hydration shell and enter the pore. Thus, the input voltage can control the transport of the ion inside the nanopore. The device therefore acts as a binary storage unit, storing 0 when no ion passes through the pore and storing 1 when a single ion passes through the pore. We therefore postulate that the device can be used for fluidic computing applications in chemistry and biology, mimicking a computer. Furthermore, the trapped ion stores a finite charge in the Coulomb blockade region; hence the device also acts as a supercapacitor.

Keywords: Graphene, single ion transport, Coulomb blockade, fluidic computer, super capacitor.

172 Numerical Modelling of Shear Zone and Its Implications on Slope Instability at Letšeng Diamond Open Pit Mine, Lesotho

Authors: M. Ntšolo, D. Kalumba, N. Lefu, G. Letlatsa

Abstract:

Rock mass damage due to shear tectonic activity has been investigated largely in geoscience, where fluid transport is of major interest. However, little has been studied on the effect of shear zones on rock mass behavior and its impact on the stability of rock slopes. At Letšeng Diamonds open pit mine in Lesotho, the shear zone, composed of sheared kimberlite material, calcite and altered basalt, forms part of the haul ramp into the main pit cut 3. The alarming rate at which the shear zone is deteriorating has triggered concerns about both local and global stability of the pit walls. This study presents the numerical modelling of the open pit slope affected by the shear zone at Letšeng Diamond Mine (LDM). Analysis of the slope involved development of the slope model using the two-dimensional finite element code RS2. Interfaces between the shear zone and the host rock were represented by special joint elements incorporated in the finite element code. The analysis of structural geological mapping data provided a good platform to understand the joint network. Major joints, including the shear zone, were incorporated into the model for simulation. This approach proved successful by demonstrating that continuum modelling can be used to evaluate the evolution of stresses, strain, plastic yielding and failure mechanisms that are consistent with field observations. Structural control due to the geological shear zone structure proved to be important in its location, size and orientation. Furthermore, the model analyzed slope deformation and the possibility of sliding along the shear zone interfaces. This type of approach can predict shear zone deformation and failure mechanisms; hence, mitigation strategies can be deployed for the safety of human lives and property within mine pits.

Keywords: Numerical modeling, open pit mine, shear zone, slope stability.

171 Hydrogen and Diesel Combustion on a Single Cylinder Four Stroke Diesel Engine in Dual Fuel Mode with Varying Injection Strategies

Authors: Probir Kumar Bose, Rahul Banerjee, Madhujit Deb

Abstract:

The present energy situation and concerns about global warming have stimulated active research interest in non-petroleum, carbon-free compounds and non-polluting fuels, particularly for the transportation, power generation, and agricultural sectors. Environmental concerns and the limited amount of petroleum fuels have created interest in the development of alternative fuels for internal combustion (IC) engines. Petroleum crude reserves, however, are declining, and consumption of transport fuels, particularly in developing countries, is increasing at high rates. A severe shortage of liquid fuels derived from petroleum may be faced in the second half of this century. Recently, more and more stringent environmental regulations enacted in the USA and Europe have led to research and development activities on clean alternative fuels. Among the gaseous fuels, hydrogen is considered to be one of the clean alternative fuels and an interesting candidate for future internal combustion engine based power trains. In this experimental investigation, the performance and combustion analysis were carried out on a direct injection (DI) diesel engine using hydrogen with diesel, following the TMI (Time Manifold Injection) technique at different injection timings of 10°, 45° and 80° ATDC using an electronic control unit (ECU), and the injection durations were controlled. Further, the tests were carried out at a constant speed of 1500 rpm under different load conditions, and it was observed that brake thermal efficiency increases with increasing load, with a maximum gain of 15% at full load during all hydrogen injection strategies. It was also observed that with the increase in hydrogen energy share, BSEC started decreasing, and it reduced by a maximum of 9% compared to baseline diesel at 10° ATDC injection during maximum injection, proving the exceptional combustion properties of hydrogen.

Keywords: Hydrogen, performance, combustion, alternative fuels.

170 Two-Level Identification of HVAC Consumers for Demand Response Potential Estimation Based on Setpoint Change

Authors: M. Naserian, M. Jooshaki, M. Fotuhi-Firuzabad, M. Hossein Mohammadi Sanjani, A. Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters have facilitated the utilization of demand-side resources which can enhance stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, selection of consumers with higher potentials is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which due to the heat capacity of buildings feature relatively high flexibility, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature and bearing in mind the high investments required for control systems enabling direct load control demand response programs, in this paper, a solution is presented to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for the implementation of direct load control programs for residential HVAC systems and for estimating the demand response potentials in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data using K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.
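
A minimal sketch of the first step, clustering consumers by the correlation between hourly consumption and hourly temperature with K-means, is given below on synthetic data; the cluster count, data and thresholds are illustrative assumptions, not the paper's.

```python
# Sketch (assumed details) of clustering consumers by the Pearson correlation
# between their hourly consumption and the hourly ambient temperature.
import numpy as np
from sklearn.cluster import KMeans

def temperature_correlation(consumption, temperature):
    """consumption: (n_consumers, n_hours); temperature: (n_hours,)."""
    temp = (temperature - temperature.mean()) / temperature.std()
    cons = (consumption - consumption.mean(axis=1, keepdims=True)) / \
           consumption.std(axis=1, keepdims=True)
    return (cons * temp).mean(axis=1)          # Pearson r per consumer

rng = np.random.default_rng(0)
temperature = 25 + 8 * rng.standard_normal(24 * 30)          # synthetic month of data
base_load   = rng.uniform(0.2, 1.0, size=(100, 1))
ac_share    = rng.uniform(0.0, 1.5, size=(100, 1))
consumption = base_load + ac_share * np.maximum(temperature - 24, 0) \
              + 0.1 * rng.standard_normal((100, temperature.size))

r = temperature_correlation(consumption, temperature)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(r.reshape(-1, 1))
# Clusters with a high mean correlation are candidates for high AC demand.
for k in range(3):
    print(f"cluster {k}: mean r = {r[labels == k].mean():.2f}, size = {(labels == k).sum()}")
```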

Keywords: Data-driven analysis, demand response, direct load control, HVAC system.

169 Using the Monte Carlo Simulation to Predict the Assembly Yield

Authors: C. Chahin, M. C. Hsu, Y. H. Lin, C. Y. Huang

Abstract:

Electronic products that achieve high levels of integrated communications, computing, entertainment and multimedia features in small, stylish and robust new form factors are winning in the marketplace. Due to the high costs that an industry may incur, and because high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; but today's customers demand miniaturization, low costs, high performance and excellent reliability, making yield maximization a never-ending pursuit of an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed in order to predict the assembly process. In order to evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information to ease the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors such as boards, placement, components, the materials from which the components are made, and processes must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that depends on repeated random sampling to compute results. This method is utilized to recreate the simulation of placement and assembly processes within a production line.
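
The Monte Carlo idea can be illustrated with the short sketch below, which samples random placement offsets and counts the fraction that stay within an allowed misalignment; the error spread and tolerance are invented for the example, not taken from the paper.

```python
# Hedged illustration of a Monte Carlo placement-yield estimate: draw random
# placement offsets and count the share of assemblies within tolerance.
import random

def estimate_yield(trials=100_000, sigma_xy=0.03, tolerance=0.08):
    """sigma_xy: std dev of placement error per axis (mm); tolerance: max offset (mm)."""
    good = 0
    for _ in range(trials):
        dx = random.gauss(0.0, sigma_xy)     # machine accuracy in x
        dy = random.gauss(0.0, sigma_xy)     # machine accuracy in y
        if (dx * dx + dy * dy) ** 0.5 <= tolerance:
            good += 1
    return good / trials

print(f"estimated placement yield: {estimate_yield():.3%}")
```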

Keywords: Monte Carlo simulation, placement yield, PCB characterization, electronics assembly.

168 Result Validation Analysis of Steel Testing Machines

Authors: Wasiu O. Ajagbe, Habeeb O. Hamzat, Waris A. Adebisi

Abstract:

Structural failures occur due to a number of reasons. These may include under-design, poor workmanship, substandard materials, misleading laboratory tests and more. Reinforcing steel bar is an important construction material; hence its properties must be accurately known before being utilized in construction. Understanding these properties involves carrying out mechanical tests prior to design and during construction to ascertain correlation, using a steel testing machine which is usually not readily available due to the location of the project. This study was conducted to determine the reliability of reinforcing steel testing machines. A reconnaissance survey was conducted to identify laboratories where yield and ultimate tensile strength tests can be carried out. Six laboratories were identified within Ibadan and environs. However, only four were functional at the time of the study. Three steel samples were tested for yield and tensile strengths, using a steel testing machine, at each of the four laboratories (LM, LO, LP and LS). The yield and tensile strength results obtained from the laboratories were compared with the manufacturer's specification using a reliability analysis programme. A structured questionnaire was administered to the operators in each laboratory to consider their impact on the test results. The average values of the manufacturer's tensile strength and yield strength are 673.7 N/mm2 and 559.7 N/mm2, respectively. The tensile strengths obtained from the four laboratories LM, LO, LP and LS are 579.4, 652.7, 646.0 and 649.9 N/mm2, respectively, while their yield strengths are 453.3, 597.0, 550.7 and 564.7 N/mm2, respectively. The minimum tensile-to-yield strength ratio is 1.08 for BS 4449:2005 and 1.15 for ASTM A615. The tensile-to-yield strength ratios from the four laboratories are 1.28, 1.09, 1.17 and 1.15 for LM, LO, LP and LS, respectively. These ratios show that the results obtained from all the laboratories meet the requirements of the codes used for the test. The reliability analysis shows varying levels of reliability between the manufacturer's specification and the results obtained from the laboratories. Three of the laboratories, LO, LS and LP, have high reliability values relative to the manufacturer, i.e. 0.798, 0.866 and 0.712, respectively. The fourth laboratory, LM, has a reliability value of 0.100. Steel tests should be carried out in a laboratory using the same code in which the structural design was carried out. More emphasis should be laid on the importance of code provisions.
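
The quoted tensile-to-yield ratios can be reproduced directly from the strength values given in the abstract:

```python
# Quick check of the tensile-to-yield ratios quoted above (values from the abstract).
labs = {"LM": (579.4, 453.3), "LO": (652.7, 597.0),
        "LP": (646.0, 550.7), "LS": (649.9, 564.7)}   # (tensile, yield) in N/mm2
for lab, (ft, fy) in labs.items():
    print(f"{lab}: ft/fy = {ft / fy:.2f}")   # compare with 1.08 (BS 4449) / 1.15 (ASTM A615)
```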

Keywords: Reinforcing steel bars, reliability analysis, tensile strength, universal testing machine, yield strength.

167 Urban Corridor Management Strategy Based on Intelligent Transportation System

Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain

Abstract:

Intelligent Transportation System (ITS) is the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper presents past studies on several ITS applications that have been successfully deployed in urban corridors of India and abroad, and reviews the current scenario and the methodology considered for planning, design, and operation of traffic management systems. This paper also presents the effort made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of a divided road network of six and eight lanes. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a radar gun, mobile GPS and a stopwatch. From the analysis, the performance interpretations included identification of peak and off-peak hours, congestion and level of service (LOS) at mid-blocks, and delay, followed by plotting speed contours and recommending urban corridor management strategies. From the analysis, it is found that ITS-based urban corridor management strategies will be useful to reduce congestion, fuel consumption and pollution so as to provide comfort and efficiency to the users. The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads.

Keywords: Congestion, ITS Strategies, Mobility, Safety.

166 Multistage Condition Monitoring System of Aircraft Gas Turbine Engine

Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev

Abstract:

Research shows that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology, using Fuzzy Logic and Neural Network methods, at these diagnosing stages is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Studies of the changes in skewness and kurtosis coefficient values show that the distributions of GTE work parameters have a fuzzy character. Hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary at the preliminary identification of the engines' technical condition. Studies of the changes in correlation coefficient values also show their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is proposed for model selection. When sufficient information is available, it is proposed to use a recurrent algorithm for aviation GTE technical condition identification (Hard Computing technology) based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (the new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
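
The recursive least squares step mentioned above can be sketched in its standard textbook form as follows; this is a generic formulation, not necessarily the exact variant the authors use.

```python
# Standard recursive least-squares update for identifying the coefficients of a
# linear model from noisy measurements; parameters and data are illustrative.
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """theta: parameter estimate (n, 1); P: covariance (n, n); x: regressor (n,); y: scalar."""
    x = x.reshape(-1, 1)
    K = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
    err = y - (x.T @ theta).item()             # prediction error
    theta = theta + K * err
    P = (P - K @ x.T @ P) / lam
    return theta, P

# Identify y = 2*x1 - 0.5*x2 from noisy samples
rng = np.random.default_rng(1)
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(200):
    x = rng.standard_normal(2)
    y = 2.0 * x[0] - 0.5 * x[1] + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
print(theta.ravel())    # converges to approximately [2.0, -0.5]
```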

Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics

165 Identification of the Antimicrobial Effect of Liquorice Extracts on Gram-Positive Bacteria: Determination of Minimum Inhibitory Concentration and Mechanism of Action Using a luxABCDE Reporter Strain

Authors: Madiha El Awamie, Catherine Rees

Abstract:

Natural preservatives have been used as alternatives to traditional chemical preservatives; however, a limited number have been commercially developed and many remain to be investigated as sources of safer and effective antimicrobials. In this study, we investigated the antimicrobial activity of an extract of Glycyrrhiza glabra (liquorice) that was provided as a waste material from the production of liquorice flavourings for the food industry, to determine whether it retained the expected antimicrobial activity so that it could be used as a natural preservative. The antibacterial activity of the liquorice extract was screened for evidence of growth inhibition against eight species of Gram-negative and Gram-positive bacteria, including Listeria monocytogenes, Listeria innocua, Staphylococcus aureus, Enterococcus faecalis and Bacillus subtilis. The Gram-negative bacteria tested included Pseudomonas aeruginosa, Escherichia coli and Salmonella typhimurium, but none of these were affected by the extract. In contrast, for all of the Gram-positive bacteria tested, growth was inhibited, as monitored using optical density. However, parallel studies using viable counts indicated that the cells were not killed, meaning that the extract was bacteriostatic rather than bactericidal. The Minimum Inhibitory Concentration [MIC] and Minimum Bactericidal Concentration [MBC] of the extract were also determined, and a concentration of 50 µg ml-1 was found to have a strong bacteriostatic effect on Gram-positive bacteria. Microscopic analysis indicated that there were changes in cell shape, suggesting that the cell wall was affected. In addition, the use of a reporter strain of Listeria transformed with the bioluminescence genes luxABCDE indicated that cell energy levels were reduced when treated with either 12.5 or 50 µg ml-1 of the extract, with the reduction in light output being proportional to the concentration of the extract used. Together, these results suggest that the extract inhibits the growth of Gram-positive bacteria only, by damaging the cell wall and/or membrane.

Keywords: Antibacterial activity, bioluminescence, Glycyrrhiza glabra, natural preservative.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1677
164 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities > 10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation, and the process is called Histotripsy. The ability to fractionate tissue from outside the body has many clinical applications, including the destruction of tumor masses. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to particles of sub-micron size. The liquefied tissue is eventually absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low frequency signal modulates the amplitude of a higher frequency FU wave. The cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The Amplitude Modulation technique can operate in both continuous wave (CW) and pulse wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0 % (thermal effect) to 100 % (cavitation effect), thus allowing a range of ablating effects from hyperthermia to Histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (chalk) material. The technique, when combined with MR or ultrasound imaging, will present a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
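The amplitude modulation idea can be illustrated with a short numerical sketch: a low-frequency modulating signal scales the envelope of a higher-frequency focused-ultrasound carrier, and the modulation index sets how far that envelope swings. The sampling rate, carrier and modulating frequencies, and modulation index below are illustrative placeholders, not the values used in the study.

```python
import numpy as np

fs = 50e6          # sampling rate (Hz), illustrative
f_carrier = 1e6    # focused-ultrasound carrier frequency (Hz), illustrative
f_mod = 10e3       # low-frequency modulating signal (Hz), illustrative
m = 1.0            # modulation index: 0 -> pure carrier (thermal), 1 -> full AM (cavitation)

t = np.arange(0, 2e-3, 1 / fs)                    # 2 ms of drive signal
carrier = np.sin(2 * np.pi * f_carrier * t)       # high-frequency FU wave
envelope = 1 + m * np.sin(2 * np.pi * f_mod * t)  # low-frequency amplitude modulation
am_wave = envelope * carrier / (1 + m)            # normalised AM drive signal
```

Sweeping m from 0 to 1 corresponds to the range of effects the abstract describes, from a purely thermal exposure to the fully modulated regime, while changing f_mod changes how often the pressure envelope reaches its peaks.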

Keywords: Focused ultrasound therapy, Histotripsy, generation of inertial cavitation, mechanical tissue ablation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1966
163 Verifying the Supremacy of Volume Modulated Arc Therapy Over Intensity Modulated Radiation Therapy: Pelvis Malignancies’ Perspective

Authors: M. Umar Farooq, T. Ahmad Afridi, M. Zia-Ul-Islam Arsalan, U. Hussain Haider, S. Ullah

Abstract:

Cancer, a leading cause of death worldwide, can be treated with various techniques, including radiation therapy, which uses ionizing radiation to target cancer cells. On the basis of source placement, radiation therapy is of two types, i.e., brachytherapy and External Beam Radiotherapy (EBRT). EBRT has evolved from 2-D conventional therapy to 3-D Conformal Radiotherapy (3D-CRT) and then Intensity-Modulated Radiotherapy (IMRT). IMRT improves dose conformity and the sparing of organs at risk. Volumetric Modulated Arc Therapy (VMAT) is a modern technique that delivers treatment in arcs as the gantry rotates. In this report, a dosimetric comparison was performed between IMRT and VMAT. The study was conducted in the Radiotherapy Department of the Institute of Nuclear Medicine and Oncology Lahore (INMOL). Ten patients with prostate carcinoma were selected to compare the two methods. Simulation of these patients was done with the help of a CT simulator, and all target volumes and organs were delineated by the oncologists. Suitable fields/arcs covering the volumes effectively were then applied, followed by optimization of the plans for both techniques for every patient. Finally, the evaluation parameters, e.g., Conformity Index (CI), volume coverage, Homogeneity Index (HI), organ doses, and Monitor Units (MUs), were compared. VMAT achieved better target conformity (CI = 1.16) than IMRT (CI = 1.24) and was also better at sparing organs. In addition, VMAT required fewer MUs (733) than IMRT (2149). From this study, it is concluded that VMAT is a better treatment technique than IMRT: it enhances treatment efficiency, as it takes less time to deliver the required dose, and it delivers less scatter dose to the patient.
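For readers unfamiliar with the dose metrics, the sketch below shows one common way to compute a conformity index and a homogeneity index from a 3-D dose grid. The abstract does not state which definitions the authors used, so the RTOG-style CI and the (D2 - D98)/D50 HI here are assumptions, and the dose grid is a toy example.

```python
import numpy as np

def conformity_index(dose, target_mask, prescription):
    """RTOG-style CI: volume receiving the prescription dose / target volume.
    (Assumed definition; the abstract does not state which CI was used.)"""
    v_prescription = np.count_nonzero(dose >= prescription)
    v_target = np.count_nonzero(target_mask)
    return v_prescription / v_target

def homogeneity_index(dose, target_mask):
    """HI = (D2 - D98) / D50 over the target volume (assumed definition)."""
    target_dose = dose[target_mask]
    d2, d50, d98 = np.percentile(target_dose, [98, 50, 2])
    return (d2 - d98) / d50

# Toy example: a noisy dose grid with a cubic target region
rng = np.random.default_rng(1)
dose = rng.normal(loc=60.0, scale=2.0, size=(50, 50, 50))  # dose in Gy, illustrative
target = np.zeros_like(dose, dtype=bool)
target[20:30, 20:30, 20:30] = True
print(conformity_index(dose, target, prescription=58.0))
print(homogeneity_index(dose, target))
```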

Keywords: 2-D Conventional Radiotherapy, 3-D Conformal Radiotherapy, Intensity Modulated Radiotherapy, Prostate Carcinoma, Radiotherapy, Volumetric Modulated Arc Therapy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 359
162 “Post-Industrial” Journalism as a Creative Industry

Authors: Lynette Sheridan Burns, Benjamin J. Matthews

Abstract:

The context of post-industrial journalism is one in which the material circumstances of mechanical publication have been displaced by digital technologies, increasing the distance between the orthodoxy of the newsroom and the culture of journalistic writing. Content is, with growing frequency, created for delivery via the internet, publication on web-based ‘platforms’ and consumption on screen media. In this environment, the question today is not ‘who is a journalist?’ but ‘what is journalism?’. The changes bring into sharp relief new distinctions between journalistic work and journalistic labor, providing a key insight into the current transition between the industrial journalism of the 20th century and the post-industrial journalism of the present. In the 20th century, the work of journalists and journalistic labor went hand in hand, as most journalists were employees of news organizations, whilst in the 21st century evidence of a decoupling of ‘acts of journalism’ (work) and journalistic employment (labor) is beginning to appear. This ‘decoupling’ of the work and labor that underpins journalism practice is far reaching in its implications, not least for institutional structures. Under these conditions, we are witnessing the emergence of expanded ‘entrepreneurial’ journalism, based on smaller, more independent and agile, if less stable, enterprise constructs that are a feature of creative industries. Entrepreneurial journalism is realized in a range of organizational forms, from social enterprise through to profit-driven start-ups and hybrids of the two. In all instances, however, the primary motif of the organization is an ideological definition of journalism. An example is the Scoop Foundation for Public Interest Journalism in New Zealand, which owns and operates Scoop Publishing Limited, a not-for-profit company and social enterprise that publishes an independent news site that claims to have over 500,000 monthly users. Our paper demonstrates that this journalistic work meets the ideological definition of journalism; it is conducted within the creative industries using an innovative organizational structure that offers a new, viable post-industrial future for journalism.

Keywords: Creative industries, digital communication, journalism, post-industrial.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1913
161 Similarity Solutions of Nonlinear Stretched Biomagnetic Flow and Heat Transfer with Signum Function and Temperature Power Law Geometries

Authors: M. G. Murtaza, E. E. Tzirtzilakis, M. Ferdows

Abstract:

Biomagnetic fluid dynamics is an interdisciplinary field comprising engineering, medicine, and biology. Biofluid dynamics is directed towards finding and developing solutions to some human body-related diseases and disorders. This article describes the flow and heat transfer of a two-dimensional, steady, laminar, viscous, and incompressible biomagnetic fluid over a non-linear stretching sheet in the presence of a magnetic dipole. Our model treats blood as a biomagnetic fluid, consistent with biomagnetic fluid dynamics (BFD), and is based on the principles of ferrohydrodynamics (FHD). The temperature at the stretching surface is assumed to follow a power-law variation, and the stretching velocity is assumed to have a non-linear form involving the signum (sign) function. The governing boundary layer equations with their boundary conditions are reduced to coupled higher-order equations using the usual transformations. Numerical solutions of the governing momentum and energy equations are obtained by an efficient numerical technique based on the finite difference method with central differencing, tridiagonal matrix manipulation, and an iterative procedure. Computations are performed for a wide range of the governing parameters, such as the magnetic field parameter, the power-law temperature exponent, and other involved parameters, and the effect of these parameters on the velocity and temperature fields is presented. It is observed that for increasing values of the magnetic parameter, the velocity distribution decreases while the temperature distribution increases. The finite difference results for the skin-friction coefficient and the rate of heat transfer are also discussed. This study has an important bearing on targeting efficiency: to achieve high targeting efficiency, a high magnetic field is required in the targeted body compartment.
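Central-difference discretisation of boundary layer equations of this kind typically produces tridiagonal linear systems, which can be solved directly with the Thomas algorithm. The sketch below is a generic implementation under that assumption; it is not tied to the specific discretised momentum or energy equations of the paper, and the example system is purely illustrative.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d, where A has sub-diagonal a,
    main diagonal b, and super-diagonal c (generic sketch of the kind of
    solver used with central-difference discretisations).

    a : sub-diagonal, length n (a[0] unused)
    b : main diagonal, length n
    c : super-diagonal, length n (c[-1] unused)
    d : right-hand side, length n
    """
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination sweep
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Illustrative usage: a small diffusion-type system with constant coefficients
n = 5
a = np.full(n, 1.0)
b = np.full(n, -2.0)
c = np.full(n, 1.0)
d = np.full(n, -0.1)
print(thomas_solve(a, b, c, d))
```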

Keywords: Biomagnetic fluid, FHD, nonlinear stretching sheet, slip parameter.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 817
160 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock can come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, and both are evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
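A minimal sketch of the kind of comparison the study describes is given below: segment individual buildings into type/age archetypes, represent each segment by its average demand, and measure how far that average deviates from the individual building demands. The field names, the toy data, and the absolute-error metric are illustrative assumptions, not taken from the paper.

```python
import pandas as pd

# Illustrative individual-building records (in practice: thousands of buildings)
buildings = pd.DataFrame({
    "building_type": ["detached", "detached", "apartment", "apartment", "apartment"],
    "construction_period": ["1961-1972", "1961-1972", "1973-1978", "1973-1978", "1973-1978"],
    "heat_demand_kwh_m2": [145.0, 160.0, 95.0, 110.0, 102.0],
})

# Archetype demand = mean demand of each (type, age) segment
archetypes = (buildings
              .groupby(["building_type", "construction_period"])["heat_demand_kwh_m2"]
              .mean()
              .rename("archetype_demand_kwh_m2"))

# Deviation of each individual building from its archetype value
merged = buildings.join(archetypes, on=["building_type", "construction_period"])
merged["abs_error_kwh_m2"] = (merged["heat_demand_kwh_m2"]
                              - merged["archetype_demand_kwh_m2"]).abs()
print(merged[["building_type", "construction_period", "abs_error_kwh_m2"]])
```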

Keywords: Building stock energy modelling, energy-savings, archetype.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 735