Search results for: large array
196 Comparative in silico and in vitro Study of N-(1-Methyl-2-Oxo-2-N-Methyl Anilino-Ethyl) Benzene Sulfonamide and Its Analogues as an Anticancer Agent
Authors: Pamita Awasthi, Kirna, Shilpa Dogra, Manu Vatsal, Ritu Barthwal
Abstract:
Doxorubicin, also known as Adriamycin, is an anthracycline-class drug used in cancer chemotherapy. It is used in the treatment of non-Hodgkin’s lymphoma, multiple myeloma, acute leukemia, breast cancer, lung cancer, endometrial cancer and ovarian cancer. It functions by intercalating DNA and ultimately killing cancer cells. The major side effects of doxorubicin are hair loss, myelosuppression, nausea and vomiting, oesophagitis, diarrhea, heart damage and liver dysfunction. The fact that minor modifications in the structure of a compound can produce large variations in biological activity prompted us to carry out the synthesis of sulfonamide derivatives. The sulfonamide moiety is an important feature with a broad spectrum of biological activities such as antiviral, antifungal, diuretic, anti-inflammatory, antibacterial and anticancer activity. The structure of the synthesized compound N-(1-methyl-2-oxo-2-N-methyl anilinoethyl) benzene sulfonamide was confirmed by proton nuclear magnetic resonance (1H NMR), 13C NMR, mass and FTIR spectroscopic tools to assign the positions of all protons and hence the stereochemistry of the molecule. Further, we report the binding potential of the synthesized sulfonamide analogues in comparison to the drug doxorubicin using AutoDock 4.2 software. The computational binding energy (B.E.) and inhibitory constant (Ki) have been evaluated for the synthesized compound in comparison with doxorubicin against the Poly(dA-dT).Poly(dA-dT) and Poly(dG-dC).Poly(dG-dC) sequences. The in vitro cytotoxicity study against human breast cancer cell lines confirms the better anticancer activity of the synthesized compound over the currently used anticancer drug doxorubicin: the IC50 value of the synthesized compound is 7.12 μM, whereas that of doxorubicin is 7.2 μM.
Keywords: Anticancer, Auto Dock, Doxorubicin, Sulfonamide.
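As background on how the two docking quantities above are linked, AutoDock-style estimated inhibition constants are commonly computed from the binding free energy through Ki = exp(ΔG/RT); a minimal Python sketch of that conversion (the binding energy used here is illustrative only, not a value from the study):

```python
import math

def inhibition_constant(delta_g_kcal_per_mol, temperature_k=298.15):
    """Convert an AutoDock-style binding free energy (kcal/mol) into an
    estimated inhibition constant Ki (mol/L) via Ki = exp(dG / (R*T))."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    return math.exp(delta_g_kcal_per_mol / (R * temperature_k))

# Example: a hypothetical binding energy of -7.5 kcal/mol corresponds to a
# low-micromolar inhibition constant.
print(inhibition_constant(-7.5))  # ~3.2e-6 M, i.e. ~3.2 uM
```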
195 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures
Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen
Abstract:
Superhydrophobic surfaces are abundant in nature. Several surfaces such as the wings of butterflies, the legs of water striders, the feet of geckos and the lotus leaf show extreme water-repellence behaviour. Self-cleaning, stain-free fabrics, spill-resistant protective wear, and drag reduction in micro-fluidic devices are a few applications of superhydrophobic surfaces. In order to design robust superhydrophobic surfaces, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating a large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. A water contact angle as high as 159±1° and a contact angle hysteresis of less than 8° were observed. Drop impact studies on the superhydrophobic paper surface were carried out by impinging water droplets and capturing their dynamics through high-speed imaging. During drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop, and parameters such as contact time and normalized spread diameter were obtained. In contrast to earlier literature reports, we observed the contact time to be dependent on impact velocity on the superhydrophobic surface. The total contact time was split into two components, spread time and recoil time. The recoil time was found to be dependent on the impact velocity, while the spread time on the surface did not show much variation with the impact velocity. Further, the normalized spreading parameter was found to increase with increasing impact velocity.
Keywords: Contact angle, contact angle hysteresis, contact time, superhydrophobic.
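The Weber-number sweep described above follows directly from We = ρv²D/σ; a small Python sketch of that relation (the 2 mm drop diameter and room-temperature water properties are assumed values, not taken from the study):

```python
def weber_number(velocity_m_s, drop_diameter_m, density=998.0, surface_tension=0.072):
    """Weber number We = rho * v^2 * D / sigma for a water drop (properties at ~20 C)."""
    return density * velocity_m_s**2 * drop_diameter_m / surface_tension

# Example: sweeping impact velocity for a hypothetical 2 mm drop covers the We = 20-80 range.
for v in (0.85, 1.2, 1.5, 1.7):
    print(f"v = {v} m/s -> We = {weber_number(v, 2e-3):.0f}")
```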
194 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan
Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid
Abstract:
In geophysical exploration surveys, the quality of acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach’s alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is reported to be plain, tectonically least affected and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also indicated the bias and erratic nature of the acquired database. The low estimated value of alpha (α) in Cronbach’s alpha test confirmed the poor reliability of the acquired database. A database of very low quality needs excessive static correction or, in some cases, reacquisition of the data, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to make much more informed decisions in the hydrocarbon exploration field.
Keywords: Data quality, null hypothesis, seismic lines, seismic reflection survey.
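For readers unfamiliar with the two main statistics named above, a short Python sketch of NRMSE and Cronbach's alpha on hypothetical pick data (illustrative numbers only; the study's own seismic database is not reproduced here):

```python
import numpy as np
from scipy import stats

def nrmse(observed, predicted):
    """Normalized root mean square error, here normalized by the observed range."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_cases x k_items) array; values near 1 indicate reliability."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical picked vs. predicted travel times (ms) on one seismic line.
observed = [412, 418, 430, 445, 460, 478]
predicted = [410, 425, 433, 440, 470, 474]
print(nrmse(observed, predicted))
print(cronbach_alpha(np.column_stack([observed, predicted])))
print(stats.ttest_ind(observed, predicted))  # one of the null-hypothesis tests used in the study
```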
193 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may involve tolerance levels determined by traditional deterministic models, producing acceptable results at burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended radiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the quick increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of computational simulations and experimental results showed acceptable agreement. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which exhibit the same behavior as during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: Logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation.
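A minimal sketch of the kind of multivariate logistic regression described above, using the five predictor variables named in the abstract; all numbers are hypothetical placeholders, not the study's experimental data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature matrix: burnup (GWd/MTU), peak power (cal/g), pulse width (ms),
# oxide layer thickness (um), cladding type (0/1); y = 1 if the rod failed in the test.
X = np.array([[62, 110, 30, 45, 0],
              [75, 150, 10, 90, 1],
              [50,  80, 60, 20, 0],
              [70, 140,  9, 80, 1],
              [65, 120, 25, 60, 1],
              [55,  90, 50, 30, 0]])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)
# Probability of failure for a new hypothetical rod at 75 GWd/MTU burnup.
print(model.predict_proba([[75, 130, 12, 85, 1]])[0, 1])
```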
192 Biaxial Testing of Fabrics - A Comparison of Various Testing Methodologies
Authors: O.B. Ozipek, E. Bozdag, E. Sunbuloglu, A. Abdullahoglu, E. Belen, E. Celikkanat
Abstract:
In the textile industry, besides conventional textile products, technical textile goods with added external functional properties are being developed for the technical textile market. These products, especially those produced with weaving technology, are widely preferred in areas such as the sports, geology, medical, automotive, construction and marine sectors. These textile products are exposed to various stresses and large deformations under typical conditions of use. At this point, sufficient and reliable data cannot be obtained with uniaxial tensile tests for the determination of the mechanical properties of such products, mainly due to the biaxial stress state. Therefore, the most preferred approach is a biaxial tensile test method and analysis. These tests and analyses are applied to fabrics with different functional features in order to establish the characteristics and mechanical properties of the product. Planar biaxial tensile tests, cylindrical inflation and bulge tests are generally required for textile products that are used in the automotive, sailing and sports areas and the construction industry, to minimize accidents throughout their service life. Airbags, seat belts and car tires in the automotive sector are also subject to the same biaxial stress states and can be characterized by the same types of experiments. In this study, various biaxial test methods reported in the research literature are compared. Results and discussion are elaborated, mainly focusing on the design of a biaxial test apparatus to obtain applicable experimental data for developing a finite element model. Sample experimental results on a prototype system are presented.
Keywords: Biaxial Stress, Bulge Test, Cylindrical Inflation, Fabric Testing, Planar Tension.
191 Multipath Routing Sensor Network for Finding Crack in Metallic Structure Using Fuzzy Logic
Authors: Dulal Acharjee, Punyaban Patel
Abstract:
For collecting data from all sensor nodes, some changes to the Dynamic Source Routing (DSR) protocol are proposed. At each hop level, a route-ranking technique is used for distributing packets to different selected routes dynamically. For calculating the rank of a route, different parameters such as delay, residual energy and probability of packet loss are used. A hybrid topology of DMPR (Disjoint Multi-Path Routing) and MMPR (Meshed Multi-Path Routing) is formed, where a braided topology is used in different faulty zones of the network. For reducing energy consumption, variable transmission ranges are used instead of a fixed transmission range. For reducing the number of packet drops, a fuzzy logic inference scheme is used to insert different types of delays dynamically. A rule-based system infers membership function strength, which is used to calculate the final delay amount to be inserted into each of the nodes in different clusters. In the braided path, a 'Dual Line ACK Link' scheme is proposed for sending an ACK signal from a damaged node or link to a parent node, to ensure that a link-error or node-failure message is not lost. This paper addresses the theoretical aspects of a model which may be applied for collecting data from any large hanging iron structure with the help of a wireless sensor network. Analyzing these data, however, is the subject of materials science and civil structural construction technology, and that part is out of the scope of this paper.
Keywords: Metallic corrosion, Multi-Path Routing, Disjoint MPR, Meshed MPR, braided path, dual line ACK link, route ranking and Fuzzy Logic.
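As a rough illustration of the route-ranking idea only (not the paper's actual fuzzy inference system), a toy Python scorer that normalises delay, residual energy and loss probability over assumed operating ranges and combines them with assumed weights:

```python
def route_rank(delay_ms, residual_energy_j, loss_prob, w=(0.4, 0.4, 0.2)):
    """Toy route score: membership-like normalisation of delay, residual energy and
    packet-loss probability, combined with weights; a higher score means a better route."""
    good_delay = max(0.0, min(1.0, (200.0 - delay_ms) / 200.0))   # 0 ms best, 200 ms worst (assumed)
    good_energy = max(0.0, min(1.0, residual_energy_j / 5.0))     # 5 J assumed as a full battery
    good_loss = 1.0 - max(0.0, min(1.0, loss_prob))
    return w[0] * good_delay + w[1] * good_energy + w[2] * good_loss

# Hypothetical candidate routes: (delay ms, residual energy J, loss probability).
routes = {"A": (40, 3.1, 0.05), "B": (120, 4.5, 0.20), "C": (15, 1.0, 0.02)}
best = max(routes, key=lambda r: route_rank(*routes[r]))
print(best)
```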
190 The Relationship between Fluctuation of Biological Signal: Finger Plethysmogram in Conversation and Anthropophobic Tendency
Authors: Haruo Okabayashi
Abstract:
Human biological signals (pulse wave, brain wave, etc.) have a rhythm which shows fluctuations. This study investigates the relationship between fluctuations of biological signals, as shown by a finger plethysmogram (i.e., finger pulse wave) in conversation, and anthropophobic tendency, and identifies whether the fluctuation could be an index of mental health. 32 college students participated in the experiment. The finger plethysmogram of each subject was measured in the following conversation situations: a fun-memory talking/listening situation and a regrettable-memory talking/listening situation, for three minutes each. Lyspect 3.5 was used to collect the finger plethysmogram data. Since Lyspect calculates the Lyapunov spectrum, it is possible to obtain the largest Lyapunov exponent (LLE). LLE is an indicator of fluctuation and shows the degree to which a trajectory diverges from nearby trajectories in a dynamical system. Before the finger plethysmogram experiment, each participant took the psychological test questionnaire “Anthropophobic Scale,” which measures the social phobia trend close to the consciousness of social phobia. A remarkable relationship was revealed between the fluctuation of the finger plethysmogram and the anthropophobic tendency scale when talking about a regrettable story in conversation: the participants (N=15) who have a low anthropophobic tendency show significantly more fluctuation of finger pulse waves than the participants (N=17) who have a high anthropophobic tendency (F(1, 31)=5.66, p<0.05). That is, the participants who have a low anthropophobic tendency make conversation flexibly using large fluctuations of the biological signal; on the other hand, the participants who have a high anthropophobic tendency constrain a conversation because of small fluctuations. Therefore, fluctuation is not an error but an important drive for making better relationships with others and moving towards the development of interaction. In considering mental health, the fluctuation of biological signals would be an important indicator.
Keywords: Anthropophobic tendency, finger plethysmogram, fluctuation of biological signal, LLE.
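The group comparison reported above is a standard one-way ANOVA on the largest Lyapunov exponents; a minimal Python sketch with synthetic LLE values standing in for the measured ones:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical LLE values from the regrettable-memory talking condition.
lle_low_anthropophobia = np.random.default_rng(0).normal(0.55, 0.10, 15)   # N = 15
lle_high_anthropophobia = np.random.default_rng(1).normal(0.42, 0.10, 17)  # N = 17

f_stat, p_value = f_oneway(lle_low_anthropophobia, lle_high_anthropophobia)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # the study reports F(1, 31) = 5.66, p < 0.05
```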
189 Tactical Urbanism and Sustainability: Tactical Experiences in the Promotion of Active Transportation
Authors: Aline Fernandes Barata, Adriana Sansão Fontes
Abstract:
The overvaluation of automobile use has detrimentally affected the importance of pedestrians within the city and consequently its public spaces. As a way of addressing contemporary urban paradigms, Tactical Urbanism aims to recover and activate spaces through fast and easily applied actions that demonstrate the possibility of large-scale and long-term changes in cities. Tactical interventions have represented an important practice for redefining public spaces and urban mobility. The concept of Active Transportation coheres with the idea of sustainable urban mobility, characterizing means of transportation through human propulsion, such as walking and cycling. This paper aims to debate the potential of Tactical Urbanism in promoting Active Transportation by revealing opportunities for transformation in the urban space of contemporary cities through initiatives that promote the protection and valorization of the presence of pedestrians and cyclists in cities, and that subvert the importance of motorized vehicles. In this paper, we present the character of these actions in two different ways: when they are used as tests for permanent interventions and when they have pre-defined start and end periods. Using recent initiatives as illustrations, we aim to discuss the role of small-scale actions in promoting and incentivizing a more active, healthy, sustainable and responsive urban way of life, presenting how some of them have developed through public policies. For that, we present some examples of tactical actions that illustrate the encouragement of Active Transportation and attempts to balance urban opportunities for pedestrians and cyclists. These include the temporary closure of streets, the creation of new alternatives and more comfortable areas for walking and cycling, and the subversion of uses in public spaces where the usage of cars is predominant.
Keywords: Tactical urbanism, active transportation, sustainable mobility, non-motorized means.
188 Re-Visiting Rumi and Iqbal on Self-Enhancement for Social Responsibility
Authors: Javed Y. Uppal
Abstract:
The background of this study is the great degree of stress that the world is experiencing today, internationally among countries, within a community among people, and even individually within one’s own self. The significance of the study is the attempt to find a solution to this stress in the philosophy of the olden times of Jalaluddin Rumi and, comparatively recently, of Allama Iqbal. The methodology adopted in this paper is first an exploration of the perspectives of these philosophers, which are being consolidated by a number of psychic and spiritual experts of today who are widely read but less followed. This paper further presents brief life sketches of Rumi and Iqbal. It expounds the key concepts proposed by them and the social change that resulted in the times of these two metaphysical philosophers. It is further argued that, with recent advancements in both metaphysics and the physical sciences, the gap between the two is closing; both Rumi and Iqbal emphasized their common essence. The old-time concepts, postulates, and philosophies are hence once again becoming valid. The findings of this paper are that the existence of human empathy, affection and mutual social attraction among humans is still valid. The positive inner belief system that dictates our thoughts and actions is vital. As a conclusion, empathy should enable us to solve our problems collectively. We need to strengthen our inner communication system and to listen to the messages that come to our inner selves. We need to get guidance and strength from them. We need to value common needs and purposes collectively to achieve results. Spiritual energy among us is to be harnessed and utilized. Connectivity is to be recognized to unify and strengthen ties among people. Mutual bonding at small and large group levels is to be employed for the survival of the disadvantaged and the sustainability of empowering trends. With the above guidelines, hopefully, we can define a framework towards a brave and happy new humane world.
Keywords: Belief system, connectivity, human empathy, inner-self, mutual bonding, spiritual energy.
187 The Future of Hospitals: A Systematic Review in the Field of Architectural Design with a Disruptive Research and Development Approach
Authors: María Araya Léon, Ainoa Abella, Aura Murillo, Ricardo Guasch, Laura Clèries
Abstract:
This article aims to examine scientific theory framed within the term 'hospitals of the future' from a multidisciplinary and cross-sectional perspective, to understand the connection that the various cross-sectional areas we studied have with architectural spaces, and to determine the future outlook of the works examined and how they can be classified into the categories of need/solution, evolution/revolution, collective/individual, and preventive/corrective. The changes currently taking place within the context of healthcare demonstrate how important these projects are and the need for companies to face future changes. A systematic review has been carried out, focused on what the hospitals of the future will be like in relation to the elements that form part of their use, design, and architectural space experience, using the WOS database from 2016 to 2019. The large number of works about sensing & big data and the scarce amount related to the area of materials are worth highlighting. Furthermore, no growth concerning future issues is envisaged over time. Regarding classifications, the articles we reviewed address evolutionary and collective solutions more, while preventive and corrective solutions were found at a similar level. Although our research focused on the future of hospitals, there is little evidence representing this approach. We also detected that, given the relevance of research on how the built environment influences human health and well-being, these studies should be promoted within the context of healthcare. This article makes it possible to find evidence on the future perspective from within the domain of hospital architecture, in order to create bridges between the productive sector of architecture and scientific theory. This will make it possible to detect R&D opportunities in each analyzed cross-section.
Keywords: Hospitals, trends, architectural space, disruptive approach.
186 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration
Authors: Bryce Benson, Sooin Lee, Ashwin Belle
Abstract:
Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital signs monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps during patient monitoring. Additionally, during deterioration, the body’s autonomic nervous system activates compensatory mechanisms, causing the vital signs to be lagging indicators of underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool that was designed to help clinicians in the early identification of deteriorating patients. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration prior to the start of an episode of hemodynamic instability (HI) becoming evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 episodes (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 of them while the episode of HI was ongoing. Of the 9 undetected, 5 were not detected by AHI due to either missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of all episodes of HI in this study. These results suggest that AHI could provide an additional ‘pair of eyes’ on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.
Keywords: Clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring.
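A simplified sketch of the lead-time and sensitivity calculation described above (alert and episode times, and the look-back window, are hypothetical placeholders; the real analysis works on timestamped AHI alerts and adjudicated HI episodes):

```python
import numpy as np

def lead_times(alert_times, episode_starts, window=720):
    """Lead time for each HI episode: minutes between the earliest alert inside a
    look-back window and the episode start; episodes with no alert in the window are missed."""
    leads, missed = [], 0
    for start in episode_starts:
        prior = [a for a in alert_times if start - window <= a <= start]
        if prior:
            leads.append(start - min(prior))
        else:
            missed += 1
    return np.array(leads), missed

# Hypothetical alert and episode times in minutes from the start of monitoring.
alerts = [100, 400, 950]
episodes = [160, 420, 430, 1000]
leads, missed = lead_times(alerts, episodes)
print(f"sensitivity = {len(leads) / len(episodes):.2f}, median lead = {np.median(leads):.0f} min")
```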
185 The Influence of Forest Management Histories on Dead Wood and Habitat Trees in the Old Growth Forest in Northern Iran
Authors: Kiomars Sefidi
Abstract:
Dead wood and habitat trees, such as fallen logs, snags, stumps, cracks and loose bark, are regarded as important ecological components of forests, on whose presence within forest ecosystems many forest-dwelling species depend. Meanwhile, their relation to management history in the Caspian forest has gone unreported. The aim of this research was to compare the amounts of dead wood and habitat trees in forests with historically different intensities of management, including forests with long-term implementation of management (PS) and short-term implementation of management (NS), which were compared with a semi-virgin forest (GS). A total of 405 individual dead and habitat trees were recorded and measured at 109 sampling locations. ANOVA revealed that the volume of dead trees in the form and decay classes differs significantly among sites, and that dead volume in the semi-virgin forest is significantly higher than in the managed sites. Comparing the amounts of dead and habitat trees in the three sites showed that dead tree volume is related to management history and differs significantly among the three study sites. Meanwhile, the frequency of habitat trees was significantly different among sites. The highest amount of habitat trees, including trees with cavities, cracks and loose bark and fork-split trees, was recorded in the virgin site, and the lowest was recorded in the sites with long-term implementation of management. It can be concluded that forest management causes a reduction of the amount of dead and habitat trees, especially of large size; thus, managing this forest according to ecologically sustainable principles requires a commitment to maintaining stand structures that allow continued generation of dead trees in a full range of sizes.
Keywords: Cracked trees, forest biodiversity, fork-split trees, nature conservation, sustainable management.
184 Long-Term Durability of Roller-Compacted Concrete Pavement
Authors: Jun Hee Lee, Young Kyu Kim, Seong Jae Hong, Chamroeun Chhorn, Seung Woo Lee
Abstract:
Roller-compacted concrete pavement (RCCP), an environmentally friendly pavement whose load-carrying capacity benefits from both hydration and the aggregate interlock produced by roller compaction, demonstrates superb structural performance for a relatively small amount of water and cement content. Even though excellent structural performance can be secured, it is necessary to investigate roller-compacted concrete (RCC) under environmental loading and its long-term durability under critical conditions. In order to secure long-term durability, an appropriate internal air-void structure is required for this concrete. In this study, a method for improving the long-term durability of RCCP is suggested by analyzing the internal air-void structure and the corresponding durability of RCC. The method involves measurements of air content, air voids, and air-void spacing factors in RCC prepared with different types and dosages of air-entraining agent. The tests are conducted according to the testing criteria in ASTM C 457, ASTM C 672, and KS F 2456. It was found that the freezing-thawing and scaling resistances of RCC without any chemical admixture were quite low. Interestingly, an improvement of the freezing-thawing and scaling resistances was observed for RCC with an appropriate air-entraining (AE) agent content; the relative dynamic elastic modulus was found to be more than 80% for those mixtures. In the RCC mixtures with AE agent, a large amount of air was distributed within a range of 2% to 3%, and an air-void spacing factor ranging between 200 and 300 μm (close to 250 μm, as recommended by PCA) was secured. The long-term durability of RCC has a direct relationship with the air-void spacing factor, and thus it can only be secured by ensuring an adequate air-void spacing factor through the inclusion of the AE agent in the mixture.
Keywords: RCCP, durability, air spacing factor, surface scaling resistance test, freezing and thawing resistance test.
183 Seismic Behaviour of RC Knee Joints in Closing and Opening Actions
Authors: S. Mogili, J. S. Kuang, N. Zhang
Abstract:
Knee joints, the beam-column connections found at the roof level of moment-resisting frame buildings, are inherently different from conventional interior and exterior beam-column connections in the way that forces from adjoining members are transferred into the joint and then resisted by the joint. A knee connection has two distinct load-resisting mechanisms, one each for closing and opening actions, acting simultaneously under reversed cyclic loading. In spite of the many distinct differences in the shear-resistance behaviour of knee joints, there are no special design provisions in the major design codes available across the world, due to a lack of in-depth research on knee connections. To understand the relative importance of opening and closing actions in design, it is imperative to study knee joints under varying shear stresses, especially at higher opening-to-closing shear stress ratios. Three knee joint specimens, under different input shear stresses, were designed to produce varying ratios of input opening to closing shear stresses. The design was carried out in such a way that the ratios of the flexural strength of the beams, with consideration of axial forces, in opening to closing actions were maintained at 0.5, 0.7, and 1.0, thereby resulting in the required variation of opening-to-closing joint shear stress ratios among the specimens. The behaviour of these specimens was then carefully studied in terms of closing and opening capacities, hysteretic behaviour, and envelope curves to understand the differences in joint performance, based on which an attempt to suggest design guidelines for knee joints is made, emphasizing the relative importance of opening and closing actions. Specimens with relatively higher opening stresses were observed to be more vulnerable under the action of seismic loading.
Keywords: Knee-joints, large-scale testing, opening and closing shear stresses, seismic performance.
182 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area
Authors: Kiriaki-Maria Fameli, Vasiliki D. Assimakopoulos, Vasiliki Kotroni
Abstract:
The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to the shift of population to areas previously characterized as rural, the increase of the traffic fleet and the operation of highways. However, few recent modelling studies have been performed due to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied in order to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, when air pollution episodes occurred. A new, updated emission inventory was used, based on official data. Comparison of modelled results with measurements revealed the importance and accuracy of the new Athens emission inventory as compared to previous modelling studies. The model managed to reproduce the local meteorological conditions and the daily ozone and particulate fluctuations at different locations across the GAA. Higher ozone levels were found in suburban and rural areas as well as over the sea to the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.
Keywords: Photochemical modelling, urban pollution, Greater Athens Area, MM5/CAMx.
181 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment
Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar
Abstract:
Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for the distribution of the processes/load of a parallel program onto multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one of the hosts being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved by either transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or distributing load evenly/fairly among the hosts. The algorithms known as load balancing algorithms help to achieve these goals. These algorithms fall into two basic categories: static and dynamic. Whereas static load balancing algorithms (SLB) take decisions regarding the assignment of tasks to processors based on the average estimated values of process execution times and communication delays at compile time, dynamic load balancing algorithms (DLB) are adaptive to changing situations and take decisions at run time. The objective of this work is to identify qualitative parameters for the comparison of the above-said algorithms. In the future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively, based on the comparative parameters.
Keywords: SLB, DLB, Host, Algorithm and Load.
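As a concrete illustration of the dynamic category described above, a minimal greedy balancer that assigns each incoming job to the currently least-loaded host (a simplification; real DLB algorithms also weigh communication delays and migration costs):

```python
import heapq

def assign_jobs_dynamic(job_costs, n_hosts):
    """Greedy dynamic load balancing: each job goes to the currently least-loaded host.
    Returns the per-host total load."""
    heap = [(0.0, h) for h in range(n_hosts)]   # (current load, host id)
    heapq.heapify(heap)
    for cost in job_costs:
        load, host = heapq.heappop(heap)
        heapq.heappush(heap, (load + cost, host))
    return sorted(load for load, _ in heap)

# Example: uneven jobs spread over 3 hosts end up with roughly balanced totals.
print(assign_jobs_dynamic([5, 1, 9, 2, 7, 3, 4], 3))
```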
180 Identification of Promiscuous Epitopes for Cellular Immune Responses in the Major Antigenic Protein Rv3873 Encoded by Region of Difference 1 of Mycobacterium tuberculosis
Authors: Abu Salim Mustafa
Abstract:
Rv3873 is a relatively large protein (371 amino acids in length), and its gene is located in the immunodominant genomic region of difference (RD)1, which is present in the genome of Mycobacterium tuberculosis but deleted from the genomes of all the vaccine strains of Bacillus Calmette-Guerin (BCG) and most other mycobacteria. However, when tested for cellular immune responses using peripheral blood mononuclear cells from tuberculosis patients and BCG-vaccinated healthy subjects, this protein was found to be a major stimulator of cell-mediated immune responses in both groups of subjects. In order to further identify the sequences of immunodominant epitopes and explore their Human Leukocyte Antigen (HLA) restriction for epitope recognition, 24 peptides (25-mers overlapping the neighboring peptides by 10 residues) covering the sequence of Rv3873 were synthesized chemically using fluorenylmethyloxycarbonyl chemistry and tested in cell-mediated immune responses. The results of these experiments helped in the identification of an immunodominant peptide, P9, that was recognized by people expressing varying HLA-DR types. Furthermore, it was also predicted to be a promiscuous binder with multiple epitopes for binding to HLA-DR, HLA-DP and HLA-DQ alleles of HLA class II molecules, which present antigens to T helper cells, and to HLA class I molecules, which present antigens to cytotoxic T cells. In addition, the evaluation of peptide P9 using an immunogenicity predictor server yielded a high score (0.94), which indicated a greater probability of this peptide eliciting a protective cellular immune response. In conclusion, P9, a peptide with multiple epitopes and the ability to bind several HLA class I and class II molecules for presentation to cells of the cellular immune response, may be useful as a peptide-based vaccine against tuberculosis.
Keywords: Mycobacterium tuberculosis, Rv3873, peptides, vaccine
179 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks
Authors: Ashkan Ebadi, Adam Krzyzak
Abstract:
Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find their required products or services (e.g. books, music) by analyzing and aggregating other users’ activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user’s decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types, which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users’ reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
Keywords: Tourism, hotel recommender system, hybrid, implicit features.
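A minimal sketch of the matrix-factorization module mentioned above, using plain SGD on a toy user-hotel rating matrix (the actual system combines this with clustering, sentiment analysis and multi-aspect ratings; all numbers here are illustrative):

```python
import numpy as np

def matrix_factorization(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Plain SGD matrix factorization: approximate the user-hotel rating matrix R
    (0 = unrated) as P @ Q.T, then use the reconstruction to score unrated hotels."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    users, items = np.nonzero(R)
    for _ in range(steps):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P @ Q.T

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
print(np.round(matrix_factorization(R), 1))  # predicted scores fill the zero (unrated) entries
```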
178 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs
Authors: Muhammad Yasir Wadood, Fatemeh Babaeian
Abstract:
With the development of ultra-wideband (UWB) systems, there is a high demand for UWB filters with low insertion loss and wide bandwidth that have a planar structure compatible with the other components of a UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure of the filter. Especially in the higher frequency band, any misalignment of the drilled via hole with the microstrip stubs causes large errors in the measurement results compared to the desired results. Moreover, in this case (high-frequency designs), the line widths of the stubs are very narrow, so highly precise small via holes are required, which increases the cost of fabrication significantly. Also, in this case, there is a risk of fabrication errors. To combat this issue, in this paper, a via-less UWB microstrip filter is proposed which is designed based on a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacement of each via hole with a quarter-wavelength open-circuit stub to avoid the complexity of manufacturing, 2) using a bend structure to reduce the unwanted coupling effects, and 3) minimising the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) was designed and fabricated. The promising results of the simulation and measurement are presented in this paper. The selected substrate for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in most industrial projects. The compact size of the proposed filter is highly beneficial for applications which require a very miniature size of hardware.
Keywords: Band-pass filters, inter-digital filter, microstrip, via-less.
177 Factors Affecting M-Government Deployment and Adoption
Authors: Saif Obaid Alkaabi, Nabil Ayad
Abstract:
Governments constantly seek to offer faster, more secure, efficient and effective services for their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their interior operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative structures of data techniques, mainly in web-dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. Practically, the m-government models, techniques and methods have become the improved version of e-government. M-government offers the potential for applications which will work better, providing citizens with services utilizing mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation due to the fact that a large percentage of their population is young and can adapt to new technology, and the fact that mobile computing devices are more affordable. The use of models of mobile transactions encourages effective participation through the use of mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. The limitations include: the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated tasks concerning the protection of security (including the ability to offer privacy for information), and the management of the legal issues concerning mobile applications and the utilization of services.
Keywords: E-government, m-government, system dependability, system security, trust.
176 Morphemic Analysis Awareness: A Boon or Bane on ESL Students’ Vocabulary Learning Strategy
Authors: Chandrakala Varatharajoo, Adelina Binti Asmawi, Nabeel Abdallah Mohammad Abedalaziz
Abstract:
This study investigated the impact of inflectional and derivational morphemic analysis awareness on ESL secondary school students’ vocabulary learning strategy. The quasi-experimental study was conducted with 106 low-proficiency secondary school students in two experimental groups (inflectional and derivational) and one control group. The students’ vocabulary acquisition was assessed through two measures, a Morphemic Analysis Test and a Vocabulary-Morphemic Test, administered as pretest and posttest before and after an intervention programme. Results of ANCOVA revealed that both experimental groups achieved significant scores in the Morphemic Analysis Test and the Vocabulary-Morphemic Test. However, the inflectional group obtained a fairly higher score than the derivational group. Thus, the results indicated that ESL low-proficiency secondary school students performed better on inflectional morphemic awareness as compared to derivatives. The results also showed that awareness of inflectional morphology contributed more to vocabulary acquisition. Importantly, learning inflectional morphology can help ESL low-proficiency secondary school students to develop both morphemic awareness and vocabulary gains. Theoretically, these findings show that not all morphemes are equally useful to students for their language development. Practically, these findings indicate that morphological instruction should at least be included in remediation and instructional efforts with struggling learners across all grade levels, allowing them to focus on meaning within the word before they attempt the text at large for better comprehension. Also, methodologically, by conducting individualized intervention and assessment, this study provides fresh empirical evidence to support the existing literature on morphemic analysis awareness and vocabulary learning strategy. Thus, a major pedagogical implication of the study is that the morphemic analysis awareness strategy is a definite boon for ESL secondary school students in learning English vocabulary.
Keywords: ESL, instruction, morphemic analysis, vocabulary.
175 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm
Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan
Abstract:
Nowadays, the rapid development of multimedia and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information. Besides that, digital documents are also easy to copy and distribute, and therefore they face many threats. This is a big security and privacy issue; with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information. Nowadays, protection systems can be classified more specifically as information hiding, information encryption, and combinations of hiding and encryption to increase information security. The strength of the science of information hiding is due to the non-existence of standard algorithms to be used in hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods to be followed to discover hidden data. For this reason, the task of this research becomes difficult. In this paper, a new system of information hiding is presented. The proposed system aims to hide information (a data file) in any executable file (EXE) and to detect the hidden file; an implementation of a steganography system which embeds information in an executable file is presented, and (EXE) files have been investigated. The system tries to find a solution to the size of the cover file and to making it undetectable by anti-virus software. The system includes two main functions: first, the hiding of the information in a Portable Executable file (EXE), through the execution of four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); and second, the extraction of the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system has achieved its main goals, such as making the size of the cover file independent of the size of the information, and ensuring that the resulting file does not cause any conflict with anti-virus software.
Keywords: Cryptography, Steganography, Portable Executable File.
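For illustration of the encrypt-then-hide idea only, a Python sketch that AES-GCM-encrypts a payload and appends it to a cover executable; this is a simplification and not the paper's statistical embedding technique, and the marker and field layout are invented for the example:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MARKER = b"--HIDDEN--"  # illustrative delimiter; a real system would disguise the payload

def hide(cover_exe: str, secret_file: str, key: bytes) -> None:
    """Encrypt the secret file with AES-GCM and append nonce + ciphertext after a marker.
    Appending after the PE image keeps the EXE runnable but is only a sketch of the idea."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, open(secret_file, "rb").read(), None)
    with open(cover_exe, "ab") as f:
        f.write(MARKER + nonce + ciphertext)

def extract(stego_exe: str, key: bytes) -> bytes:
    """Locate the marker, split off the nonce and ciphertext, and decrypt."""
    data = open(stego_exe, "rb").read()
    payload = data.split(MARKER, 1)[1]
    nonce, ciphertext = payload[:12], payload[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# key = AESGCM.generate_key(bit_length=256)
# hide("cover.exe", "secret.txt", key); print(extract("cover.exe", key))
```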
174 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach
Authors: Yusuf Garba Baba
Abstract:
The construction industry and housing subsector are fraught with risks that have the potential of negatively impacting the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks have been managed. The recent paradigm shift in the subsector towards a formal risk management approach, in contrast to the hitherto-used rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: a review of the literature for similar projects, expert consultation and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed in carrying out the relative prioritization assessment of the risk factors using computer-based Analytical Hierarchical Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that, although a significant number of risk factors have been identified as relevant to and impacting housing construction projects, the economic risk group and, in particular, ‘changes in demand for houses’ are prioritized by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of a multi-technique identification framework and the AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
Keywords: Risk identification, risk assessment, analytical hierarchical process, multi-criteria decision.
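A brief sketch of the AHP step described above: the priority vector is the principal eigenvector of the pairwise-comparison matrix, and the consistency ratio checks the judgments (the example matrix and the three risk groups compared are hypothetical, not the study's survey data):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector (principal eigenvector) and consistency ratio for an AHP
    pairwise-comparison matrix, using Saaty's random index for the matrix size."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()
    ci = (eigvals[i].real - n) / (n - 1)                      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.45)
    return w, ci / ri

# Hypothetical comparison of three risk groups (economic, political, technical).
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(A)
print(np.round(weights, 2), f"CR = {cr:.2f}")  # CR < 0.10 is conventionally acceptable
```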
173 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow
Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen
Abstract:
Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods to simulate real turbulence structures by using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows, which is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source OpenFOAM CFD libraries. The role of the flux limiter schemes, namely Gamma, superBee, van Albada and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with the well-established Direct Numerical Simulation (DNS) results for studying wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer and van Albada limiters produced more diffusion and overpredicted the velocity profiles, while the superBee scheme reproduced the velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviated slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
Keywords: Flux limiters, MILES, OpenFOAM, turbulence structures, TVD schemes.
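For reference, the superBee, van Leer and van Albada limiter functions compared above have simple closed forms (the OpenFOAM-specific Gamma scheme is omitted here); a short Python sketch of the three, evaluated on the gradient ratio r:

```python
import numpy as np

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r), np.minimum(2 * r, 1.0), np.minimum(r, 2.0)])

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    return np.maximum(0.0, (r**2 + r) / (r**2 + 1.0))

# r is the ratio of consecutive solution gradients; the limiters differ in how much
# numerical diffusion they introduce (superbee is the least diffusive of the three).
r = np.linspace(-1, 3, 9)
for name, f in [("superBee", superbee), ("vanLeer", van_leer), ("vanAlbada", van_albada)]:
    print(name, np.round(f(r), 2))
```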
172 Preparation, Characterisation, and Measurement of the in vitro Cytotoxicity of Mesoporous Silica Nanoparticles Loaded with Cytotoxic Pt(II) Oxadiazoline Complexes
Authors: G. Wagner, R. Herrmann
Abstract:
Cytotoxic platinum compounds play a major role in the chemotherapy of a large number of human cancers. However, due to the severe side effects for the patient and other problems associated with their use, there is a need for the development of more efficient drugs and new methods for their selective delivery to tumours. One way to achieve the latter could be the use of nanoparticulate substrates that can adsorb or chemically bind the drug. In the cell, the drug is supposed to be slowly released, either by physical desorption or by dissolution of the particle framework. Ideally, the cytotoxic properties of the platinum drug unfold only then, in the cancer cell, and over a longer period of time due to the gradual release. In this paper, we report on our first steps in this direction. The binding properties of a series of cytotoxic Pt(II) oxadiazoline compounds to mesoporous silica particles have been studied by NMR and UV/vis spectroscopy. High loadings were achieved when the Pt(II) compound was relatively polar and had been dissolved in a relatively nonpolar solvent before the silica was added. Typically, 6-10 hours were required for complete equilibration, suggesting that the adsorption occurred not only on the outer surface but also in the interior of the pores. The untreated and Pt(II)-loaded particles were characterised by C, H, N combustion analysis, BET/BJH nitrogen sorption, electron microscopy (SEM and TEM) and EDX. With the latter methods we were able to demonstrate the homogeneous distribution of the Pt(II) compound on and in the silica particles, and that no bulk Pt(II) precipitate had formed. The in vitro cytotoxicity in a human cancer cell line (HeLa) has been determined for one of the new platinum compounds adsorbed onto mesoporous silica particles of different sizes, and compared with the corresponding compound in solution. The IC50 data are similar in all cases, suggesting that the release of the Pt(II) compound was relatively fast and possibly occurred before the particles reached the cells. Overall, the platinum drug is chemically stable on silica and retains its activity upon prolonged storage.
Keywords: Cytotoxicity, mesoporous silica, nanoparticles, platinum compounds.
171 Introductory Design Optimisation of a Machine Tool using a Virtual Machine Concept
Authors: Johan Wall, Johan Fredin, Anders Jönsson, Göran Broman
Abstract:
Designing modern machine tools is a complex task. A simulation tool to aid the design work, a virtual machine, has therefore been developed in earlier work. The virtual machine considers the interaction between the mechanics of the machine (including structural flexibility) and the control system. This paper exemplifies the usefulness of the virtual machine as a tool for product development. An optimisation study is conducted aiming at improving the existing design of a machine tool regarding weight and manufacturing accuracy at maintained manufacturing speed. The problem can be categorised as constrained multidisciplinary multiobjective multivariable optimisation. Parameters of the control system and geometric quantities of the machine are used as design variables. This results in a mix of continuous and discrete variables, and an optimisation approach using a genetic algorithm is therefore deployed. The accuracy objective is evaluated according to international standards. The complete system model shows non-deterministic behaviour, and a strategy to handle this based on statistical analysis is suggested. The weight of the main moving parts is reduced by more than 30 per cent and the manufacturing accuracy is improved by more than 60 per cent compared to the original design, with no reduction in manufacturing speed. It is also shown that interaction effects exist between the mechanics and the control, i.e. this improvement would most likely not have been possible with a conventional sequential design approach within the same time, cost and general resource frame. This indicates the potential of the virtual machine concept for contributing to improved efficiency of both complex products and the development process for such products. Companies incorporating such advanced simulation tools in their product development could thus improve their own competitiveness as well as contribute to improved resource efficiency of society at large.
Keywords: Machine tools, Mechatronics, Non-deterministic, Optimisation, Product development, Virtual machine.
170 Influence of Dynamic Loads in the Structural Integrity of Underground Rooms
Authors: M. Inmaculada Alvarez-Fernández, Celestino González-Nicieza, M. Belén Prendes-Gero, Fernando López-Gayarre
Abstract:
Among the many factors affecting the stability of mining excavations, rock bursts and tremors play a special role. These dynamic loads occur practically always and have different sources of generation. The most important of them is the commonly used mining technique itself, which not only disintegrates the rock mass in the area of the planned mining but also creates waves that extend well beyond this area and affect the structural elements. This work analyses the consequences of dynamic loads on the structural elements of an underground room-and-pillar mine in order to avoid roof instabilities. To this end, dynamic loads were evaluated through in situ and laboratory tests and simulated with numerical modelling. Initially, the geotechnical characterization of all materials was carried out by means of large-scale tests. Then, drill holes were made in the roof of the mine and monitored to detect possible discontinuities. Three seismic stations and a triaxial accelerometer were employed to measure the vibrations from blasting tests, establish the dynamic behaviour of the roof and pillars, and derive the transmission laws. Finally, computer simulations with the FLAC3D software were carried out to check the effect of the vibrations on the stability of the roofs. The study shows that in situ tests are more reliable than laboratory samples because they eliminate the effect of heterogeneities, that the pillars act to reduce the amplitude of the vibrations around them, and that the tensile strength of a roof beam, depending on its span, can be exceeded by waves arriving in phase or with a delay. The obtained transmission law allows the design of blasting operations that guarantee safety and prevent the risk of future failures.
Keywords: Dynamic modelling, long term instability risks, room and pillar, seismic collapse.
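As an illustration of how a vibration transmission (attenuation) law of the kind mentioned above might be fitted, the following Python sketch assumes the common scaled-distance form PPV = K (D/√Q)^(-β). The functional form, the measurement data and the fitted constants are assumptions for illustration; the abstract does not state which law the authors derived.

```python
# Hedged sketch: fitting a blast-vibration transmission law of the common
# scaled-distance form PPV = K * (D / sqrt(Q))**(-beta). The form and all
# measurements below are illustrative assumptions, not data from the mine.
import numpy as np
from scipy.optimize import curve_fit

def transmission_law(scaled_distance, K, beta):
    return K * scaled_distance ** (-beta)

distance_m = np.array([10, 20, 35, 50, 80, 120])   # monitoring distance (assumed)
charge_kg = np.array([5, 5, 10, 10, 20, 20])       # charge per delay (assumed)
ppv_mm_s = np.array([95, 42, 30, 18, 11, 6])       # peak particle velocity (assumed)

sd = distance_m / np.sqrt(charge_kg)               # scaled distance
(K, beta), _ = curve_fit(transmission_law, sd, ppv_mm_s, p0=(500, 1.5))

# Predict the PPV expected at 60 m from a 15 kg charge and compare with a limit
ppv_pred = transmission_law(60 / np.sqrt(15), K, beta)
print(f"K = {K:.0f}, beta = {beta:.2f}, predicted PPV = {ppv_pred:.1f} mm/s")
```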
169 'Performance-Based' Seismic Methodology and Its Application in Seismic Design of Reinforced Concrete Structures
Authors: Jelena R. Pejović, Nina N. Serdar
Abstract:
This paper presents an analysis of the “Performance-Based” seismic design method, with the aim of overcoming the perceived disadvantages and limitations of the existing force-based seismic design approach used in engineering practice. Bearing in mind the specific nature of earthquake loading and the fact that the seismic resistance of a structure depends solely on its behaviour in the nonlinear range, the traditional seismic design approach based on force and linear analysis is not adequate. The “Performance-Based” seismic design method is based on nonlinear analysis and can be used in everyday engineering practice. This paper presents the application of the method to an eight-story reinforced concrete building with a combined structural system (a reinforced concrete frame system in one direction and a reinforced concrete ductile wall system in the other). Nonlinear time-history analysis is performed on a spatial model of the structure using the program Perform 3D, in which the structure is subjected to forty real earthquake records. A large number of results were obtained for the considered building. It was concluded that this method allows structural behaviour under earthquakes to be evaluated with a high degree of reliability. Significant differences were obtained in the response of the structure to the various earthquake records. The analysis also showed that the frame structural system did not perform well under earthquake records on soils such as sand and gravel, while the ductile wall system behaved satisfactorily on different types of soil.
Keywords: Ductile wall, frame system, nonlinear time-history analysis, performance-based methodology, RC building.
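As a greatly simplified illustration of nonlinear time-history analysis, the Python sketch below integrates an elastic-perfectly-plastic single-degree-of-freedom oscillator under a synthetic ground motion with a simple explicit time-stepping scheme. The structural parameters and the record are assumptions; the study itself analyses a full spatial building model in Perform 3D under forty recorded motions.

```python
# Greatly simplified nonlinear time-history analysis: an elastic-perfectly-
# plastic SDOF oscillator under a synthetic ground acceleration, advanced with
# explicit (semi-implicit Euler) time stepping. All numbers are assumptions.
import numpy as np

m, k, c = 200e3, 3.0e7, 1.2e5       # mass [kg], stiffness [N/m], damping [N*s/m] (assumed)
fy = 4.0e5                          # yield force [N] (assumed)

dt = 0.005
t = np.arange(0, 20, dt)
ag = 2.5 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # synthetic record [m/s^2]

u = np.zeros_like(t)                # displacement history
v, u_pl = 0.0, 0.0                  # velocity and plastic offset
for i in range(len(t) - 1):
    # elastic-perfectly-plastic restoring force with plastic offset u_pl
    fs = k * (u[i] - u_pl)
    if abs(fs) > fy:
        fs = np.sign(fs) * fy
        u_pl = u[i] - fs / k
    a = (-m * ag[i] - c * v - fs) / m
    v += a * dt
    u[i + 1] = u[i] + v * dt

print(f"peak displacement = {np.max(np.abs(u)) * 1000:.1f} mm, "
      f"residual displacement = {abs(u[-1]) * 1000:.1f} mm")
```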
168 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods
Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin
Abstract:
Nowadays, the data center industry faces the strong challenge of increasing speed and data processing capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of such facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting the existing ones would be a great advance for this industry. Installing a matrix of temperature sensors distributed across the structure of each server would provide the data needed to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and the approach is expensive. Therefore, less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equation with backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed to approximate the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results of greater or lesser accuracy, in accordance with the characteristic truncation error of each scheme.
Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.
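The discretization idea can be sketched for the 1D viscous Burgers' equation u_t + u u_x = ν u_xx, using forward differences in time, backward (upwind) differences for the convective term and central differences for the diffusive term. The grid, viscosity and initial profile below are illustrative assumptions, not parameters of the data-center model.

```python
# Finite-difference sketch of the 1D viscous Burgers' equation:
# forward Euler in time, backward (upwind) differences for u*u_x,
# central differences for the diffusive term. All parameters are illustrative.
import numpy as np

nx, nt = 101, 500
dx, dt = 2.0 / (nx - 1), 0.001
nu = 0.07                                          # "effective" viscosity (assumed)

x = np.linspace(0.0, 2.0, nx)
u = np.where((x >= 0.5) & (x <= 1.0), 2.0, 1.0)    # step-shaped initial profile

for _ in range(nt):
    un = u.copy()
    # interior nodes: upwind convection + central diffusion
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0], u[-1] = un[0], un[-1]                    # fixed boundary values

print(f"maximum field value after {nt} steps: {u.max():.3f}")
```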
167 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of the Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R, together with the different machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA), are presented, and the results and performance metrics are discussed.
Keywords: Time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning.
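As a hedged stand-in for the boosted-tree experiment described above, the following Python sketch trains scikit-learn's GradientBoostingRegressor on synthetic hourly data with the weather features named in the abstract. The original work uses R and Azure Machine Learning's Boosted Decision Tree; the data-generating process here is invented purely for illustration.

```python
# Hedged stand-in for the boosted-tree forecasting experiment: scikit-learn's
# GradientBoostingRegressor on synthetic hourly microgrid data. Feature names
# follow the abstract; the load model and all numbers are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
hours = pd.date_range("2024-01-01", periods=24 * 90, freq="h")
df = pd.DataFrame({
    "temperature": 10 + 8 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 1, len(hours)),
    "wind": rng.gamma(2.0, 2.0, len(hours)),
    "humidity": rng.uniform(30, 90, len(hours)),
    "dew_point": rng.uniform(0, 15, len(hours)),
    "hour": hours.hour,
}, index=hours)
# synthetic microgrid load: base + daily cycle + weather sensitivity + noise
df["load_kw"] = (120 + 40 * np.sin(2 * np.pi * (df["hour"] - 7) / 24)
                 + 1.5 * df["temperature"] - 0.8 * df["wind"]
                 + rng.normal(0, 5, len(df)))

features = ["temperature", "wind", "humidity", "dew_point", "hour"]
train, test = df.iloc[:-24 * 7], df.iloc[-24 * 7:]   # hold out the last week

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(train[features], train["load_kw"])
pred = model.predict(test[features])
print(f"MAPE on hold-out week: {mean_absolute_percentage_error(test['load_kw'], pred):.2%}")
```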