Search results for: account aggregation
659 Urban Furniture in a New Setting of Public Spaces within the Kurdistan Region: Educational Targets and Course Design Process
Authors: Sinisa Prvanov
Abstract:
This research is an attempt to analyze the existing urban form of outdoor public space in Duhok city and to give proposals for its improvement in terms of urban seating. The aim of this research is to identify the main urban furniture elements and the behaviour of users of three central parks of Duhok city, recognizing their functionality and the most common errors. Citizens' needs, directly related to the physical characteristics of the environment, are categorized in terms of contact with nature. Parks, as significant urban environments, express citizens' aesthetic preferences, as well as the need for recreation and play. Citizens around the world desire contact with nature and places where they can socialize, play and practice different activities, but also participate in building their community and feeling the identity of their cities. The aim of this research is also to reintegrate these spaces into the wider urban context of the city of Duhok, to develop new functions by designing new seating patterns, improved urban furniture, and the necessary supporting facilities and equipment. Urban furniture is a product used by an enormous number of people in public space. It suffers a high level of wear and damage due to intense use and exposure to sunlight and weather conditions. Iraq has a hot and dry climate characterized by long, warm, dry summers and short, cold winters. The climate is determined by Iraq's location at the crossroads of Arab desert areas and the subtropical humid climate of the Persian Gulf. The second part of this analysis will describe the possibilities of traditional and contemporary materials, as well as their advantages in urban furniture production, providing users protection from extreme local climate conditions, but also taking into account durability and unwelcome consequences, such as vandalism. 
In addition, this research represents a preliminary stage in the development of the IND307 furniture design course for the needs of the Department of Interior Design at the American University in Duhok. Based on the results obtained in this research, the course would present a symbiosis between people and technology, the promotion of new street furniture design that perceives pedestrian activities in an urban setting, and the practical use of anthropometric measurements as a tool for technical innovations.
Keywords: Furniture design, Street furniture, Social interaction, Public space
Procedia PDF Downloads 136
658 Machine Translation Analysis of Chinese Dish Names
Authors: Xinyu Zhang, Olga Torres-Hostench
Abstract:
This article presents a comparative study evaluating the quality of machine translation (MT) output for Chinese gastronomy nomenclature. Chinese gastronomic culture is experiencing increasing international recognition nowadays. The nomenclature of Chinese gastronomy not only reflects a specific aspect of culture, but is related to other areas of society such as philosophy, traditional medicine, etc. Chinese dish names are composed of several types of cultural references, such as ingredients, colors, flavors, culinary techniques, cooking utensils, toponyms, anthroponyms, metaphors, historical tales, among others. These cultural references constitute one of the biggest difficulties in translation, in which the use of translation techniques is usually required. Given the lack of Chinese food-related translation studies, especially in Chinese-Spanish translation, and the current massive use of MT, the quality of the MT output of Chinese dish names is questioned. Fifty Chinese dish names with different types of cultural components were selected in order to complete this study. First, all of these dish names were translated by three different MT tools (Google Translate, Baidu Translate and Bing Translator). Second, a questionnaire was designed and completed by 12 Chinese online users (Chinese graduates of a Hispanic Philology major) in order to find out user preferences regarding the collected MT output. Finally, human translation techniques were observed and analyzed to identify which translation techniques appear more often in the preferred MT proposals. The results reveal that the MT output of Chinese gastronomy nomenclature is not of high quality. It is not advisable to rely on MT in contexts such as restaurant menus, TV culinary shows, etc. However, the MT output could be used as an aid for tourists to get a general idea of a dish (the main ingredients, for example). 
Literal translation turned out to be the most observed technique, followed by borrowing, generalization and adaptation, while amplification, particularization and transposition were infrequently observed. This is possibly because current MT engines are limited to matching equivalent terms and offering literal translations without taking into account the whole contextual meaning of the dish name, which is essential to the application of the less frequently observed techniques. This could give insight into the post-editing of Chinese dish name translation. By observing and analyzing the translation techniques in the machine translators' proposals, post-editors could better decide which techniques to apply in each case so as to correct mistakes and improve the quality of the translation.
Keywords: Chinese dish names, cultural references, machine translation, translation techniques
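As a rough illustration of the technique tally described above, the following Python sketch counts how often each technique label appears among the user-preferred proposals. The annotations here are invented for the example; only the technique taxonomy (literal, borrowing, generalization, adaptation, etc.) comes from the study.

```python
from collections import Counter

# Hypothetical annotation of the survey outcome: for each dish name, the
# translation technique observed in the MT proposal the respondents preferred.
# The labels follow the study's taxonomy; the entries are invented.
preferred_techniques = [
    "literal", "literal", "borrowing", "generalization",
    "literal", "adaptation", "borrowing", "literal",
]

counts = Counter(preferred_techniques)
total = len(preferred_techniques)
for technique, n in counts.most_common():
    print(f"{technique}: {n}/{total} ({100 * n / total:.0f}%)")
```

With real annotations in place of the invented list, the same tally directly yields the ranking reported in the abstract.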
Procedia PDF Downloads 137
657 The Impact of Artificial Intelligence on Medicine Production
Authors: Yasser Ahmed Mahmoud Ali Helal
Abstract:
The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural education curriculum. Using existing literature, this study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of a university was collected to serve as a case study of how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is one of the treatment options for patients with hepatitis C, but these treatments have their side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific to the hepatitis C virus (HCV). Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. This study used homology modeling and ab initio modeling to generate the 3D structures of the proteins, then identified pockets in the proteins. Acceptable ligands for the identified pockets have been developed using the de novo drug design method. 
Pocket geometry is taken into account when designing the ligands. Among the various ligands generated, a new specific ligand for each of the HCV protein domains has been proposed.
Keywords: drug design, anti-viral drug, in-silico drug design, hepatitis C virus (HCV), CAD (Computer Aided Design), CAD education, education improvement, small-size contractor, automatic pharmacy, PLC, control system, management system, communication
Procedia PDF Downloads 85
656 Low Carbon Tourism Management: Strategies for Climate-Friendly Tourism of Koh Mak, Thailand
Authors: Panwad Wongthong, Thanan Apivantanaporn, Sutthiwan Amattayakul
Abstract:
Nature-based tourism is one of the fastest growing industries; it can bring in economic benefits, improve quality of life and promote conservation of biodiversity and habitats. As tourism develops, substantial socio-economic and environmental costs become more explicit. Particularly in island destinations, the dynamic system and geographical limitations make the intensity of tourism development and the severity of the negative environmental impacts greater. The current contribution of the tourism sector to global climate change is estimated at approximately 5% of global anthropogenic CO₂ emissions. In all scenarios, tourism is anticipated to grow substantially and to account for an increasingly large share of global greenhouse gas emissions. This has prompted an urgent call for more sustainable alternatives. This study selected the small island of Koh Mak in Thailand as a case study because of its reputation for being laid-back, family-oriented and rich in biodiversity. Importantly, it is a test platform for a low carbon tourism development project supported by the Designated Areas for Sustainable Tourism Administration (DASTA) in collaboration with the Institute for Small and Medium Enterprises Development (ISMED). The study explores strategies for low carbon tourism management and assesses challenges and opportunities for Koh Mak to become a low carbon tourism destination. The goal is to identify suitable management approaches applicable to Koh Mak which may then be adapted to other small islands in Thailand and the region. Interventions and initiatives to increase energy efficiency in hotels and resorts, cut carbon emissions, reduce impacts on the environment, and promote conservation will be analyzed. Ways toward long-term sustainability of climate-friendly tourism will be recommended. 
Recognizing the importance of multi-stakeholder involvement in the tourism sector, findings from this study can reward the Koh Mak tourism industry with a triple win: cost savings and compliance with higher standards and markets; less waste, air emissions and effluents; and better capability for change and motivation of business owners, staff, tourists as well as residents. The consideration of climate change issues in the planning and implementation of tourism development is of great significance to protect the tourism sector from negative impacts.
Keywords: climate change, CO₂ emissions, low carbon tourism, sustainable tourism management
Procedia PDF Downloads 282
655 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II
Authors: Heerak Banerjee, Sourov Roy
Abstract:
Supersymmetry, one of the most celebrated fields of study for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular the gauged Lμ-Lτ symmetric models, has also generated significant interest. These models have been extensively proposed in order to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. 
The spectacular finding that the signal significance is independent of the individual slepton masses is an exciting prospect indeed. Further, the possibility that signatures of even superheavy SUSY particles that may have escaped detection at the LHC may show up at the Belle-II detector is an invigorating revelation.
Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry
Procedia PDF Downloads 128
654 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas
Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo
Abstract:
The aim of the paper is to estimate and forecast road traffic injuries in the coming 10-15 years, given new targets in urban transport policy and shifts of mode of transport, including the injury cross-effects of mode changes. The paper discusses possibilities and limitations in measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway from 1998-2012 (N = 4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, calculations of the number of injuries and injury rates by type of road user (categories of motorized versus non-motorized) by sex, age and type of road are made. A projected increase (25%) in the total population of the six urban areas by 2025 will curb the continued fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from the use of private vehicles to safer public transport (bus, train) will modify this effect. On the other hand, door-to-door transport (pedestrians on their way to/from public transport nodes) will imply higher exposure for pedestrians and bikers converting from private vehicle use (including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the increasing urban population; in addition, the diminishing returns of the majority of road safety countermeasures also have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing as well as decreasing injury figures may partly offset each other. The paper discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and further environmental impacts. In this regard, safety and environmental issues will as a rule concur. However, pursuing environmental goals (e.g. improved air quality, reduced CO₂ emissions) by encouraging more biking may generate more biking injuries. The study was given financial grants from the Norwegian Research Council's Transport Safety Program.
Keywords: road injuries, forecasting, reduced private car use, urban, Norway
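The kind of projection arithmetic described above, combining population growth with a modal shift, can be sketched as follows. All mode shares and injury counts below are invented placeholders, not the study's Norwegian data; the point is only to show how a shift away from cars can still raise total injuries once pedestrian and bike exposure grows.

```python
# Invented baseline figures: injuries and trip shares per transport mode.
population_growth = 1.25  # the projected 25% increase by 2025

mode_share_now  = {"car": 0.55, "public": 0.20, "walk_bike": 0.25}
mode_share_2025 = {"car": 0.40, "public": 0.30, "walk_bike": 0.30}
injuries_now    = {"car": 220, "public": 15, "walk_bike": 90}

projected = {}
for mode, injured in injuries_now.items():
    # Exposure scales with the mode's share of trips and with population size.
    exposure_change = (mode_share_2025[mode] / mode_share_now[mode]) * population_growth
    projected[mode] = injured * exposure_change

print({m: round(v, 1) for m, v in projected.items()})
print("total:", round(sum(projected.values()), 1), "vs baseline:", sum(injuries_now.values()))
```

Under these illustrative numbers the car injuries fall while pedestrian and bike injuries rise enough to push the total above the baseline, which is exactly the offsetting effect the abstract warns about.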
Procedia PDF Downloads 238
653 Advancing Entrepreneurial Knowledge Through Re-Engineering Social Studies Education
Authors: Chukwuka Justus Iwegbu, Monye Christopher Prayer
Abstract:
Propeller aircraft engines, and more generally engines with a large rotating part (turboprops, high bypass ratio turbojets, etc.) are widely used in the industry and are subject to numerous developments in order to reduce their fuel consumption. In this context, unconventional architectures such as open rotors or distributed propulsion appear, and it is necessary to consider the influence of these systems on the aircraft's stability in flight. Indeed, the tendency to lengthen the blades and wings on which these propulsion devices are fixed increases their flexibility and accentuates the risk of whirl flutter. This phenomenon of aeroelastic instability is due to the precession movement of the axis of rotation of the propeller, which changes the angle of attack of the flow on the blades and creates unsteady aerodynamic forces and moments that can amplify the motion and make it unstable. The whirl flutter instability can ultimately lead to the destruction of the engine. We note the existence of a critical speed of the incident flow. If the flow velocity is lower than this value, the motion is damped and the system is stable, whereas beyond this value, the flow provides energy to the system (negative damping) and the motion becomes unstable. A simple model of whirl flutter is based on the work of Houbolt & Reed who proposed an analytical expression of the aerodynamic load on a rigid blade propeller whose axis orientation suffers small perturbations. Their work considered a propeller subjected to pitch and yaw movements, a flow undisturbed by the blades and a propeller not generating any thrust in the absence of precession. The unsteady aerodynamic forces were then obtained using the thin airfoil theory and the strip theory. In the present study, the unsteady aerodynamic loads are expressed for a general movement of the propeller (not only pitch and yaw). 
The acceleration and rotation of the flow by the propeller are modeled using a Blade Element Momentum Theory (BEMT) approach, which also makes it possible to take into account the thrust generated by the blades. It appears that the thrust has a stabilizing effect. The aerodynamic model is further developed using Theodorsen theory. A reduced-order model of the aerodynamic load is finally constructed in order to perform linear stability analysis.
Keywords: advancing, entrepreneurial, knowledge, industrialization
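A minimal numerical illustration of the whirl-flutter stability criterion described above: a 2-DOF pitch/yaw system with gyroscopic coupling, where an aerodynamic term that grows linearly with flow speed V eventually overcomes the structural damping. The coefficients and the linear-in-V aerodynamic model are purely illustrative assumptions, not the Houbolt-Reed or BEMT expressions.

```python
import numpy as np

def eigen_real_max(V, k=1.0, c_struct=0.05, c_aero=0.01, gyro=0.3):
    """Largest real part of the eigenvalues of a toy 2-DOF pitch/yaw model.

    The net destabilizing coefficient grows linearly with flow speed V;
    all numbers are illustrative, not Houbolt-Reed aerodynamic terms.
    """
    net = c_aero * V - c_struct  # aero destabilization minus structural damping
    # State vector: [theta, psi, theta_dot, psi_dot]
    A = np.array([
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [-k, 0.0, net, gyro],
        [0.0, -k, -gyro, net],
    ])
    return max(np.linalg.eigvals(A).real)

# Critical speed: below it the motion is damped, beyond it the flow feeds
# energy into the precession and the motion grows.
V_crit = next(V for V in np.linspace(0.0, 20.0, 2001) if eigen_real_max(V) > 1e-9)
print(f"stable at V=0: {eigen_real_max(0.0) < 0}, critical speed ~ {V_crit:.2f}")
```

In this toy model the sign change of the net damping fixes the critical speed; in the abstract's setting the same eigenvalue criterion is applied to the reduced-order BEMT/Theodorsen aerodynamic load.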
Procedia PDF Downloads 99
652 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics
Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich
Abstract:
Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high – in the order of several hours of compute time for a few seconds of real time – thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. This data is stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior in high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles by observing the mixing and transport of the gas tracers, gaining insights into their convective and diffusive patterns, as a step towards heat and mass transfer modelling. 
Finally, we run rCFD simulations and calibrate them with numerical and physical parameters, compared with conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes
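The recurrence idea behind rCFD, storing short-term CFD snapshots and time-extrapolating by jumping between similar stored states, can be sketched roughly as follows. The random "snapshots" and the plain nearest-neighbor jump rule are stand-ins for a real CFD database and the method's actual recurrence statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in database: each row is a flattened flow field stored during a
# short full-CFD run (random numbers here, real snapshots in rCFD).
n_frames, n_cells = 50, 30
database = rng.random((n_frames, n_cells))

# Recurrence statistics: pairwise distances between all stored states.
dist = np.linalg.norm(database[:, None, :] - database[None, :, :], axis=2)

def next_frame(current, horizon=1):
    """Jump to the successor of the most similar *other* stored state."""
    d = dist[current].copy()
    d[current] = np.inf       # never match the state to itself
    d[-horizon:] = np.inf     # the match must have a stored successor
    return int(np.argmin(d)) + horizon

# Time-extrapolate far beyond the stored window at negligible cost.
frame, trajectory = 0, []
for _ in range(200):
    frame = next_frame(frame)
    trajectory.append(frame)
print(trajectory[:10])
```

Each extrapolated step costs only a distance lookup instead of a full two-fluid solve, which is the source of the orders-of-magnitude speed-up reported above; the price is that the method can only revisit dynamics already contained in the database.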
Procedia PDF Downloads 75
651 The Effects of English Contractions on the Application of Syntactic Theories
Authors: Wakkai Hosanna Hussaini
Abstract:
A formal structure of the English clause is composed of at least two elements – subject and verb – in structural grammar, and at least one element – predicate – in systemic (functional) and generative grammars. Each of the elements can be represented by a word or a group (of words). In modern English structure, speakers very often merge two words into one with the use of an apostrophe. The two words can come from different elements or belong to the same element. In either case, the result of the merger is called a contraction. Although contractions constitute a part of modern English structure, they are considered informal in nature (more frequently used in spoken than written English); that is why they were initially viewed as evidence of language deterioration. To our knowledge, no formal syntactic theory has yet dealt specifically with contractions, because of their deviation from the formal rules of syntax that seek to identify the elements that form a clause in English. The inconsistency between the formal rules and a contraction is established when two words representing two elements in a non-contraction are merged into one element to form a contraction. Thus the paper presents the various syntactic issues that arise from converting non-contracted to contracted forms. It categorizes English contractions and describes each category according to its syntactic relations (position and relationship) and morphological formation (form and content) as an integral part of the modern structure of English. This is a position paper; as such, the methodology is observational, descriptive and explanatory/analytical, based on existing related literature. The inventory of English contractions contained in books on syntax forms the data from which specific examples are drawn. It is noted in conclusion that the existing syntactic theories were not originally established to account for English contractions. 
The paper further exposes the inadequacies of the existing syntactic theories and gives more reasons for the establishment of a more comprehensive syntactic theory for analyzing English clause/sentence structure involving contractions. The method used reveals the extent of the inadequacies of applying the three major syntactic theories – structural, systemic (functional) and generative – to English contractions. Although no theory is without limits to its scope, the reluctance of the three major theories to recognize English contractions needs to be overcome because of the increasing popularity of their use in modern English structure. The paper, therefore, recommends that, as the use of contractions gains popularity even in formal speech today, a syntactic theory be established to handle their patterns of syntactic relations and morphological formation.
Keywords: application, effects, English contractions, syntactic theories
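The merger of two clause elements into one token can be made concrete with a small inventory sketch; the mapping below is illustrative and theory-neutral, and deliberately records the ambiguity ("it's" as "it is" or "it has") that complicates a formal treatment.

```python
# Illustrative, theory-neutral inventory: each contraction mapped back to
# the two words (clause elements) it merges. The entries are examples only.
contractions = {
    "it's":    ("it", "is"),    # subject + verb; could also be "it has"
    "don't":   ("do", "not"),   # auxiliary + negator
    "they're": ("they", "are"), # subject + verb
    "we'll":   ("we", "will"),  # subject + auxiliary
}

def expand(token):
    """Return the non-contracted form, or the token unchanged."""
    pair = contractions.get(token.lower())
    return " ".join(pair) if pair else token

print(expand("It's"), "/", expand("don't"))  # → it is / do not
```

The one-to-many cases (and contractions such as "won't", whose first half is not a surface word) are precisely where a purely table-driven expansion, like a purely rule-driven theory, runs into the inconsistencies the paper describes.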
Procedia PDF Downloads 273
650 Necessity of Recognition of Same-Sex Marriages and Civil Partnerships Concluded Abroad from Civil Status Registry Point of View
Authors: Ewa Kamarad
Abstract:
Recent problems with adopting the EU Regulation on matrimonial property regimes have clearly proven that Member States are unable to agree on the scope of the Regulation and, therefore, on the definitions of matrimonial property and marriage itself. Taking into account that the Regulation on the law applicable to divorce and legal separation, as well as the Regulation on matrimonial property regimes, were adopted in the framework of enhanced cooperation, it is evident that the lack of a unified definition of marriage has very wide-ranging consequences. The main problem with a unified definition of marriage is that the EU is not entitled to adopt measures in the domain of material family law, as this area remains under the exclusive competence of the Member States. Because of that, the legislation on marriage in the domestic legal orders of the various Member States is very different. These differences concern not only issues such as the form of marriage or the capacity to enter into marriage, but also the most basic matter, namely the core of the institution of marriage itself. Within the 28 Member States, we have those that allow both different-sex and same-sex marriages, those that have adopted special, separate institutions for same-sex couples, and those that allow only marriage between a man and a woman (e.g. Hungary, Latvia, Lithuania, Poland, Slovakia). Because of the freedom of movement within the European Union, it seems necessary to somehow recognize the civil effects of a marriage that was concluded in another Member State. The most crucial issue is how far that recognition should go. The thesis presented here is that, at an absolute minimum, the authorities of all Member States must recognize the civil status of the persons who enter into marriage in another Member State. Lack of such recognition might cause serious problems, both for the spouses and for other individuals. 
The authorities of some Member States may treat the marriage as if it did not exist because it was concluded under foreign law that defines marriage differently. Because of that, it is possible for a spouse to obtain a certificate of civil status stating that he or she is single and thus eligible to enter into marriage – despite being legally married under the law of another Member State. Such a certificate can then be used in another country as proof of civil status. Eventually, the lack of recognition can lead to so-called “international bigamy”. The biggest obstacle to the recognition of marriages concluded under the law of another Member State that defines marriage differently is the impossibility of transcribing a foreign civil certificate for such a marriage. That is caused by the rule requiring that a civil certificate issued (or transcribed) under one country's law contain only records of legal institutions recognized by that country's legal order. The presentation provides possible solutions to this problem.
Keywords: civil status, recognition of marriage, conflict of laws, private international law
Procedia PDF Downloads 237
649 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). We can see the same process for industrial plants as well. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both week and weekend days; in this way it will be possible to see how the situation changes during the week. 
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to be applied in the area of analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
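A sketch of the planned comparison, assuming made-up readings and illustrative scaling ranges (they are not regulatory limits): raw values are rescaled to the common 0-100% axis and then correlated.

```python
# Invented readings: A-weighted noise levels and NO2 concentrations from
# the paired sensors; the scaling ranges below are assumptions, not limits.
noise_db = [52.1, 58.4, 63.0, 70.2, 66.5, 55.3]
no2_ugm3 = [18.0, 31.5, 44.2, 61.8, 50.1, 24.7]

def to_percent(values, low, high):
    """Rescale raw readings onto the common 0-100% axis."""
    return [100.0 * (v - low) / (high - low) for v in values]

noise_pct = to_percent(noise_db, low=40.0, high=80.0)
no2_pct = to_percent(no2_ugm3, low=0.0, high=80.0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(noise_pct, no2_pct)
print(f"noise-NO2 correlation on the percent scale: r = {r:.2f}")
```

Note that Pearson's r is invariant under the linear rescaling, so the percent conversion serves the GIS mapping and visual comparison rather than the correlation computation itself.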
Procedia PDF Downloads 212
648 Quantum Chemical Investigation of Hydrogen Isotopes Adsorption on Metal Ion Functionalized Linde Type A and Faujasite Type Zeolites
Authors: Gayathri Devi V, Aravamudan Kannan, Amit Sircar
Abstract:
In the inner fuel cycle system of a nuclear fusion reactor, the Hydrogen Isotopes Removal System (HIRS) plays a pivotal role. It enables the effective extraction of the hydrogen isotopes from the breeder purge gas, which helps to maintain the tritium breeding ratio and sustain the fusion reaction. One of the components of the HIRS, Cryogenic Molecular Sieve Bed (CMSB) columns with zeolite adsorbents, is considered for the physisorption of hydrogen isotopes at 1 bar and 77 K. Even though zeolites have good thermal stability and reduced activation properties, making them ideal for use in nuclear reactor applications, their modest capacity for hydrogen isotopes adsorption is a cause of concern. In order to enhance the adsorbent capacity in an informed manner, it is helpful to understand the adsorption phenomena at the quantum electronic structure level. Physicochemical modification of the adsorbent material enhances the adsorption capacity through the incorporation of active sites. This may be accomplished through the incorporation of suitable metal ions in the zeolite framework. In this work, molecular hydrogen isotope adsorption on the active sites of functionalized zeolites is investigated in detail using a Density Functional Theory (DFT) study. This involves the utilization of a hybrid Generalized Gradient Approximation (GGA) functional with dispersion correction to account for exchange and correlation in DFT. The electronic energies, adsorption enthalpy, adsorption free energy, and Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies are computed on stable 8T zeolite clusters as well as the periodic structure functionalized with different active sites. The characteristics of the dihydrogen bond with the active metal sites and the isotopic effects are also studied in detail. Validation studies with DFT will also be presented for the adsorption of hydrogen on metal ion functionalized zeolites. 
The ab initio screening analysis gave insights into the mechanism of hydrogen interaction with the zeolites under study and also into the effect of the metal ion on adsorption. This detailed study provides guidelines for the selection of appropriate metal ions that may be incorporated in the zeolite framework for effective adsorption of hydrogen isotopes in the HIRS.
Keywords: adsorption enthalpy, functionalized zeolites, hydrogen isotopes, nuclear fusion, physisorption
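The adsorption energetics described in this abstract reduce to simple differences of computed total energies. A minimal bookkeeping sketch follows; all numerical values are placeholders for illustration, not results from the study:

```python
# Illustrative bookkeeping for DFT adsorption energetics. All energies (eV)
# are placeholder values, not results from the study.
E_cluster = -1250.412      # functionalized 8T zeolite cluster
E_h2 = -31.672             # isolated H2 molecule
E_complex = -1282.203      # H2 adsorbed on the cluster

# Adsorption (binding) energy: negative means favourable adsorption.
E_ads = E_complex - (E_cluster + E_h2)

# Thermal corrections (ZPE plus thermal enthalpy/entropy terms) turn E_ads
# into adsorption enthalpy and free energy at a given temperature.
dH_corr, dG_corr = 0.045, 0.110    # placeholder corrections, eV
H_ads = E_ads + dH_corr
G_ads = E_ads + dG_corr

# HOMO-LUMO gap of the complex, a simple stability/reactivity descriptor.
E_homo, E_lumo = -7.91, -0.83      # placeholder orbital energies, eV
gap = E_lumo - E_homo

print(f"E_ads = {E_ads:.3f} eV, H_ads = {H_ads:.3f} eV, G_ads = {G_ads:.3f} eV")
print(f"HOMO-LUMO gap = {gap:.2f} eV")
```

Isotope effects enter mainly through the zero-point-energy correction, which differs between H2, D2 and T2 even though the electronic energies are identical.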
Procedia PDF Downloads 181
647 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required for geotechnical analyses. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. With the Random Set method, smooth bounds on system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods. The Random Set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the Random Set method in the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavating process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and subsequently to reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each intended deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. 
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
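The Belief/Plausibility construction in the abstract can be sketched with a toy model standing in for the finite element code. Everything here is hypothetical: the closed-form `model`, the focal intervals, the probability assignments and the displacement threshold are invented for illustration only:

```python
from itertools import product

# Toy stand-in for the FE model: horizontal displacement (mm) of the top of
# the excavation as a function of soil stiffness E (MPa) and friction angle
# phi (deg). The real study uses a finite element code; this is illustrative.
def model(E, phi):
    return 2000.0 / E + 0.5 * (40.0 - phi)

# Random set input: two focal intervals per variable, each with a probability
# assignment (masses sum to 1 per variable). Numbers are placeholders.
E_sets = [((20.0, 40.0), 0.6), ((30.0, 60.0), 0.4)]
phi_sets = [((28.0, 34.0), 0.5), ((30.0, 38.0), 0.5)]

threshold = 80.0  # limiting displacement, mm
belief = plausibility = 0.0
for (E_lo, E_hi), mE in E_sets:
    for (p_lo, p_hi), mp in phi_sets:
        mass = mE * mp  # joint mass of this focal element
        # Evaluate the model at the interval corners; the response interval
        # is bracketed by the extreme corner values (monotonicity assumed).
        vals = [model(E, p) for E, p in product((E_lo, E_hi), (p_lo, p_hi))]
        lo, hi = min(vals), max(vals)
        if hi <= threshold:          # focal element entirely inside the event
            belief += mass
        if lo <= threshold:          # focal element intersects the event
            plausibility += mass

print(f"Belief(displacement <= {threshold} mm) = {belief:.2f}")
print(f"Plausibility = {plausibility:.2f}")
```

Belief and Plausibility bound the (unknown) probability of the event from below and above, which is exactly the lower/upper distribution pair the abstract describes.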
Procedia PDF Downloads 268
646 The Structure of Financial Regulation: The Regulators Perspective
Authors: Mohamed Aljarallah, Mohamed Nurullah, George Saridakis
Abstract:
The aim of this paper is to investigate how structural change of the financial regulatory bodies affects financial supervision, and how regulators can design such a structure taking into account the Central Bank, the conduct-of-business and the prudential regulators; it will also consider the structure of the international regulatory bodies and what barriers are found. Five questions will be answered: Should conduct-of-business and prudential regulation be separated? Should financial supervision and financial stability be separated? Should financial supervision be under the Central Bank? To what extent should politicians intervene in changing the regulatory and supervisory structure? What should the regulatory and supervisory structure be when there is a financial conglomerate? A semi-structured interview design will be applied. The research sample contains a collective of financial regulators and supervisors from emerged and emerging countries. Moreover, the financial regulators and supervisors must be at a senior level in their organisations. Additionally, the senior financial regulators and supervisors come from different authorities from around the world. For instance, one of the participants comes from the Bank for International Settlements, others come from the European Central Bank, an additional one comes from the Hong Kong Monetary Authority, and so on. Such variety aims to fulfil the aims and objectives of the research and cover the research questions. The analysis process starts with transcription of the interviews, using NVivo software for coding and applying thematic analysis to generate the main themes. The major findings of the study are as follows. First, organisational structure changes quite frequently if the mandates are not clear. Second, measuring structural change is difficult, which makes the whole process unclear. 
Third, effective coordination and communication are what regulators look for when they change the structure, and that requires openness, trust, and incentives. In addition, issues that appear during a crisis tend to be the reason why the structure changes. Also, the development of the market sometimes causes a change in the regulatory structure, and some structural changes occur simply because of international trends, fashion, or other countries' experiences. Furthermore, when top management changes, the structure tends to change. The structure may also change due to political change, or because politicians try to show they are doing something. Finally, fear of being blamed can be a driver of structural change. In conclusion, this research aims to provide insight from senior regulators and supervisors from fifty different countries, in order to reach a clear understanding of why the regulatory structure keeps changing from time to time, through a qualitative approach, namely semi-structured interviews.
Keywords: financial regulation bodies, financial regulatory structure, global financial regulation, financial crisis
Procedia PDF Downloads 145
645 Climate Related Variability and Stock-Recruitment Relationship of the North Pacific Albacore Tuna
Authors: Ashneel Ajay Singh, Naoki Suzuki, Kazumi Sakuramoto
Abstract:
The North Pacific albacore (Thunnus alalunga) is a temperate tuna species distributed in the North Pacific which is of significant economic importance to the Pacific Island nations and territories. Despite its importance, the stock dynamics and ecological characteristics of albacore still have gaps in knowledge. The stock-recruitment relationship of the North Pacific stock of albacore tuna was investigated for different density-dependent effects and for a regime shift in the stock characteristics in response to changes in environmental and climatic conditions. Linear regression analyses of recruits per spawning biomass (RPS) and recruitment (R) against the female spawning stock biomass (SSB) were significant for the presence of different density-dependent effects and positive for a regime shift in the stock time series. Application of Deming regression to RPS against SSB, with the assumption that observation and process errors are present in both the dependent and independent variables, confirmed the results of the simple regression. However, the R against SSB results disagreed for assumed variance levels of < 3 and agreed with the linear regression results for an assumed variance of ≥ 3. Assuming the presence of different density-dependent effects in the albacore tuna time series, environmental and climatic condition variables were compared with R, RPS, and SSB. Significant relationships of R, RPS and SSB were determined with the sea surface temperature (SST), Pacific Decadal Oscillation (PDO) and multivariate El Niño Southern Oscillation (ENSO) indices, with SST being the principal variable, exhibiting a significantly similar trend with R and RPS. Recruitment is significantly influenced by the dynamics of the SSB as well as by environmental conditions, which demonstrates that the stock-recruitment relationship is multidimensional. Further investigation of the North Pacific albacore tuna age-class and structure is necessary to further support the results presented here. 
It is important for fishery managers and decision makers to be vigilant of regime shifts in environmental conditions relating to albacore tuna, as these may cause regime shifts in the albacore R and RPS, which should be taken into account to effectively and sustainably formulate harvesting plans and management of the species in the North Pacific oceanic region.
Keywords: albacore tuna, Thunnus alalunga, recruitment, spawning stock biomass, recruits per spawning biomass, sea surface temperature, Pacific Decadal Oscillation, El Niño Southern Oscillation, density-dependent effects, regime shift
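Deming regression, used in this abstract because both RPS and SSB carry error, has a closed-form slope given the ratio of error variances. A minimal sketch with invented SSB/recruitment pairs (not study data) follows:

```python
import math

def deming(x, y, delta=1.0):
    """Deming regression slope/intercept, allowing errors in both variables.
    delta is the ratio of the y-error variance to the x-error variance."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, ybar - slope * xbar

# Hypothetical SSB vs recruitment pairs for illustration only.
ssb = [40.0, 55.0, 63.0, 72.0, 85.0, 90.0]
rec = [110.0, 150.0, 160.0, 185.0, 210.0, 235.0]
slope, intercept = deming(ssb, rec, delta=1.0)
print(f"R = {intercept:.1f} + {slope:.2f} * SSB")
```

With delta = 1 the Deming slope lies between the ordinary least-squares slope (errors in y only) and the inverse-regression slope (errors in x only), which is why the abstract's "variance level" assumption can flip the agreement with simple regression.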
Procedia PDF Downloads 307
644 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study
Authors: Ndibarefinia Tobin
Abstract:
The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase, but more importantly, during the full life of the building. Whole life costing is considered an economic analysis tool that takes into account the total cost of investment in, and the ownership, operation and subsequent disposal of, a product or system to which the whole life costing method is being applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence and of 'know-how' skills and knowledge of the practice, i.e. the lack of professionals with the knowledge and training on the use of the practice in construction projects. This situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. The aforementioned problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques, so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building or asset. The management of knowledge in whole life costing is considered one of these many project enhancement initiatives, and it is becoming imperative to the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources including other individuals, electronic media and documents. Due to the diversity of knowledge, capabilities and skills of employees, which vary across an organisation, it is significant that they are directed and coordinated efficiently so as to capture, retrieve and share knowledge, in order to improve the performance of the organisation. 
The implementation of the knowledge management concept reaches different levels in each organisation. Measuring the maturity level of knowledge management in whole life costing practice will paint a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify knowledge management maturity in UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, conducted by distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results obtained in the study show that 34 contractors are at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level.
Keywords: knowledge management, whole life costing, construction industry, knowledge
Procedia PDF Downloads 244
643 Relatively High Heart-Rate Variability Predicts Greater Survival Chances in Patients with Covid-19
Authors: Yori Gidron, Maartje Mol, Norbert Foudraine, Frits Van Osch, Joop Van Den Bergh, Moshe Farchi, Maud Straus
Abstract:
Background: The worldwide pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which began in 2019, also known as Covid-19, has infected over 136 million people and tragically taken the lives of over 2.9 million people worldwide. Many of the complications and deaths are predicted by the inflammatory 'cytokine storm.' One way to progress in the prevention of death is by finding a predictive and protective factor that inhibits inflammation on the one hand, and increases anti-viral immunity on the other. The vagal nerve does precisely both. This study examined whether vagal nerve activity, indexed by heart-rate variability (HRV), predicts survival in patients with Covid-19. Method: We performed a pseudo-prospective study, in which we retroactively obtained ECGs of 271 Covid-19 patients arriving at a large regional hospital in The Netherlands. HRV was indexed by the standard deviation of the intervals between normal heartbeats (SDNN). We examined patients' survival at 3 weeks and took into account multiple confounders and known prognostic factors (e.g., age, heart disease, diabetes, hypertension). Results: Patients' mean age was 68 (range: 25-95) and nearly 22% of the patients had died by 3 weeks. Their mean SDNN (17.47 msec) was far below the norm (50 msec). Importantly, relatively higher HRV significantly predicted a higher chance of survival, after statistically controlling for patients' age, cardiac diseases, hypertension and diabetes (relative risk, H.R., and 95% confidence interval (95%CI): H.R = 0.49, 95%CI: 0.26-0.95, p < 0.05). However, since HRV declines rapidly with age and since age is a profound predictor in Covid-19, we split the sample by the median age (70). Subsequently, we found that higher HRV significantly predicted greater survival in patients older than 70 (H.R = 0.35, 95%CI: 0.16-0.78, p = 0.01), but HRV did not predict survival in patients below age 70 years (H.R = 1.11, 95%CI: 0.37-3.28, p > 0.05). 
Conclusions: To the best of our knowledge, this is the first study showing that higher vagal nerve activity, as indexed by HRV, is an independent predictor of higher chances of survival in Covid-19. The results are in line with the protective role of the vagal nerve in disease and extend this role to a severe infectious illness. Studies should replicate these findings and then test in controlled trials whether activating the vagus nerve may prevent mortality in Covid-19.
Keywords: Covid-19, heart-rate variability, prognosis, survival, vagal nerve
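The SDNN index used in this study is simply the sample standard deviation of normal-to-normal (NN) heartbeat intervals. A minimal sketch with a hypothetical 10-beat series (not patient data) follows:

```python
import statistics

def sdnn(nn_intervals_ms):
    """SDNN: sample standard deviation of normal-to-normal heartbeat
    intervals (ms), the HRV index used in the abstract."""
    return statistics.stdev(nn_intervals_ms)

# Hypothetical NN interval series (ms), for illustration only.
nn = [812, 790, 805, 830, 796, 815, 842, 801, 788, 820]
print(f"SDNN = {sdnn(nn):.1f} ms")
```

In practice SDNN is computed over much longer recordings (minutes to 24 h), after excluding ectopic beats so that only normal-to-normal intervals remain; values well below the ~50 msec norm cited in the abstract indicate low vagal activity.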
Procedia PDF Downloads 175
642 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on a patent grant. After all, a granted patent provides you with a monopoly position, which allows you to obstruct others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of the patent protection is only decided upon at grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can only depend upon limited and uncertain information when deciding which technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this may allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, strategies that increase this number even further are not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide the protection of a granted patent, and thus creates uncertainty not only for rivals, but also for the applicant. 
Consequently, the European Patent Office (EPO) has come up with a 'raising the bar' initiative, in which it has decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, in 2010, introduced a time limit, under which divisional applications could only be filed within 24 months after the first communication with the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to tackle the strategic patenting and thus decrease the pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
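Survival models for pendency time treat "grant" as the event and still-pending applications as censored observations. A minimal Kaplan-Meier estimator, the non-parametric starting point for such analyses, can be sketched as follows; the eight pendency observations are hypothetical, not EPO data:

```python
# Minimal Kaplan-Meier estimator for pendency-time survival curves
# (time from filing to grant, with still-pending cases censored).
def kaplan_meier(times, events):
    """times: observed time in months; events: 1 = granted, 0 = still pending."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        grants = censored = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            if data[i][1]:
                grants += 1
            else:
                censored += 1
            i += 1
        if grants:
            surv *= (at_risk - grants) / at_risk   # step down at event times
            curve.append((t, surv))
        at_risk -= grants + censored
    return curve

# Pendency (months) of eight hypothetical divisional applications.
times = [14, 20, 20, 27, 31, 34, 40, 46]
events = [1, 1, 0, 1, 0, 1, 1, 0]   # 0 = still pending (censored)
curve = kaplan_meier(times, events)
for t, s in curve:
    print(f"t = {t:>2} months: S(t) = {s:.3f}")
```

Comparing such curves before and after a rule change (or fitting a semi-parametric model such as Cox regression with the rule as a covariate) is the kind of analysis the three survival models in the study perform.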
Procedia PDF Downloads 130
641 Identifying and Understanding Pragmatic Failures in Portuguese Foreign Language by Chinese Learners in Macau
Authors: Carla Lopes
Abstract:
It is clear nowadays that the proper performance of different speech acts is one of the most difficult obstacles that a foreign language learner has to overcome to be considered communicatively competent. This communication presents the results of an investigation of the pragmatic performance of Portuguese language students at the University of Macau. The research discussed herein is based on a survey consisting of fourteen speaking situations to which the participants had to respond in writing, and which includes different types of speech acts: apology, response to a compliment, refusal, complaint, disagreement and the understanding of the illocutionary force of indirect speech acts. The responses were classified on a five-level Likert scale (quantified from 1 to 5) according to their suitability for the particular situation. In general terms, about 45% of the respondents' answers were pragmatically competent, 10% were acceptable and 45% showed weaknesses at the socio-pragmatic competence level. Given that linguistic deviations were not taken into account, we can conclude that the faults are of cultural origin. It is natural that in the presence of orthogonal cultures, such as Chinese and Portuguese, there are failures of this type, barely solved in the four years of the undergraduate program. The target population, native speakers of Cantonese or Mandarin, make their first contact with the English language before joining the Bachelor of Portuguese Language. An analysis of the socio-pragmatic failures in the respondents' answers suggests that many of them are due to a lack of cultural knowledge. The students try to compensate for this either by using their native culture or by resorting to a Western culture that they consider close to the Portuguese, namely English or US culture, previously studied and also widely present in the media and on the internet. 
This phenomenon, known as 'pragmatic transfer', can result in linguistic behavior that may be considered inauthentic or pragmatically awkward. The resulting speech act is grammatically correct but is not pragmatically feasible, since it is not suited to the culture of the target language, either because it does not exist there or because the conditions of its use are in fact different. Analysis of the responses also supports the conclusion that these students deviate considerably from the expected, stereotyped behavior of Chinese students. We can speculate that this linguistic behavior is a consequence of Macau's globalization, which culturally shapes the students, makes them more open, and distinguishes them from typical Chinese students.
Keywords: Portuguese foreign language, pragmatic failures, pragmatic transfer, pragmatic competence
Procedia PDF Downloads 212
640 Full Mini Nutritional Assessment Questionnaire and the Risk of Malnutrition and Mortality in Elderly, Hospitalized Patients: A Cross-Sectional Study
Authors: Christos E. Lampropoulos, Maria Konsta, Tamta Sirbilatze, Ifigenia Apostolou, Vicky Dradaki, Konstantina Panouria, Irini Dri, Christina Kordali, Vaggelis Lambas, Georgios Mavras
Abstract:
Objectives: The full Mini Nutritional Assessment (MNA) questionnaire is one of the most useful tools for the diagnosis of malnutrition in hospitalized patients, which is related to increased morbidity and mortality. The purpose of our study was to assess the nutritional status of elderly, hospitalized patients and to examine the hypothesis that the MNA may predict mortality and extension of hospitalization. Methods: One hundred fifty patients (78 men, 72 women, mean age 80±8.2) were included in this cross-sectional study. The following data were taken into account in the analysis: anthropometric and laboratory data, physical activity (International Physical Activity Questionnaire, IPAQ), smoking status, dietary habits, cause and duration of the current admission, and medical history (co-morbidities, previous admissions). The primary endpoints were mortality (from admission until 6 months afterwards) and duration of admission. The latter was compared to national guidelines for closed consolidated medical expenses. Logistic regression and linear regression analyses were performed in order to identify independent predictors of mortality and extended hospitalization, respectively. Results: According to the MNA, nutrition was normal in 54/150 (36%) of patients, 46/150 (30.7%) of them were at risk of malnutrition and the remaining 50/150 (33.3%) were malnourished. After performing multivariate logistic regression analysis, we found that the odds of death decreased by 20% per unit increase of the full MNA score (OR=0.8, 95% CI 0.74-0.89, p < 0.0001). Patients admitted due to cancer were 23 times more likely to die, compared to those with infection (OR=23, 95% CI 3.8-141.6, p=0.001). Similarly, patients admitted due to stroke were 7 times more likely to die (OR=7, 95% CI 1.4-34.5, p=0.02), while those with all other causes of admission were less likely to die (OR=0.2, 95% CI 0.06-0.8, p=0.03), compared to patients with infection. 
According to the multivariate linear regression analysis, each unit increase of the full MNA score decreased the admission duration by 0.3 days on average (b: -0.3, 95% CI -0.45 to -0.15, p < 0.0001). Patients admitted due to cancer had on average a 6.8-day longer hospitalization, compared to those admitted for infection (b: 6.8, 95% CI 3.2-10.3, p < 0.0001). Conclusion: Mortality and extension of hospitalization are significantly increased in elderly, malnourished patients. The full MNA score is a useful diagnostic tool for malnutrition.
Keywords: duration of admission, malnutrition, mini nutritional assessment score, prognostic factors for mortality
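The reported per-unit odds ratio compounds multiplicatively across score differences. A short sketch of that arithmetic, using only the published point estimate (OR = 0.8 per MNA unit):

```python
import math

# The abstract reports OR = 0.8 per one-unit increase of the full MNA score.
# Odds ratios compound multiplicatively, so a k-unit difference scales the
# odds of death by 0.8**k. This is arithmetic on the published estimate only,
# not a re-analysis of the study data.
or_per_unit = 0.8
beta = math.log(or_per_unit)    # the underlying logistic regression coefficient

for k in (1, 5, 10):
    print(f"{k:>2}-point higher MNA score: odds of death x {or_per_unit ** k:.3f}")
```

So, for example, a 5-point higher MNA score multiplies the odds of death by 0.8**5 ≈ 0.33, a roughly two-thirds reduction in the odds, which conveys the clinical relevance of the per-unit estimate.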
Procedia PDF Downloads 313
639 The Effect of Filter Design and Face Velocity on Air Filter Performance
Authors: Iyad Al-Attar
Abstract:
Air filters installed in HVAC equipment and in gas turbines for power generation confront several atmospheric contaminants with various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in the heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count and its corresponding surface area, in order to verify filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, investigated for their initial pressure drop response and fractional efficiencies. The pleating densities used for this study were 28, 30, 32 and 34 pleats per 100 mm for each pleated panel, measured for ten different flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. The experimental work of the current study has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The reasons that led to surface area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration medium compression. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached. Beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS. 
Throughout the study, an optimal pleat count which satisfies both initial pressure drop and efficiency requirements did not necessarily exist. The work has also suggested that a valid comparison of the pleat densities should be based on the effective surface area that participates in the filtration action, and not on the total surface area the pleat density provides.
Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop
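The link between face velocity and the much lower velocity through the media itself is pure geometry: pleating multiplies the media area exposed to the flow. A back-of-envelope sketch, with idealized geometry (no pleat crowding or deflection) and invented panel dimensions:

```python
# Back-of-envelope relation between face velocity and media velocity for a
# pleated panel. Geometry is idealized (no pleat crowding, deflection or
# medium compression) and the panel dimensions are illustrative assumptions.
def velocities(flow_m3h, panel_w, panel_h, pleat_depth, pleats_per_100mm):
    """panel_w, panel_h, pleat_depth in metres; returns (face, media) in m/s."""
    face_area = panel_w * panel_h                      # frontal area, m^2
    n_pleats = pleats_per_100mm * panel_w * 1000 / 100
    media_area = 2 * pleat_depth * panel_h * n_pleats  # both sides of each pleat
    flow = flow_m3h / 3600.0                           # m^3/h -> m^3/s
    return flow / face_area, flow / media_area

face_v, media_v = velocities(3000, 0.592, 0.592, 0.046, 30)
print(f"face velocity  = {face_v:.2f} m/s")
print(f"media velocity = {media_v:.3f} m/s")
```

The ratio between the two velocities equals the area multiplication factor of the pleating; pleat crowding and panel deflection shrink the *effective* media area, pushing the real media velocity (and hence pressure drop) above this idealized estimate, which is the mechanism the study identifies.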
Procedia PDF Downloads 135
638 Occasional Word-Formation in Postfeminist Fiction: Cognitive Approach
Authors: Kateryna Nykytchenko
Abstract:
Modern fiction and non-fiction writers commonly use their own lexical and stylistic devices to capture a reader's attention and bring certain thoughts and feelings to the reader. Among such devices is one of the neologic notions: individual author's formations, i.e. occasionalisms or nonce words. To a significant extent, a host of examples of new words occurs in the chick lit genre, which has experienced exponential growth in recent years. Chick lit is a new-millennial postfeminist fiction which focuses primarily on twenty- to thirtysomething middle-class women. It brings into focus the image of 'a new woman' of the 21st century, who is always fallible and funny. This paper aims to investigate different types of occasional word-formation which reflect the cognitive mechanisms of conveying women's perception of the world. The chick lit novels of Irish author Marian Keyes present a genuinely innovative mixture of forms, both literary and nonliterary, which is displayed in different types of occasional word-formation processes such as blending, compounding, creative respelling, etc. Crossing existing mental and linguistic boundaries, and adapting herself to new and overlapping linguistic spaces, the chick lit author creates new words which demonstrate the result of the development and progress of language and the relationship between language, thought and new reality, ultimately resulting in hybrid word-formation (e.g. affixation or pseudoborrowing). Moreover, this article attempts to present the main characteristics of the chick lit fiction genre with the help of Marian Keyes's novels and their influence on occasionalisms. There has been a lack of research concerning the cognitive nature of occasionalisms. The current paper intends to account for occasional word-formation as a set of interconnected cognitive mechanisms, operations and procedures melded together to create a new word. 
The results of the generalized analysis solidify the argument that the kind of new knowledge an occasionalism manifests is inextricably linked with the cognitive procedure underlying it, which results in a corresponding type of word-formation process. In addition, the findings of the study reveal that the necessity of creating occasionalisms in postmodern fiction novels arises from the need to write in a new way, keeping up with a perpetually developing world, and thus with the evolution of the speaker herself and her perception of the world.
Keywords: chick lit, occasionalism, occasional word-formation, cognitive linguistics
Procedia PDF Downloads 182
637 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that the connections in data centers are typically realized within a short distance, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation service conditions. Moreover, it has to be taken into account that MM fiber components have a higher production tolerance for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core belongs among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease of insertion losses (IL) and achieves effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. 
Furthermore, we focused on the estimation of particular defects and errors which can realistically occur, such as eccentricity, connector shifting or dust; these were simulated and measured, and their effect on EF statistics and the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
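Encircled flux is the fraction of total launched power contained within a given radius of the core centre, computed from the measured near-field intensity profile. A minimal sketch on a synthetic Gaussian profile follows; the profile shape, mode-field width and control radius are assumptions for illustration (real EF templates are defined in standards such as IEC 61280-1-4):

```python
import numpy as np

# Encircled flux EF(r): cumulative fraction of launched power within radius r,
# computed here for a synthetic Gaussian near-field profile (illustrative only).
r = np.linspace(0.0, 25.0, 500)          # radius, microns (50 um core -> 25 um)
intensity = np.exp(-(r / 12.0) ** 2)     # assumed near-field intensity I(r)

# Power in an annulus is I(r) * 2*pi*r dr; with uniform spacing the dr factor
# cancels in the cumulative fraction.
power_density = intensity * 2.0 * np.pi * r
ef = np.cumsum(power_density)
ef /= ef[-1]

# EF at a 15 um radius -- the kind of control point an EF template specifies.
idx = np.searchsorted(r, 15.0)
print(f"EF(15 um) = {ef[idx]:.3f}")
```

Connector eccentricity or shift moves launched power toward the cladding, which shows up directly as a drop of EF at the inner control radii; that is why the abstract evaluates defects through their effect on EF statistics.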
Procedia PDF Downloads 377
636 Standardizing and Achieving Protocol Objectives for Chest Wall Radiotherapy Treatment Planning Process Using an O-ring Linac in High-, Low- and Middle-Income Countries
Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun
Abstract:
Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results shown in the Dose Volume Histograms (DVHs) and the objectives proposed in the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a completely image-guided device, since each radiotherapy session requires a Mega-Voltage Cone Beam Computed Tomography (MVCBCT) scan before delivery. In each MVCBCT the Linac delivers 9 MU, which is taken into account while performing the planning. To start the standardization, the TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. The planning was developed using 4 semi-arcs from 179 to 305 degrees. The planner must create optimization volumes for targets and organs at risk (OARs); the main difficulty for the planner was the base dose contributed by the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The manually created plans achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves are clinically acceptable. Conclusions: Although the O-ring Linac cannot acquire kV images and the cone beam CT is created using MV energy, the dose delivered by the daily image setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.
Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning
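The protocol's 40.05 Gy in 15 fractions corresponds to 2.67 Gy per fraction; the standard linear-quadratic EQD2 conversion is commonly used to compare such hypofractionated schedules with conventional 2 Gy fractionation. A minimal sketch, where the alpha/beta value is illustrative only and not taken from the study:

```python
def eqd2(total_dose, n_fractions, alpha_beta):
    """Equivalent dose in 2 Gy fractions (linear-quadratic model)."""
    d = total_dose / n_fractions          # dose per fraction (Gy)
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

dose_per_fraction = 40.05 / 15            # 2.67 Gy per fraction
# Illustrative alpha/beta of 4 Gy; clinical values vary by tissue and endpoint
equivalent = eqd2(40.05, 15, 4.0)
```

Any clinical use of such a conversion would of course follow the adopted protocol's own constraints rather than this simplified formula.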
Procedia PDF Downloads 119
635 From Abraham to Average Man: Game Theoretic Analysis of Divine Social Relationships
Authors: Elizabeth Latham
Abstract:
Billions of people worldwide profess some feeling of psychological or spiritual connection with the divine. The majority of them attribute this personal connection to the God of the Christian Bible. The objective of this research was to discover what can be known about the exact social nature of these relationships and to see whether they mimic the interactions recounted in the Bible; if a worldwide majority believes that the Christian Bible is a true account of God’s interactions with mankind, it is reasonable to assume that the interactions between God and these people would be similar to the ones in the Bible. This analysis required an unusual method of biblical analysis: game theory. Because the research focused on documented social interaction between God and man in scripture, it was important to go beyond text-analysis methods. We used stories from the New Revised Standard Version of the Bible to set up “games” using economics-style matrices featuring each player’s motivations and possible courses of action, modeled after interactions in the Old and New Testaments between the Judeo-Christian God and some mortal person. We examined all relevant interactions for the objectives held by each party and their strategies for obtaining them. These findings were then compared to similar “games” created from interviews with people subscribing to different levels of Christianity, ranging from barely practicing to clergymen. The range was broad so as to look for a correlation between scriptural knowledge and game-similarity to the Bible. Each interview described a personal experience someone believed they had with God, and matrices were developed to describe each one as a social interaction: a “game” to be analyzed quantitatively. The data showed that in most cases, the social features of God-man interactions in the modern lives of people were like those present in the “games” between God and man in the Bible.
This similarity was referred to in the study as “biblical faith,” and it alone was a fascinating finding with many implications. The even more notable finding, however, was that the amount of game-similarity present did not correlate with the amount of scriptural knowledge. Each participant was also surveyed on family background, political stances, general education, and scriptural knowledge, and those who had biblical faith were not necessarily the ones who knew the Bible best. Instead, there was a high degree of correlation between biblical faith and family religious observance. It seems that to have a biblical psychological relationship with God, it is more important to have a religious family than to have studied scripture, a surprising insight with significant implications for the practice and preservation of religion.
Keywords: bible, Christianity, game theory, social psychology
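The economics-style matrices described above can be sketched as ordinary payoff tables. A minimal illustration of encoding one such "game" and computing a best response; the action labels and payoff numbers are hypothetical, not values from the study:

```python
# Hypothetical 2x2 payoff matrices for a divine-mortal interaction:
# rows = mortal's actions (0 = obey, 1 = disobey),
# cols = God's actions (0 = covenant, 1 = withhold).
mortal_payoff = [[3, 0],
                 [1, 1]]

def best_response_rows(payoff, col):
    """Row indices maximizing the row player's payoff against column col."""
    column = [row[col] for row in payoff]
    best = max(column)
    return [i for i, v in enumerate(column) if v == best]

# Mortal's best response if God chooses action 0 ("covenant")
obey = best_response_rows(mortal_payoff, 0)
```

With both players' matrices in hand, a pair of mutually best responses identifies a pure-strategy equilibrium, which is presumably the kind of stable interaction pattern the study compared across biblical and interview-derived games.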
Procedia PDF Downloads 157
634 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents.
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; previous work does not provide clear guidelines or recommend statistically sound tests, but rather offers a survey that collects many features to take into account, as well as related work. c) We provide a novel method to compute the performance measures of unsupervised proposals, which would otherwise require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually, but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them, help improve the evaluation of information extraction proposals, and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
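Computing the performance measures mentioned above typically reduces to comparing the records an extractor produced against an annotated ground truth. A minimal sketch of set-based precision, recall, and F1; the record identifiers are hypothetical:

```python
def prf1(extracted, gold):
    """Set-based precision, recall, and F1 for an information extractor."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                      # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical extraction run against hand-annotated records
p, r, f = prf1({"title-1", "price-1", "price-2"},
               {"title-1", "price-1", "author-1"})
```

The statistically sound comparison the paper argues for would then apply hypothesis tests over such per-dataset scores across proposals, rather than comparing single aggregate numbers.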
Procedia PDF Downloads 248
633 Evaluation of Air Movement, Humidity and Temperature Perceptions with the Occupant Satisfaction in Office Buildings in Hot and Humid Climate Regions by Means of Field Surveys
Authors: Diego S. Caetano, Doreen E. Kalz, Louise L. B. Lomardo, Luiz P. Rosa
Abstract:
The energy consumption of non-residential buildings in Brazil has a great impact on the national infrastructure. The growth of energy consumption plays a special role in building cooling systems, driven by people's increased requirements for hygrothermal comfort. This paper presents how the occupants of office buildings perceive and evaluate hygrothermal comfort regarding temperature, humidity, and air movement, considering the cooling systems present in the buildings studied, as assessed by real occupants in areas of hot and humid climate. The paper presents results collected over a long period from 3 office buildings in the cities of Rio de Janeiro and Niteroi (Brazil) in 2015 and 2016, from daily questionnaires with eight questions answered by 114 people over 3 to 5 weeks per building, twice a day (10 a.m. and 3 p.m.). The paper analyses 6 of the 8 questions, with emphasis on the perception of temperature, humidity, and air movement. Statistical analyses cross-referenced the participants' answers with high-time-resolution humidity and temperature data. Regressions comparing internal and external temperatures were performed and then related to the participants' answers. The results were presented as graphics combining statistical plots of temperature and air humidity with the answers of the real occupants. The participants' perceptions of humidity and air movement were also analyzed. The hygrothermal comfort statistical model of the European standard DIN EN 15251 and that of the Brazilian standard NBR 16401 were compared taking into account the participants' perceptions of hygrothermal comfort, with emphasis on air humidity, building on prior studies published within this same research. The studies point to a relative tolerance for higher temperatures than those determined by the standards, besides a variation in the participants' perception concerning air humidity.
The paper presents a set of detailed information that makes it possible to improve the quality of the buildings based on the perceptions of the occupants of the office buildings, contributing to energy reduction without harming health or the required hygrothermal comfort, thereby reducing the consumption of electricity for cooling.
Keywords: thermal comfort, energy consumption, energy standards, comfort models
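The regression treatment described above can be sketched with ordinary least squares. A minimal illustration fitting thermal sensation votes against indoor temperature; the data points below are hypothetical, not measurements from the study:

```python
import numpy as np

# Hypothetical paired observations: indoor operative temperature (deg C)
# and occupants' thermal sensation votes (-3 = cold ... +3 = hot)
temp = np.array([22.0, 23.5, 24.0, 25.5, 26.0, 27.5, 28.0, 29.5])
vote = np.array([-1.0, -0.5, 0.0, 0.0, 0.5, 1.0, 1.5, 2.0])

# Ordinary least squares fit: vote = slope * temp + intercept
slope, intercept = np.polyfit(temp, vote, 1)

# Temperature at which the fitted line predicts a neutral (zero) vote
neutral_temp = -intercept / slope
```

Comparing such a fitted neutral temperature against the limits of DIN EN 15251 or NBR 16401 is one way to quantify the "relative tolerance for higher temperatures" the abstract reports.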
Procedia PDF Downloads 323
632 The Importance of Clinical Pharmacy and Computer Aided Drug Design
Authors: Peter Edwar Mortada Nasif
Abstract:
The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in incorporating CAD content into the architectural education curriculum. Using existing literature, this study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli, were collected to serve as a case study on how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but these treatments have their side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific for the hepatitis C virus (HCV). Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. This study used homology modeling and ab initio modeling to generate the 3D structures of the proteins and then identified pockets in the proteins.
Acceptable ligands for the pockets have been developed using the de novo drug design method. Pocket geometry is taken into account when designing the ligands. Among the various ligands generated, a new specific ligand for each of the HCV protein domains has been proposed.
Keywords: drug design, anti-viral drug, in-silico drug design, hepatitis C virus, computer aided design, CAD education, education improvement, small-size contractor, automatic pharmacy, PLC, control system, management system, communication
Procedia PDF Downloads 26
631 Using Hemicellulosic Liquor from Sugarcane Bagasse to Produce Second Generation Lactic Acid
Authors: Regiane A. Oliveira, Carlos E. Vaz Rossell, Rubens Maciel Filho
Abstract:
Lactic acid, besides being a valuable chemical, may be considered a platform for other chemicals. In fact, the feasibility of hemicellulosic sugars as a feedstock for the lactic acid production process may remove some of the barriers to second generation bioproducts, especially bearing in mind the 5-carbon sugars from the pre-treatment of sugarcane bagasse. With this in mind, the purpose of this study was to use the hemicellulosic liquor from sugarcane bagasse as a substrate to produce lactic acid by fermentation. To release the sugars from hemicellulose, a pre-treatment with dilute sulfuric acid was carried out in order to obtain a xylose-rich liquor with a low concentration of fermentation-inhibiting compounds (≈ 67% xylose, ≈ 21% glucose, ≈ 10% cellobiose and arabinose, and around 1% inhibiting compounds such as furfural, hydroxymethylfurfural and acetic acid). The hemicellulosic sugars, supplemented with 20 g/L of yeast extract, were used in a fermentation process with Lactobacillus plantarum to produce lactic acid. The pH of the fermentation was controlled by automatic injection of Ca(OH)2 to keep it at 6.00. The lactic acid concentration remained stable from the time the glucose was depleted (48 hours of fermentation), with no further production. Lactic acid production is accompanied by the concomitant consumption of xylose and glucose. The yield of the fermentation was 0.933 g lactic acid/g sugars. Besides, no by-products were detected, which suggests that the microorganism uses homolactic fermentation to produce its own energy through the pentose-phosphate pathway. Through facultative heterofermentative metabolism, bacteria such as L. plantarum consume pentoses, but the energy efficiency for the cell is lower than during hexose consumption. This implies both slower cell growth and a reduction in lactic acid productivity compared with the use of hexoses. Also, L. plantarum was shown to have the capacity for lactic acid production from hemicellulosic hydrolysate without detoxification, which is very attractive in terms of robustness for an industrial process. Xylose from hydrolyzed bagasse without detoxification is consumed, although the inhibitors in the hydrolysate (especially aromatic inhibitors) affect the productivity and yield of lactic acid. The use of these sugars and the lack of need for detoxification of the C5 liquor from hydrolyzed sugarcane bagasse are crucial factors for the economic viability of second generation processes. Taking this information into account, the production of second generation lactic acid using sugars from hemicellulose appears to be a good alternative for the complete utilization of the sugarcane plant, directing molasses and cellulosic carbohydrates to produce 2G ethanol and hemicellulosic carbohydrates to produce 2G lactic acid.
Keywords: fermentation, lactic acid, hemicellulosic sugars, sugarcane
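The reported yield of 0.933 g lactic acid per g sugars is a simple mass ratio of product formed to substrate consumed. A minimal sketch; the batch quantities below are hypothetical, chosen only so the ratio matches the reported value:

```python
def fermentation_yield(lactic_acid_g, sugars_consumed_g):
    """Product yield in g lactic acid per g sugars consumed."""
    return lactic_acid_g / sugars_consumed_g

# Hypothetical batch: 50 g of mixed sugars (xylose + glucose) consumed
# in a 1 L working volume, producing 46.65 g of lactic acid
sugars_consumed = 50.0
lactic_acid = 46.65
yield_g_per_g = fermentation_yield(lactic_acid, sugars_consumed)
```

For homolactic fermentation the theoretical maximum is 1 g/g (one lactate per triose equivalent), so a measured 0.933 g/g indicates that most of the consumed carbon ended up in product rather than biomass or by-products.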
Procedia PDF Downloads 374
630 Reconceptualizing “Best Practices” in the Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical, renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is still remains a black box that is yet to be investigated, given the trend of continuous changes in public sector performance, as well as the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, such as benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practices when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. First, we observed the benchmarkers' management process of best practices in a public organization, so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work.
Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol that was theoretically informed in its first part, to spot the causal mechanisms suggested by previous research studies, and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on explaining their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization's best practice management.
Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 129