Search results for: finite element modelling
377 Melaninic Discrimination among Primary School Children
Authors: Margherita Cardellini
Abstract:
Dark-skinned children are often victims of discrimination from adults and society, but, to our knowledge, few studies focus specifically on skin color discrimination directed at children by other children. Even today, the 'color-blind children' ideology is widespread among adults, teachers, and educators, and perhaps also among scholars, who seem wary of studying expressions of racism in childhood. This social and cultural belief leads people to think that all children, because of their age and their brief experience of the world, are uninterested in skin color. Sometimes adults think that children are even incapable of perceiving skin colors, and that it could be dangerous to talk about melaninic differences with them, because they could then notice this difference, producing prejudices and racism. Psychological and neurological research has shown for many years that infants are already capable of perceiving skin color and ethnic differences by the age of 3 months. Starting from this theoretical framework, we conducted a research project to understand if and how primary school children talk about skin colors, picking up any stereotypes or prejudices. We chose the focus group as a methodology to stimulate the group dimension and interaction, and several stories about episodes of skin color discrimination within the children's classrooms or schools emerged. Using the photo elicitation technique, we stimulated talk about the research object, skin color, by asking the children for 'the first two things that come into your mind' when they looked at the photographs presented during the focus group, which showed dark- and light-skinned women and men. This paper presents some of these stories about episodes of discrimination, ordered by increasing proximity to the discriminatory act: a story of discrimination that happened within the school, one in an after-school daycare, one in the classroom, and even episodes of discrimination recounted during the focus groups in the presence of the discriminated child. If it is true that the Declaration of the Rights of the Child states that every child should be free from discrimination, it is also true that every adult should protect children from every form of discrimination. How, as adults, can we defend children against discrimination if we cannot admit that even children are potential actors of discrimination? Without awareness, we risk devaluing these episodes, implicitly confident that the only way to fight discrimination is to keep it quiet. The right not to be discriminated against goes through the right to talk about one's own experiences of discrimination and the right to perceive the unfairness of the constant depreciation of skin color or any element of physical diversity. Intercultural education could act as a spokesperson for this mission, in the belief that difference and plurality could really become elements of potential enrichment for humanity, starting from children. Keywords: colorism, experiences of discrimination, primary school children, skin color discrimination
Procedia PDF Downloads 196
376 Reviewers’ Perception of the Studio Jury System: How They View its Value in Architecture and Design Education
Authors: Diane M. Bender
Abstract:
In architecture and design education, students learn and understand their discipline through lecture courses and within studios. A studio is where the instructor works closely with students to help them understand design by doing design work. The final jury is the culmination of the studio learning experience. Its value and significance are rarely questioned. Students present their work before their peers, instructors, and invited reviewers, known as jurors. These jurors are recognized experts who add a breadth of feedback to students, mostly in the form of a verbal critique of the work. Since the design review or jury has been a common element of studio education for centuries, jurors themselves have been instructed in this format. Therefore, they understand its value from both a student and a juror perspective. To better understand how these reviewers see the value of a studio review, a survey was distributed to reviewers at a multi-disciplinary design school within the United States. Five design disciplines were involved in this case study: architecture, graphic design, industrial design, interior design, and landscape architecture. Respondents (n=108) provided written comments about their perceived value of the studio review system. The average respondent was male (64%), between 40-49 years of age, and had attained a master’s degree. Qualitative analysis with thematic coding revealed several themes. Reviewers view the final jury as important because it provides a variety of perspectives from unbiased external practitioners and prepares students for similar presentation challenges they will experience in professional practice. They also see it as a way to validate the assessment and evaluation of students by faculty. In addition, they see a personal benefit for themselves and their firm – the ability to network with fellow jurors, professors, and students (i.e., future colleagues). Respondents also provided additional feedback about the jury system and studio education in general. Typical responses included a desire for earlier engagement with students; a better explanation from the instructor about the project parameters, rubrics/grading, and guidelines for juror involvement; a way to balance encouraging feedback against overly critical comments; and providing training for jurors prior to reviews. While this study focused on the studio review, the findings are equally applicable to other disciplines. Suggestions will be provided on how to improve the preparation of guests in the learning process and how their interaction can positively influence student engagement. Keywords: assessment, design, jury, studio
Procedia PDF Downloads 65
375 On the Right to an Effective Administrative Justice in the Republic of Macedonia: Challenges and Problems
Authors: Arlinda Memetaj
Abstract:
A sound system of administrative justice represents a vital element of democratic governance. The proper control of public administration consists not only of a sound civil service framework and legislative oversight, but also of the empowerment of the public and the courts to hold public officials accountable for their decision-making, through the application of fair administrative procedural rules and the use of appropriate administrative appeals processes and judicial review. The establishment of an effective public administration has been, since the 1990s, among the most 'important and urgent' strategic objectives of the Republic of Macedonia. To this aim, the country has so far adopted a large series of legislative and strategic documents related to all aspects of the administrative justice system. The latter is designed to strengthen the legal position of citizens, businesses, civic organizations, and other societal subjects. 'Changes and reforms' in this field have thus been the most frequently used terms in the country for more than 20 years. Several years ago the country established administrative courts, while repeatedly amending the Law on the General Administrative Procedure (LGAP). The new LGAP was adopted in 2015 and introduced considerable innovations in this respect. The most recent inputs in this regard include the National Public Administration Reform Strategy 2017–2022, one of the key expected results of which is the effective protection of citizens' rights. Despite the aforesaid, there is still a series of interrelated shortcomings in this area, such as (to mention just a few) the complex appeal procedure and delays in enforcing court rulings. Against the above background, the paper firstly describes the Macedonian institutional and legislative framework in the above field, and then illustrates the shortcomings therein. It finally claims that the current status quo may be overcome only if there is proper implementation of the administrative courts' decisions and a far stricter international monitoring process thereof. A new approach and strong political commitment from the highest political leadership are thus absolutely needed to ensure the principles of transparency, accountability and merit in public administration. The main methods used in this paper are descriptive, analytical and comparative, due to the very character of the paper itself. Keywords: administrative justice, administrative procedure, administrative courts/disputes, European Human Rights Court, human rights, monitoring, reform, benefit
Procedia PDF Downloads 156
374 Developmental Relationships between Alcohol Problems and Internalising Symptoms in a Longitudinal Sample of College Students
Authors: Lina E. Homman, Alexis C. Edwards, Seung Bin Cho, Danielle M. Dick, Kenneth S. Kendler
Abstract:
Research supports an association between alcohol problems and internalising symptoms, but the understanding of how the two phenotypes relate to each other is poor. It has been hypothesized that the relationship between the phenotypes is causal; however, investigations regarding its direction are inconsistent. Clarity about the relationship between the two phenotypes may be gained by investigating their developmental inter-relationships longitudinally. The objective of the study was to investigate a) changes in alcohol problems and internalising symptoms in college students across time, b) the direction of effect of growth between alcohol problems and internalising symptoms from late adolescence to emerging adulthood, and c) possible gender differences. The present study adds to the knowledge of the comorbidity of alcohol problems and internalising symptoms by examining a longitudinal sample of college students and by examining the simultaneous development of the symptoms. A sample of college students is of particular interest, as symptoms of both phenotypes often have their onset around this age. A longitudinal sample of college students from a large, urban, public university in the United States was used. Data were collected over a period of 2 years at 3 time points. Latent growth models were applied to examine growth trajectories. Parallel process growth models were used to assess whether the initial level and rate of change of one symptom affected the initial level and rate of change of the second symptom. Possible effects of gender and ethnicity were investigated. Alcohol problems significantly increased over time, whereas internalising symptoms remained relatively stable. The two phenotypes were significantly correlated in each wave; correlations were stronger among males. The initial level of alcohol problems was significantly positively correlated with the initial level of internalising symptoms. The rate of change of alcohol problems positively predicted the rate of change of internalising symptoms for females but not for males. The rate of change of internalising symptoms did not predict the rate of change of alcohol problems for either gender. Participants of Black and Asian ethnicities indicated significantly lower levels of alcohol problems and a lower increase of internalising symptoms across time, compared to White participants. Participants of Black ethnicity also reported significantly lower levels of internalising symptoms compared to White participants. The present findings provide additional support for a positive relationship between alcohol problems and internalising symptoms in youth. Our findings indicated that both internalising symptoms and alcohol problems increased throughout the sample and that the phenotypes were correlated. The findings mainly imply a bi-directional relationship between the phenotypes in terms of significant associations between initial levels as well as rates of change. No direction of causality was indicated in males, but significant results were found in females, where alcohol problems acted as the main driver of the comorbidity of alcohol problems and internalising symptoms; alcohol may have more detrimental effects in females than in males. Importantly, our study examined a population-based longitudinal sample of college students, revealing that the observed relationships are not limited to individuals with clinically diagnosed mental health or substance use problems. Keywords: alcohol, comorbidity, internalising symptoms, longitudinal modelling
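For readers unfamiliar with growth modelling, the sketch below illustrates a univariate linear growth model of the kind used as a building block in such analyses, fitted as a mixed-effects model with a random intercept (initial level) and random slope (rate of change) per participant. The synthetic data, variable names and use of statsmodels are assumptions for illustration only; the parallel process model in the study couples two such growth processes within a structural equation framework.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per wave.
rng = np.random.default_rng(0)
n, waves = 200, 3
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(waves), n),
})
# Participant-specific starting levels and rates of change, plus noise.
intercepts = rng.normal(0.0, 0.5, n)
slopes = rng.normal(0.4, 0.2, n)
df["alcohol"] = (1.0 + intercepts[df["pid"]] + slopes[df["pid"]] * df["wave"]
                 + rng.normal(0.0, 0.5, len(df)))

# Random-intercept, random-slope growth model: the fixed effect of `wave`
# is the average trajectory; the random effects capture individual growth.
model = smf.mixedlm("alcohol ~ wave", df, groups=df["pid"], re_formula="~wave")
result = model.fit()
print(result.summary())
```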
Procedia PDF Downloads 350
373 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland
Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski
Abstract:
Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue was never a major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of the ecological, hydrological and morphological parameters of the lake. Method: 3D 2-phase and 1-phase CFD models were analysed to determine the hydrodynamics in the Sulejow Reservoir. Development of a 3D, 2-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. A 1-phase CFD model, in relation to a 2-phase CFD model, excludes only the dynamics of waves from the simulations, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the flow velocity, to take the effect of the wind on hydrodynamics into account in the 1-phase CFD model, the plate must move with a speed and direction equal to the speed and direction of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D, 2-phase model was elaborated. Results: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: • The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake. Recirculating zones, with sizes of up to half a kilometer, may increase water retention time in this region. • The results of the simulations confirm the pronounced effect of the wind on the development of the water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algal blooms. Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones in the lake, which increase water retention time and might affect the accumulation of nutrients, were detected. An accurate CFD model of the hydrodynamics in a large water body could help in the development of water quality forecasts, especially in terms of eutrophication, and in the water management of big water bodies. Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics
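As a small illustration of the validation step quoted above (average error below 10% between computed and measured velocity profiles), the sketch below computes an average relative error between a hypothetical measured ADCP profile and the corresponding CFD result; all numbers are invented for the example.

```python
import numpy as np

# Hypothetical depth-wise velocity profiles at one ADCP station (m/s):
# values measured with the StreamPro ADCP and values computed by the CFD model.
measured = np.array([0.32, 0.30, 0.27, 0.23, 0.18, 0.12])
computed = np.array([0.34, 0.31, 0.26, 0.22, 0.17, 0.13])

# Average relative error between computed and measured profiles,
# the kind of agreement metric quoted in the abstract (< 10 %).
relative_error = np.abs(computed - measured) / np.abs(measured)
print(f"average relative error: {relative_error.mean():.1%}")
```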
Procedia PDF Downloads 401
372 Optimal Delivery of Two Similar Products to N Ordered Customers
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from the depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem which has realistic applications to the distribution of materials to shops, healthcare facilities or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products. We name these products product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence. This means that first customer 1 must be serviced, then customer 2, then customer 3, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities. The actual preference of each customer becomes known when the vehicle visits the customer. It is also assumed that the quantity that each customer demands is a random variable with a known distribution. The actual demand is revealed upon the vehicle’s arrival at the customer’s site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. It is possible to find the optimal routing strategy using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over those routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method may be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering products, collects them from the ordered customers. Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem
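To make the dynamic programming formulation concrete, the sketch below solves a deliberately simplified single-product variant of the problem: customers are visited in a fixed order, each demand is random, and before each leg the vehicle chooses between driving directly to the next customer or detouring via the depot to restock, with a per-unit penalty when demand cannot be met. The numbers, and the single-product simplification itself, are assumptions for illustration; the paper's algorithm handles two products, stochastic customer preferences and the threshold structure of the optimal policy.

```python
from functools import lru_cache

# Illustrative data, all made up for the sketch.
N = 4                       # customers, served in the fixed order 1..N
CAPACITY = 5                # vehicle capacity
DEMANDS = {1: 0.3, 2: 0.5, 3: 0.2}      # demand distribution of each customer
SHORTAGE_PENALTY = 4.0      # cost per unit that cannot be delivered
dist_next = [2.0, 1.5, 2.5, 1.0]        # travel cost customer i -> i+1 (index 0 is depot -> 1)
dist_depot = [0.0, 3.0, 2.0, 2.5, 1.5]  # travel cost customer i <-> depot

@lru_cache(maxsize=None)
def cost_to_go(i: int, stock: int) -> float:
    """Minimal expected cost after serving customer i with `stock` units left."""
    if i == N:                           # all customers served: return to the depot
        return dist_depot[N]
    # Option 1: drive directly to customer i+1 with the current stock.
    direct = dist_next[i] + expected_service(i + 1, stock)
    # Option 2: detour via the depot, refill to capacity, then serve customer i+1.
    restock = dist_depot[i] + dist_depot[i + 1] + expected_service(i + 1, CAPACITY)
    return min(direct, restock)

def expected_service(i: int, stock: int) -> float:
    """Expected cost of serving customer i given the on-board stock."""
    total = 0.0
    for demand, prob in DEMANDS.items():
        short = max(demand - stock, 0)   # undelivered units, penalised per unit
        total += prob * (SHORTAGE_PENALTY * short
                         + cost_to_go(i, stock - (demand - short)))
    return total

if __name__ == "__main__":
    # The vehicle leaves the depot fully loaded; dist_next[0] is depot -> customer 1.
    print("expected optimal cost:", dist_next[0] + expected_service(1, CAPACITY))
```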
Procedia PDF Downloads 267
371 The Creation of Calcium Phosphate Coating on Nitinol Substrate
Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova
Abstract:
NiTi alloys are widely used as implants in medicine due to their unique properties, such as superelasticity, the shape memory effect and biocompatibility. However, despite these properties, one of the major problems is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to oxidation and cracking of NiTi implants, which provokes nickel segregation from the matrix to the surface and its release into living tissues. Nickel is a toxic element and can cause cancer, allergies, etc. One of the most popular ways to solve this problem is to create a corrosion-resistant coating on NiTi. There are many coatings of this type, but not all of them have good biocompatibility, which is very important for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. Therefore, the aim of this study is to investigate the structure of a Ca-P coating on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to obtain the film. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled by the sputtering parameters. Three different NiTi samples with Ca-P coatings were obtained. XRD, AFM, SEM and EDS were used to study the composition, structure and morphology of the coating phase. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate. Wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had better biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix was represented by TiNi intermetallic compounds such as B2, Ti2Ni and Ni3Ti. SEM shows that only the sample sputtered for three hours has a dense, defect-free coating. Wettability tests show that the sample with the densest coating has the lowest contact angle, 40.2°, and the largest surface free energy, 57.17 mJ/m², which is mostly dispersive. A scratch test was carried out to investigate the adhesion of the coating to the surface, and it was shown that all coatings were removed by a cohesive mechanism. However, at a load of 30 N, the indenter reached the substrate in two out of three samples, the exception being the sample with the densest coating. It was concluded that the most promising sputtering mode was the third one, which consisted of three hours of deposition. This mode produced a defect-free Ca-P coating with good wettability and adhesion. Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering
Procedia PDF Downloads 72
370 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) An aleatory uncertainty modeled as a random variable. It has a fixed functional form and known coefficients. This uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval. This uncertainty is reducible. (iii) A parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box in which the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval. This uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling is not exhaustive in the sampling-based methodology. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO. Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
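As a minimal illustration of the two propagation approaches named in the title, the sketch below compares first order error analysis (propagating input variances through the linearised model) with sampling-based propagation on a toy model. The model and the input statistics are invented for the example and are unrelated to the NASA Langley challenge problem.

```python
import numpy as np

# Toy model y = x1**2 + 3*x2 with independent normal inputs (made-up statistics).
mu = np.array([2.0, 1.0])      # input means
sigma = np.array([0.1, 0.2])   # input standard deviations

def model(x1, x2):
    return x1**2 + 3.0 * x2

# (1) First order error analysis: propagate variances through the linearised
#     model, sigma_y^2 ~= sum_i (dy/dx_i)^2 * sigma_i^2, evaluated at the mean.
grad = np.array([2.0 * mu[0], 3.0])          # partial derivatives at the mean
sigma_y_fo = np.sqrt(np.sum((grad * sigma) ** 2))

# (2) Sampling-based propagation: draw inputs, evaluate the model, and read the
#     output statistics off the sample.
rng = np.random.default_rng(42)
samples = rng.normal(mu, sigma, size=(100_000, 2))
y = model(samples[:, 0], samples[:, 1])

print(f"first-order sigma_y : {sigma_y_fo:.4f}")
print(f"sampling    sigma_y : {y.std(ddof=1):.4f}")
```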
Procedia PDF Downloads 240
369 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement
Authors: Rajkumar Ghosh
Abstract:
Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on conventional techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality issues, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics. Keywords: earthquake, out-of-sequence thrust, disaster, human life
Procedia PDF Downloads 77
368 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis
Authors: Michael Lognoul
Abstract:
In April 2021, the European Commission released its AI Act Proposal, the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on the respect of fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g. information on the AI nature of systems in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, of the content of the obligations, and of their practical implications. Regarding the scope of the transparency obligations tailored for high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might raise concern regarding the objective of the Proposal. Regarding the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific, from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. However, regarding the practical implications of these transparency obligations, the research notes that concern arises due to the potential illiteracy of high-risk AI system users. They might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information should be comprehensible to its recipients (i.e. users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of the transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, should be included in the Proposal in addition to the transparency obligations. Keywords: AI Act proposal, explainability of AI, high-risk AI systems, transparency requirements
Procedia PDF Downloads 320
367 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus for each site) – regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing – constructing segments from the polygons' sides – and postprocessing – constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon. The polygon is obviously included in its locus. Using these properties in the VD construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be taken into account effectively. The solution is also performed via a reduction. Preprocessing is the construction of a set of sites from the vertices and edges of the polygons. Each site has an orientation such that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing is the selection of the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved due to the following fundamental solutions: 1. The algorithm constructs only those VD edges which are on the outside of the polygons. The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons. Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
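As an illustration of the preprocessing step described above (each site oriented so that the polygon interior lies to its left), the sketch below splits a simple polygon into vertex and edge sites with that orientation convention. The data structures and names are assumptions for the example, not the authors' implementation.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def oriented_sites(polygon: List[Point]):
    """Split a simple polygon into vertex and edge sites, orienting every edge
    so that the polygon interior lies to its left. Assumes the vertices form a
    simple polygon; clockwise input is reversed to counter-clockwise order."""
    # Signed area (shoelace formula): positive for counter-clockwise vertices.
    area2 = sum(x0 * y1 - x1 * y0
                for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]))
    if area2 < 0:                        # clockwise input -> reverse it
        polygon = polygon[::-1]

    vertex_sites = list(polygon)
    edge_sites = [(polygon[i], polygon[(i + 1) % len(polygon)])
                  for i in range(len(polygon))]
    return vertex_sites, edge_sites

# Example: a unit square given clockwise; the helper re-orients it so that,
# walking along each edge site, the interior is on the left.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
vertices, edges = oriented_sites(square)
print(edges[0])
```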
Procedia PDF Downloads 177
366 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings
Authors: Nadine Maier, Martin Mensinger, Enea Tallushi
Abstract:
In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may have an influence on the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities which occur during incremental launching, and whether this depends on the respective launching bearing. To this end, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses, with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. Currently, we are evaluating different parameters for both types of launching bearings, such as load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the size of the meshing, is taken into account in the numerical calculations of the parametric study. As a geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, so that a GMNIA analysis is performed to determine the load capacity. Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the converging load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted on a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to be able to make an assessment for the case relevant in practice. The input and the output are automated and depend on the given parameters. Thus, we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and a Python code, which allows us to evaluate and compare more parameters faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, which allows a practical evaluation of different parameters for the buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction. Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling
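Since the abstract mentions that input and output are automated with APDL and Python, the sketch below shows one plausible shape of such automation: generating every combination of the study parameters and writing a small parameter file per run for the finite element model to read. The parameter names, values, and file format are assumptions for illustration only, not the authors' actual scripts.

```python
import itertools
from pathlib import Path

# Hypothetical parameter grid for the buckling parametric study.
study = {
    "ECC": [0.0, 5.0, 10.0],        # load eccentricity in mm
    "LOAD_LEN": [300.0, 600.0],     # load introduction length in mm
    "WEB_T": [10.0, 12.0],          # web thickness in mm
    "STIFF_DIST": [500.0, 750.0],   # distance between longitudinal stiffeners in mm
}

out_dir = Path("runs")
out_dir.mkdir(exist_ok=True)

# One parameter file per combination; the separate model script would read it.
for run_id, values in enumerate(itertools.product(*study.values())):
    lines = [f"{name}={value}" for name, value in zip(study.keys(), values)]
    (out_dir / f"run_{run_id:03d}.inp").write_text("\n".join(lines) + "\n")

print(f"wrote {run_id + 1} parameter files to {out_dir}/")
```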
Procedia PDF Downloads 107
365 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
In today’s business environment, it is well recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers, and supply chain leaders and scholars are now focusing on the importance of managing supply chain risk. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess their relevant probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCR, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risk can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars. Moreover, practitioners need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms to communicate ideas based on the same concept. Second, a shared understanding of the SCR dimensions will support researchers in focusing on the more important research objective: the operationalization of SCR, which is essential for assessing SCR. In general, a fresh food supply chain is subject to a certain level of risk, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper aims to analyze the risk factors of the fresh food supply chain using an approach comprising both fuzzy logic and hierarchical holographic modelling techniques. This novel approach is able to take advantage of the benefits of both of these well-known techniques and at the same time offset their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to combine these two techniques effectively and seamlessly. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment. Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk
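To give a flavour of the fuzzy logic side of the proposed approach, the sketch below turns crisp likelihood and severity scores for a single supply chain risk into fuzzy grades with triangular membership functions and combines them through a small rule base. The membership breakpoints, rules, and scoring scale are assumptions for illustration; they are not the calibrated model from the case study.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x: float) -> dict:
    # Scores run from 0 to 10; tiny offsets keep the boundary grades at 1.0.
    return {"low": tri(x, -1e-9, 0, 5),
            "medium": tri(x, 0, 5, 10),
            "high": tri(x, 5, 10, 10 + 1e-9)}

# Rule base: risk grade = max over rules of min(likelihood grade, severity grade).
RULES = {("low", "low"): "low", ("low", "medium"): "low", ("medium", "low"): "low",
         ("low", "high"): "medium", ("high", "low"): "medium",
         ("medium", "medium"): "medium", ("medium", "high"): "high",
         ("high", "medium"): "high", ("high", "high"): "high"}

def fuzzy_risk(likelihood: float, severity: float) -> dict:
    like, sev = fuzzify(likelihood), fuzzify(severity)
    risk = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for (l_grade, s_grade), r_grade in RULES.items():
        risk[r_grade] = max(risk[r_grade], min(like[l_grade], sev[s_grade]))
    return risk

# Example: a supply risk judged 7/10 likely and 8/10 severe.
print(fuzzy_risk(7.0, 8.0))
```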
Procedia PDF Downloads 243
364 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), required to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent, different technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually described as discrete systems, and the total removed material is calculated by summation over the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests were performed for both technologies to show the usefulness of this approach. Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
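The sketch below illustrates, in one dimension, the linear dwell-time model mentioned above: the etched depth is the convolution of a beam footprint with the dwell time at each pixel, and the dwell times are recovered from a prescribed depth profile by non-negative least squares. The Gaussian footprint, the grid, and the target profile are made up for the example; the adjoint-based scheme of the paper addresses the nonlinear, surface-dependent case that this linear model cannot.

```python
import numpy as np
from scipy.optimize import nnls

# 1D pixel grid along the beam path and an assumed Gaussian beam footprint.
n = 80
x = np.linspace(0.0, 1.0, n)
sigma = 0.03                                # footprint width (arbitrary units)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma**2))  # footprint matrix

# Prescribed freeform depth profile (invented for the example).
target = 0.5 + 0.3 * np.sin(2.0 * np.pi * x)

# Linear-model inversion: depth = A @ dwell, with dwell times constrained to
# be non-negative (a dwell time cannot be negative).
dwell, _ = nnls(A, target)
achieved = A @ dwell

print("max depth error:", np.max(np.abs(achieved - target)))
```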
Procedia PDF Downloads 275
363 Culture Dimensions of Information Systems Security in Saudi Arabia National Health Services
Authors: Saleh Alumaran, Giampaolo Bella, Feng Chen
Abstract:
The study of organisations’ information security cultures has attracted scholars as well as the healthcare services industry to research the topic and to find appropriate tools and approaches to develop a positive culture. The vast majority of studies in the Saudi national health services concern the use of technology to protect and secure health services information. On the other hand, there is a lack of research on the role and impact of an organisation’s cultural dimensions on information security. This research investigated and analysed the role and impact of cultural dimensions on information security in the Saudi Arabian health service. Hypotheses were tested and two surveys were carried out in order to collect data and information from three major hospitals in Saudi Arabia (SA). The first survey identified the main cultural-dimension problems in SA health services and developed an initial information security culture framework model. The second survey evaluated and tested the developed framework model to assess its usefulness, reliability and applicability. The model is based on human behaviour theory, in which the individual’s attitude is the key element of the individual’s intention to behave as well as of his or her actual behaviour. The research identified six cultural dimensions: Saudi national culture, Saudi health service leadership, employees’ trust, technology, multicultural interactions and employees’ job roles. The research also identified a set of cultural sub-dimensions. These include working values and norms, tribal values and norms, attitudes towards women, power sharing, vision, social interaction, respect and understanding, the hospital intranet, the language(s) used by hospital employees, multi-national culture, the communication system, employees’ job satisfaction and job security. The research identified that (a) human behaviour towards medical information in SA is one of the main threats to information security and one of the main challenges for the SA health authority; (b) the current situation of SA hospitals’ IS cultures falls short in protecting medical information, due to the current values and norms towards information security; and (c) Saudi national culture and employees’ job roles are the dimensions playing the major roles in employees’ attitudes, while technology is the least important dimension playing a role in employees’ attitudes. Keywords: cultural dimension, electronic health record, information security, privacy
Procedia PDF Downloads 351
362 725 Arcadia Street in Pretoria: A Pretoria Case Study Focusing on Urban Acupuncture
Authors: Konrad Steyn, Jacques Laubscher
Abstract:
South African urban design solutions are mostly aligned with European and North American models that are often not appropriate for addressing some of this country’s challenges, such as multiculturalism and decaying urban areas. Sustainable urban redevelopment in South Africa should be comprehensive in nature, sensitive in its manifestation, and robust and inclusive in order to achieve social relevance. This paper argues that the success of an urban design intervention is largely dependent on the public’s perceptions and expectations, and on the way people participate in shaping their environments. The concept of sustainable urbanism is thus more comprehensive than – yet should undoubtedly include – methods of construction, material usage and climate control principles. The case study is a central element of this research paper. 725 Arcadia Street in Pretoria was originally commissioned as a food market structure. A starkly contrasting existing modernist building adjacent to the site forms the morphological background. Built in 1969, it is a valuable part of Pretoria’s modernist fabric. It was realised early on that the project should not be a mere localised architectural intervention, but rather an occasion to revitalise the neighbourhood through urban regeneration. Because of the complex and comprehensive nature of the site and the rich cultural diversity of the area, a multi-faceted approach seemed the most appropriate response. The methodology for collating data consisted of a combination of literature reviews (regarding the historic original fauna and flora and the current plants), observation (frequent site visits) and physical surveying at the neighbourhood level (physical location, connectivity to surrounding landmarks, as well as movement systems and pedestrian flows). This was followed by an exploratory design phase, culminating in the present redevelopment proposal. Since built environment interventions are increasingly based on generalised normative guidelines, an approach focusing on urban acupuncture could serve as an alternative. Celebrating the specific urban condition, urban acupuncture offers an opportunity to influence the surrounding urban fabric and achieve urban renewal through physical, social and cultural mediation. Keywords: neighbourhood, urban renewal, South African urban design solutions, sustainable urban redevelopment
Procedia PDF Downloads 496
361 Conflicts of Interest in the Private Sector and the Significance of the Public Interest Test
Authors: Opemiposi Adegbulu
Abstract:
Conflict of interest is an elusive, diverse and engaging subject and a cross-cutting problem of governance at all levels, from local to global and from the public sector to the corporate and financial sectors. In all these areas, its mismanagement could lead to the distortion of decision-making processes, the corrosion of trust and the weakening of administration. According to Professor Peters, an expert in the area, conflict of interest, a problem at the root of many scandals, has “become a pervasive ethical concern in our professional, organisational, and political life”. Conflicts of interest corrode trust, and, as in the public sector, trust is essential for the market, consumers/clients, shareholders and other stakeholders in the private sector. However, conflicts of interest in the private sector are distinct and must be treated accordingly when regulatory efforts are made to address them. The research looks at identifying conflicts of interest in the private sector and differentiating them from those in the public sector. The public interest is submitted as a criterion which allows for such differentiation. This is significant because it would allow for the use of tailor-made or sector-specific approaches to addressing this complex issue. The study employs theoretical, doctrinal and comparative methods, and is conducted through an extensive review of literature and theories on the definition of conflicts of interest. The nature of conflicts of interest in the private sector will be explored through an analysis of the public sector, where the notion of conflicts of interest appears more clearly identified; reasons why they are of business ethics concern will be advanced; and then, looking again at public sector solutions and other solutions, the study will identify ways of mitigating and managing conflicts in the private sector. An exploration of public sector conflicts of interest and solutions will be carried out because the typologies of conflicts of interest in both sectors appear very similar at the core, and thus lessons can be learnt with regard to the management of these issues in the private sector. This research will then focus on some specific challenges to understanding and identifying conflicts of interest in the private sector: their origin, diverging theories, the psychological barrier to the definition, and similarities with public sector conflicts of interest due to the notions of corrosion of trust, ‘being in a particular kind of situation,’ etc. The notion of public interest will be submitted as a key element at the heart of the distinction between public sector and private sector conflicts of interest. It will then be proposed that the appreciation of the notion of conflicts of interest differs according to sector and from country to country, based on the public interest test, using the United Kingdom (UK), the United States of America (US), France and the Philippines as illustrations. Keywords: conflicts of interest, corporate governance, global governance, public interest
Procedia PDF Downloads 401
360 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods for predicting the properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting the quality of the process, which allows the process quality to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important process among the three, applies resin to glass cloth, heats it in a drying oven, and produces the prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in terms of time and money, which makes the process a good candidate for VM application. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models for M/V and G/T with sufficiently high accuracy. They also provided us with information on “important” predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were excited to find the new insights the models revealed and set out to analyse them further to derive process control implications. T/W, however, turned out not to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus, an effort has to be made to find other factors, which are not currently monitored, in order to understand the process better and improve its quality. In conclusion, the VM application to CCL’s treating process was quite successful. The newly built quality prediction models allowed us to reduce the cost associated with actual metrology as well as to reveal insights on the factors affecting the important quality factors and on the level of our less-than-perfect understanding of the treating process. Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
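As a minimal sketch of the virtual metrology idea described above, the snippet below fits a regression model that predicts one quality factor (standing in for M/V) from a few process and environment variables and reports cross-validated accuracy and feature importances. The column names, the synthetic data, and the choice of a random forest are assumptions for illustration; the abstract does not state which learning algorithms or variables were actually used.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 500
# Hypothetical process and environment variables for the treating process.
data = pd.DataFrame({
    "oven_temp": rng.normal(170, 5, n),
    "line_speed": rng.normal(12, 1, n),
    "resin_ratio": rng.normal(0.45, 0.02, n),
    "humidity": rng.normal(40, 8, n),
})
# Synthetic target standing in for M/V, driven mainly by temperature and speed.
data["mv"] = (3000 - 8 * data["oven_temp"] + 40 * data["line_speed"]
              + rng.normal(0, 20, n))

X, y = data.drop(columns="mv"), data["mv"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))

# Feature importances hint at which process variables drive the quality factor.
model.fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```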
Procedia PDF Downloads 351
359 Stigma Impacts the Quality of Life of People Living with Diabetes Mellitus in Switzerland: Challenges for Social Work
Authors: Daniel Gredig, Annabelle Bartelsen-Raemy
Abstract:
Social work services offered to people living with diabetes tend to be moulded by the prevailing understanding that social work should support people living with diabetes in adhering to medical prescriptions and/or lifestyle changes. As diabetes has been regarded as a condition that carries no stigma, discrimination against people living with diabetes has not been considered. However, there is growing evidence of stigma. To our knowledge, nevertheless, there have been no comprehensive, in-depth studies of stigma and its impact. Against this background, and challenging the present layout of services for people living with diabetes, the present study aimed to establish whether: (1) people living with diabetes in Switzerland experience stigma, and if so, in what context and to what extent; and (2) experiencing stigma impacts the quality of life of those affected. It was hypothesized that stigma would impact their quality of life. It was further hypothesized that low self-esteem, psychological distress, depression, and a lack of social support would be mediating factors. For data collection, an anonymous, self-administered paper-and-pencil questionnaire was used, which drew on a qualitative elicitation study. Data were analysed using descriptive statistics and structural equation modelling. To generate a large and diverse convenience sample, the questionnaire was distributed to the readers of a journal for people living with diabetes in Switzerland, issued in German and French. The sample included 3347 people with type 1 and 2 diabetes, aged 16–96, living in diverse conditions in the German- and French-speaking areas of Switzerland. Respondents reported experiences of discrimination in various contexts and stereotyping based on the beliefs that people with diabetes have low work performance; are inefficient in the workplace; are inferior; are weak-willed in their ability to manage health-related issues; take advantage of their condition; and are to be viewed as pitiful or sick people. Respondents who reported higher levels of perceived stigma reported higher levels of psychological distress (β = .37), more pronounced depressive symptoms (β = .33), and less social support (β = -.22). Higher psychological distress (β = -.29) and more pronounced depressive symptoms (β = -.28), in turn, predicted lower quality of life. These research findings challenge the prevailing understanding of social work services for people living with diabetes in Switzerland and beyond. They call for a less individualistic approach, consideration of the social context in which service users live their everyday lives, and addressing stigma. Social work could thus partner with people living with diabetes in order to fight discrimination and stereotypes. This could include identifying and designing educational and public awareness strategies. In direct social work with people living with diabetes, it could include broaching experiences of stigma and modes of coping with it. This study was carried out in collaboration with the Swiss Diabetes Association. The association accepted the challenging conclusions of this study, engaged with the results, and is currently discussing the priorities and courses of action to be taken.
Keywords: diabetes, discrimination, quality of life, services, stigma
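To make the reported path coefficients concrete, the sketch below estimates a comparable mediation structure (perceived stigma → distress, depression, social support → quality of life) using standardized OLS regressions. This is a simplified stand-in for the structural equation model used in the study; the data file and variable names are hypothetical.

```python
# Minimal path-analysis sketch of the reported mediation structure
# (stigma -> distress/depression/support -> quality of life), using
# standardized OLS regressions as a stand-in for full SEM.
# The data file and variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("diabetes_stigma_survey.csv")
z = (df - df.mean()) / df.std()   # standardize so coefficients are betas

def beta(outcome, predictors):
    X = sm.add_constant(z[predictors])
    return sm.OLS(z[outcome], X).fit().params.drop("const")

# Paths from perceived stigma to the mediators
print(beta("distress", ["stigma"]))        # expected positive (cf. beta = .37)
print(beta("depression", ["stigma"]))      # expected positive (cf. beta = .33)
print(beta("social_support", ["stigma"]))  # expected negative (cf. beta = -.22)

# Paths from the mediators (and stigma) to quality of life
print(beta("quality_of_life", ["distress", "depression", "social_support", "stigma"]))
```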
Procedia PDF Downloads 230
358 Carbapenem Usage in Medical Wards: An Antibiotic Stewardship Feedback Project
Authors: Choon Seong Ng, P. Petrick, C. L. Lau
Abstract:
Background: Carbapenem-resistant isolates have been increasingly reported in recent years. Carbapenem stewardship is designed to optimize carbapenem usage, particularly in medical wards with a high prevalence of carbapenem prescriptions, in order to combat such emerging resistance. Carbapenem stewardship programmes (CSP) can reduce antibiotic use, but the clinical outcomes of such measures need further evaluation. We examined this prospectively using a feedback mechanism. Methods: Our single-center prospective cohort study involved all carbapenem prescriptions across the medical wards (including medical patients admitted to the intensive care unit) in a tertiary university hospital setting. The impact of the stewardship was analysed according to the accepted and rejected groups. The primary endpoint was safety; the safety measure applied in this study was death at 1 month. Secondary endpoints included length of hospitalisation and readmission. Results: Over the 19-month period, input from 144 carbapenem prescriptions was analysed on the basis of acceptance of our CSP recommendations on the use of carbapenems. The recommendations made were as follows: de-escalation of carbapenem; stopping the carbapenem; use for a short duration of 5-7 days; a prolonged duration in cases of carbapenem-sensitive extended-spectrum beta-lactamase bacteremia; dose adjustment; and surgical intervention for removal of septic foci. De-escalation, shortened carbapenem duration and carbapenem cessation comprised 79% of the recommendations. The acceptance rate was 57%. Those who accepted the CSP recommendations had no increase in mortality (p = 0.92), had a shorter length of hospital stay (LOS), and generated cost savings. Infection-related deaths were found to be higher in the rejected group. Moreover, three rejected cases (6%) among all non-indicated cases (n = 50) were found to have developed carbapenem-resistant isolates. Lastly, Pitt’s bacteremia score appeared to be a key element affecting carbapenem prescribing behaviour in this study. Conclusions: A carbapenem stewardship programme in the medical wards not only saves money but, most importantly, is safe and does not harm patients, with the added benefit of reducing the length of hospital stay. However, more time is needed to engage the primary clinical teams through formal clinical presentations and immediate personal feedback by senior Infectious Disease (ID) personnel to increase acceptance.
Keywords: audit and feedback, carbapenem stewardship, medical wards, university hospital
Procedia PDF Downloads 204
357 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks
Authors: Mazarine Roquet, Pierre Dewallef
Abstract:
The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to the poor insulation of the building stock, but certainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures, ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and rely mainly on wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks, operating between 35°C and 15°C, offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to demonstrate, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature, district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. Primary energy consumption and CO2 emissions are compared between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared with respect to the global energy savings. The developed model can be used to assess the potential for energy and CO2 emissions savings when retrofitting an existing network or designing a new one.
Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating
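The order of magnitude of the emission benefit can be sketched without the full Modelica model: the same heat demand is supplied either by gas-fed generation on a high-temperature network or by heat pumps on a near-ambient network. The sketch below uses assumed emission factors, efficiencies, losses and COP, not the campus data.

```python
# Back-of-the-envelope comparison sketch (not the Modelica campus model): CO2
# emissions of supplying the same annual heat demand with a gas-fed
# third-generation network versus a heat-pump-driven fifth-generation network.
# All numbers below are illustrative assumptions, not the paper's data.

HEAT_DEMAND_MWH = 20_000          # annual heat demand of the district (assumed)

# Third generation: gas boilers/CHP, high network temperature, larger losses
GAS_EF = 0.20                     # t CO2 per MWh of natural gas (assumed)
BOILER_EFF = 0.90                 # boiler efficiency (assumed)
LOSS_3G = 0.12                    # relative network heat losses (assumed)

# Fifth generation: decentralised heat pumps on a 15-35 degC network
GRID_EF = 0.15                    # t CO2 per MWh of electricity (assumed)
COP = 4.0                         # seasonal heat-pump COP at low temperature lift (assumed)
LOSS_5G = 0.03                    # much lower losses at near-ambient temperature (assumed)

gas_needed = HEAT_DEMAND_MWH * (1 + LOSS_3G) / BOILER_EFF
co2_3g = gas_needed * GAS_EF

elec_needed = HEAT_DEMAND_MWH * (1 + LOSS_5G) / COP
co2_5g = elec_needed * GRID_EF

print(f"3rd gen: {co2_3g:,.0f} t CO2/yr, 5th gen: {co2_5g:,.0f} t CO2/yr")
print(f"Reduction: {100 * (1 - co2_5g / co2_3g):.0f}%")
```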
Procedia PDF Downloads 83
356 Interplay of Material and Cycle Design in a Vacuum-Temperature Swing Adsorption Process for Biogas Upgrading
Authors: Federico Capra, Emanuele Martelli, Matteo Gazzani, Marco Mazzotti, Maurizio Notaro
Abstract:
Natural gas is a major energy source in the current global economy, contributing roughly 21% of total primary energy consumption. Producing natural gas from renewable energy sources is key to limiting the related CO2 emissions, especially for those sectors that heavily rely on natural gas use. In this context, biomethane produced via biogas upgrading represents a good candidate for partial substitution of fossil natural gas. The upgrading of biogas to biomethane consists of (i) the removal of pollutants and impurities (e.g. H2S, siloxanes, ammonia, water), and (ii) the separation of carbon dioxide from methane. Focusing on the CO2 removal process, several technologies can be considered: chemical or physical absorption with solvents (e.g. water, amines), membranes, and adsorption-based systems (PSA). However, none has emerged as the leading technology, because of (i) the heterogeneity in plant size, (ii) the heterogeneity in biogas composition, which is strongly related to the feedstock type (animal manure, sewage treatment, landfill products), (iii) the case-specific optimal trade-off between purity and recovery of biomethane, and (iv) the destination of the produced biomethane (grid injection, CHP applications, transportation sector). With this contribution, we explore the use of a technology for biogas upgrading and compare the resulting performance with benchmark technologies. The proposed technology makes use of a chemical sorbent, engineered by RSE and consisting of diethanolamine deposited on a solid support made of γ-alumina, to chemically adsorb the CO2 contained in the gas. The material is packed into fixed beds that cyclically undergo adsorption and regeneration steps. CO2 is adsorbed at low temperature and ambient pressure (or slightly above), while regeneration is carried out by pulling vacuum and increasing the temperature of the bed (vacuum-temperature swing adsorption – VTSA). Dynamic adsorption tests were performed by RSE and used to tune the mathematical model of the process, including material and transport parameters (i.e. Langmuir isotherm data and heat and mass transport). Based on this set of data, an optimal VTSA cycle was designed. The results enabled a better understanding of the interplay between material and cycle tuning. As an exemplary application, the upgrading of biogas for grid injection, produced by an anaerobic digester (60-70% CO2, 30-40% CH4), for an equivalent size of 1 MWel was selected. A plant configuration is proposed to maximize heat recovery and minimize the energy consumption of the process. The resulting performance is very promising compared to benchmark solutions, which makes the VTSA configuration a valuable alternative for biomethane production from biogas.
Keywords: biogas upgrading, biogas upgrading energetic cost, CO2 adsorption, VTSA process modelling
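The cyclic working capacity that drives such a VTSA design can be sketched from a temperature-dependent Langmuir isotherm, as in the minimal example below: the CO2 loading at adsorption conditions minus the residual loading after vacuum regeneration at higher temperature. All isotherm parameters and operating conditions are illustrative assumptions, not the RSE sorbent data.

```python
# Sketch of how Langmuir isotherm parameters translate into a VTSA working
# capacity: CO2 loading at adsorption conditions minus loading after
# regeneration under vacuum at higher temperature. Parameter values are
# illustrative assumptions, not the RSE sorbent data.
import math

R = 8.314            # J/(mol K)
Q_MAX = 3.0          # mol CO2 per kg sorbent at saturation (assumed)
B0 = 1.0e-12         # 1/Pa, pre-exponential affinity constant (assumed)
DH_ADS = -60_000.0   # J/mol, heat of adsorption (exothermic, assumed)

def langmuir_loading(p_co2_pa: float, temp_k: float) -> float:
    """Equilibrium CO2 loading [mol/kg] from a temperature-dependent Langmuir isotherm."""
    b = B0 * math.exp(-DH_ADS / (R * temp_k))   # affinity drops as temperature rises
    return Q_MAX * b * p_co2_pa / (1.0 + b * p_co2_pa)

# Adsorption: ambient pressure, ~65% CO2 in the feed, 30 degC
q_ads = langmuir_loading(p_co2_pa=0.65e5, temp_k=303.15)
# Regeneration: vacuum (~100 mbar CO2 partial pressure), 120 degC
q_des = langmuir_loading(p_co2_pa=0.10e5, temp_k=393.15)

print(f"loading at adsorption  : {q_ads:.2f} mol/kg")
print(f"loading at regeneration: {q_des:.2f} mol/kg")
print(f"cyclic working capacity: {q_ads - q_des:.2f} mol/kg")
```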
Procedia PDF Downloads 277
355 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multi-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared to a dark current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V. The responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. The disk geometry strongly confines circulating light in an ultra-small volume in the form of whispering gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared to microrings, the microdisk removes the inner boundary to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low-power (≈ fJ) athermal Si modulator has been demonstrated with a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers in a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on microdisks with radii of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser applications or other passive optical components, the microdisk must be designed very carefully to excite the fundamental mode, since a microdisk usually supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue because the local light absorption is mode-insensitive: light power carried by all modes is expected to be converted into photocurrent. Another benefit of using a microdisk is that the power circulating inside avoids the need to introduce a reflector. A complete simulation model, with all involved materials taken into account, is established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. In view of the current preliminary data, directions to further improve the device performance are also discussed.
Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors
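For orientation, the reported responsivity can be translated into an external quantum efficiency via the standard relation η = R·h·c/(q·λ). The short check below assumes an operating wavelength of 1550 nm, which is typical for Ge-on-Si detectors but is not stated in the abstract.

```python
# Quick check sketch: converting the reported responsivity (~0.6 A/W) into an
# external quantum efficiency. The operating wavelength (1550 nm) is an
# assumption typical for Ge-on-Si detectors, not stated in the abstract.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
Q = 1.602e-19      # elementary charge, C

def quantum_efficiency(responsivity_a_per_w: float, wavelength_m: float) -> float:
    """External quantum efficiency from responsivity: eta = R * h * c / (q * lambda)."""
    return responsivity_a_per_w * H * C / (Q * wavelength_m)

eta = quantum_efficiency(0.6, 1550e-9)
print(f"External quantum efficiency at 1550 nm: {eta:.0%}")   # roughly 48%
```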
Procedia PDF Downloads 407
354 Cognitive Translation and Conceptual Wine Tasting Metaphors: A Corpus-Based Research
Authors: Christine Demaecker
Abstract:
Many researchers have underlined the importance of metaphors in specialised language. Their use in specific domains helps us understand the conceptualisations used to communicate new ideas or difficult topics. Within the wide area of specialised discourse, wine tasting is a very specific example because it is almost exclusively metaphoric. Wine tasting metaphors express various conceptualisations. They are not linguistic but rather conceptual, as defined by Lakoff & Johnson. They correspond to the linguistic expression of a mental projection from a well-known or more concrete source domain onto the target domain, which is the taste of wine. But unlike most specialised terminologies, the vocabulary is never clearly defined. When metaphorical terms are listed in dictionaries, their definitions remain vague, unclear, and circular. They cannot be replaced by literal linguistic expressions. This makes it impossible to transfer them into another language with traditional linguistic translation methods. This qualitative research investigates whether wine tasting metaphors could instead be translated with the cognitive translation process, as described by Nili Mandelblit (1995). The research is based on a corpus compiled from two high-profile wine guides: Parker’s Wine Buyer’s Guide and its translation into French, and the Guide Hachette des Vins and its translation into English. In this small corpus, with a total of 68,826 words, 170 metaphoric expressions have been identified in the original English text and 180 in the original French text. They were selected with the MIPVU metaphor identification procedure developed at the Vrije Universiteit Amsterdam. The selection demonstrates that both languages use the same set of conceptualisations, which are often combined in wine tasting notes, creating conceptual integrations or blends. The comparison of expressions in the source and target texts also demonstrates the use of the cognitive translation approach. In accordance with the principle of relevance, the translation always uses target-language conceptualisations, but compared to the original, the highlighting of the projection is often different. Also, when original metaphors are complex, with a combination of conceptualisations, at least one element of the original metaphor underlies the target expression. This approach integrates perfectly into Lederer’s interpretative model of translation (2006). In this triangular model, the transfer of conceptualisation could be included at the level of ‘deverbalisation/reverbalisation’, the crucial stage of the model, where the extraction of meaning combines with the encyclopedic background to generate the target text.
Keywords: cognitive translation, conceptual integration, conceptual metaphor, interpretative model of translation, wine tasting metaphor
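As a small illustration of the corpus bookkeeping involved in such a study (not the MIPVU procedure itself, which is applied manually), the sketch below tallies annotated metaphorical expressions by language and conceptual source domain and reports their density per 1,000 words; the annotation file and its columns are hypothetical.

```python
# Illustrative tallying sketch for an annotated metaphor corpus: count
# MIPVU-flagged expressions per (language, source domain) and compute their
# density per 1,000 words. The CSV format and column names are hypothetical.
import csv
from collections import Counter

TOTAL_WORDS = 68_826   # corpus size reported in the study

domains = Counter()
with open("wine_metaphor_annotations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        domains[(row["language"], row["source_domain"])] += 1

for (lang, domain), n in domains.most_common():
    print(f"{lang:7s} {domain:20s} {n:4d} ({1000 * n / TOTAL_WORDS:.2f} per 1,000 words)")
```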
Procedia PDF Downloads 131
353 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA
Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray
Abstract:
Background: There remains a lack of international standard control reference materials for next-generation sequencing (NGS)-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, removing the need to compare samples against pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analytics using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between MLPA and next-generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next-generation sequencing-based copy number variation analysis of samples spiked with iQRS™ dsDNA biomimetics was performed using the proprietary CNVSUITE software. MLPA analyses were performed on an ABI-3130 sequencer and analysed with Coffalyser software. Results: Concordance of BRCA copy number variation events between MLPA and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS-CNVSUITE™ for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between MLPA and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity down to 1% (±2.5%) and a range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based on a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks. BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189-accredited, and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (iQRS™ dsDNA mimetic) in next-generation sequencing offers a more robust, sample-independent approach to the assessment of copy number variation events compared to MLPA. The approach simplifies data analyses, improves independent sample data analyses, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single-patient sample assessment.
Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration
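The concordance figures above follow from the standard confusion-matrix definitions of sensitivity, specificity and the predictive values; a minimal sketch with placeholder call counts (not the study's data) is shown below.

```python
# Minimal sketch of the concordance metrics reported above (sensitivity,
# specificity, PPV, NPV), computed from a confusion matrix of CNV calls where
# MLPA serves as the reference method. The counts are placeholders.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic performance metrics for a binary comparison."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical per-amplicon call counts versus MLPA
metrics = diagnostic_metrics(tp=850, fp=1, tn=2400, fn=1)
for name, value in metrics.items():
    print(f"{name}: {value:.2%}")
```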
Procedia PDF Downloads 66
352 Prospects of Iraq’s Maritime Openness and Their Effect on Its Economy
Authors: Mohanad Hammad
Abstract:
Port institutions serve as a link connecting the land areas that receive goods and the areas from which ships sail. These areas hold great significance for the conversion of goods into commodities of economic value, capable of meeting the needs of society. The development of ports constitutes a fundamental component of the comprehensive economic development process. Recognizing this fact, developing countries have always resorted to this infrastructural element to resolve the numerous problems they face, given its contribution to the reform of their economic conditions. Iraqi ports have played a major role in boosting commercial activity in Iraq, as they are the starting point of its oil exports and a key constituent in fulfilling the consumer and production needs of the various economic sectors of Iraq. With the Gulf wars and the economic blockade, Iraqi ports have continued to deteriorate and have become unable to perform their functions as first-generation ports, prompting Iraq to use the ports of neighboring countries, such as Jordan's Aqaba commercial port. Meanwhile, Iraqi ports face strong competition from the ports of neighboring countries, which have achieved progress and advancement in contrast to the declining performance and efficiency of Iraqi ports. The major developments in Iraq's economic conditions place a heavy burden on Iraqi maritime transport and ports, which require development in order to meet the challenges arising from fierce international and regional competition in the markets. Therefore, it is necessary to find appropriate solutions in support of the role that Iraqi ports can play in serving Iraq's seaborne foreign trade and in keeping up with the development of foreign trade. Thus, this research aims at examining the current situation of the Iraqi ports and their commercial activity and studying the problems and obstacles they face. The research also studies the future prospects of these ports, the potential for maritime openness for Iraq amid the fierce competition of neighboring ports, and the possibility of enhancing the competitiveness of Iraqi ports. Among the results produced by this research is the future scenario it proposes for Iraqi ports, mainly represented by the establishment of Al-Faw Port, which will contribute to a greater openness of maritime transport in Iraq, and by the rehabilitation and expansion of existing ports. This research seeks to develop solutions for Iraqi ports so that they can be repositioned as a vital means of promoting economic development.
Keywords: maritime transport, port, future prospects, regional integration
Procedia PDF Downloads 147
351 Language in Court: Ideology, Power and Cognition
Authors: Mehdi Damaliamiri
Abstract:
Undoubtedly, the power of language is hardly a new topic; indeed, the persuasive power of language accompanied by ideology has long been recognized in different aspects of life. The two-and-a-half-thousand-year-old Bisitun inscriptions in Iran, proclaiming the victories of the Persian king Darius, are considered by some historians to have been an early example of the use of propaganda. Added to this, the modern age is the true cradle of fully-fledged ideologies and of the ongoing process of centrifugal ideologization. The most visible work on ideology today within the field of linguistics is “Critical Discourse Analysis” (CDA). The focus of CDA is on “uncovering injustice, inequality, taking sides with the powerless and suppressed” and making “mechanisms of manipulation, discrimination, demagogy, and propaganda explicit and transparent.” One possible way of relating language to ideology is to propose that ideology and language are inextricably intertwined. From this perspective, language is always ideological, and ideology depends on language. All language use involves ideology, and so ideology is ubiquitous – in our everyday encounters as much as in the struggle for power within and between nation-states and social statuses. At the same time, ideology requires language. Its key characteristics – its power and pervasiveness, its mechanisms for continuity and for change – all come out of the inner organization of language. The two phenomena are homologous: they share the same evolutionary trajectory. To get a more robust portrait of power and ideology, we need to examine their potential place in the structure and consider how such structures pattern in terms of the functional elements which organize meanings in the clause. This is based on the belief, which has become immensely popular, that all grammatical (including syntactic) knowledge is stored mentally as constructions. When the structure of the clause is taken into account, power and ideology show a preference for the Complement over the Subject and Adjunct. The Subject is a central interpersonal element in discourse: it is one of two elements that form the central interactive nub of a proposition. Conceptually, there are countless ways of construing a given event; linguistically, a variety of grammatical devices are usually available as alternative means of coding a given conception, such as political crime and corruption. In the theory of construal, then, which, like transitivity in Halliday, makes options available, Cognitive Linguistics can offer a cognitive account of ideology in language, where ideology is made possible by the choices a language allows for representing the same material situation in different ways. The possibility of promoting alternative construals of the same reality means that any particular choice in representation is always ideologically constrained or motivated and indicates the perspective and interests of the text-producer.
Keywords: power, ideology, court, discourse
Procedia PDF Downloads 163
350 Petrology and Petrochemistry of Basement Rocks in Ila Orangun Area, Southwestern Nigeria
Authors: Jayeola A. O., Ayodele O. S., Olususi J. I.
Abstract:
From field studies, six (6) lithological units were identified as common in the study area: quartzites, granites, granite gneiss, porphyritic granites, amphibolites and pegmatites. Petrographic analysis was carried out to establish the major mineral assemblages and accessory minerals present in selected rock samples, which represent the major rock types in the area. For the purpose of this study, twenty (20) pulverized rock samples were taken to the laboratory for geochemical analysis, with the results used for classification as well as to suggest the geochemical attributes of the rocks. Petrographic studies of the rocks under both plane- and cross-polarized light revealed the major minerals in thin section to include quartz, feldspar, biotite, hornblende, plagioclase and muscovite, together with opaque and other accessory minerals including actinolite, spinel and myrmekite. The geochemical results, interpreted using various geochemical and discrimination plots, classified the rocks in the area as belonging to both the peralkaline-metaluminous and peraluminous types. Results for the major oxide ratios Na₂O/K₂O, Al₂O₃/(Na₂O + CaO + K₂O) and (Na₂O + CaO + K₂O)/Al₂O₃ show an excess of alumina (Al₂O₃) over the alkalis (Na₂O + CaO + K₂O), suggesting peraluminous rocks, while an excess of the alkalis over alumina suggests the peralkaline-metaluminous type. The correlation coefficient results show strong positive correlations, indicating that the elements concerned derive from the same geogenic sources, while negative correlation coefficient values indicate weak negative correlations, suggesting heterogeneous geogenic sources. From factor analysis, five component groups were identified. Group I consists of an Ag-Cr-Ni elemental association, suggesting Ag, Cr and Ni mineralization and pointing to possible sulphide mineralization in the study area. Groups II and III consist of an As-Ni-Hg-Fe-Sn-Co-Pb element association, these being pathfinder elements for gold mineralization. Groups IV and V consist of a Cd-Cu-Ag-Co-Zn association, whose concentrations are significant for elemental associations and mineralization. In conclusion, the potassium radiometric anomaly map produced shows that the eastern section (northeastern and southeastern) is the hot spot and mineralization zone of the study area.
Keywords: petrography, Ila Orangun, petrochemistry, pegmatites, peraluminous
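The alumina-versus-alkali comparison underlying this classification is commonly expressed through the molar Shand indices A/CNK = Al₂O₃/(CaO + Na₂O + K₂O) and A/NK = Al₂O₃/(Na₂O + K₂O). The short sketch below computes them for a hypothetical whole-rock analysis; the sample values are illustrative, not the study's data.

```python
# Hedged sketch of the alumina-saturation (Shand) indices behind an
# oxide-ratio classification: molar Al2O3/(CaO+Na2O+K2O) and Al2O3/(Na2O+K2O).
# The sample values are illustrative, not the study's data.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def shand_classification(wt_pct: dict) -> str:
    """Classify a whole-rock analysis (oxide wt%) as peraluminous, metaluminous or peralkaline."""
    mol = {ox: wt_pct[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    a_cnk = mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])
    a_nk = mol["Al2O3"] / (mol["Na2O"] + mol["K2O"])
    if a_nk < 1.0:
        label = "peralkaline"
    elif a_cnk > 1.0:
        label = "peraluminous"
    else:
        label = "metaluminous"
    return f"A/CNK={a_cnk:.2f}, A/NK={a_nk:.2f} -> {label}"

# Hypothetical granite analysis (oxide wt%)
print(shand_classification({"Al2O3": 14.2, "CaO": 1.8, "Na2O": 3.5, "K2O": 4.6}))
```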
Procedia PDF Downloads 63
349 Polyphenol-Rich Aronia Melanocarpa Juice Consumption and LINE-1 DNA Methylation in a Cohort at Cardiovascular Risk
Authors: Ljiljana Stojković, Manja Zec, Maja Zivkovic, Maja Bundalo, Marija Glibetić, Dragan Alavantić, Aleksandra Stankovic
Abstract:
Cardiovascular disease (CVD) is associated with alterations in DNA methylation, the latter modulated by dietary polyphenols. The present pilot study (part of the original clinical study registered as NCT02800967 at www.clinicaltrials.gov) aimed to investigate the impact of 4-week daily consumption of polyphenol-rich Aronia melanocarpa juice on Long Interspersed Nuclear Element-1 (LINE-1) methylation in peripheral blood leukocytes in subjects (n=34, aged 41.1±6.6 years) at moderate CVD risk, including an increased body mass index, central obesity, high-normal blood pressure and/or dyslipidemia. The goal was also to examine whether factors known to affect DNA methylation, such as folate intake levels, the MTHFR C677T gene variant, and anthropometric and metabolic parameters, modulated LINE-1 methylation levels upon consumption of polyphenol-rich Aronia juice. The experimental analysis of LINE-1 methylation was done by the MethyLight method. MTHFR C677T genotypes were determined by the polymerase chain reaction-restriction fragment length polymorphism method. Folate intake was assessed by processing the data from the food frequency questionnaire and repeated 24-hour dietary recalls. Serum lipid profile was determined using Roche Diagnostics kits. The statistical analyses were performed using the Statistica software package. In women, after vs. before the treatment period, a significant decrease in LINE-1 methylation levels was observed (97.54±1.50% vs. 98.39±0.86%, respectively; P=0.01). The change (after vs. before treatment) in LINE-1 methylation correlated directly with MTHFR 677T allele presence, average daily folate intake and the change in serum low-density lipoprotein cholesterol, and inversely with the change in serum triacylglycerols (R=0.72, R²=0.52, adjusted R²=0.36, P=0.03). The current results imply potential cardioprotective effects of habitual polyphenol-rich Aronia juice consumption, achieved through modifications of the DNA methylation pattern in subjects at CVD risk, which should be further confirmed. Hence, precision nutrition-driven modulations of DNA methylation may become targets for new approaches in the prevention and treatment of CVD.
Keywords: Aronia melanocarpa, cardiovascular risk, LINE-1, methylation, peripheral blood leukocytes, polyphenol
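The R, R² and adjusted R² values quoted above are typical outputs of a multiple regression of the methylation change on the listed predictors. The sketch below shows a comparable model specification; the data file and column names are hypothetical placeholders, not the study's dataset.

```python
# Hedged sketch of the kind of multiple regression behind the reported
# R/R^2/adjusted R^2: change in LINE-1 methylation modelled on MTHFR 677T
# carriage, folate intake and changes in lipid parameters. Column names and
# the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aronia_cohort.csv")   # one row per participant

model = smf.ols(
    "delta_line1_methylation ~ mthfr_677t_carrier + folate_intake"
    " + delta_ldl_cholesterol + delta_triacylglycerols",
    data=df,
).fit()

print(model.summary())                  # coefficients, R^2, adjusted R^2, p-values
```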
Procedia PDF Downloads 195
348 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model
Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski
Abstract:
Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide, with some higher education institutions excelling in these practices, many others that could be qualified as intermediate, and some at the very beginning of their knowledge transfer journey. The latter face the immediate need to formally create a technology transfer unit and to draw up its roadmap. The complexity of this operation is due to various aspects that need to be aligned and coordinated, including a major change in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents five case studies of higher education institutions located in the Western Balkans – two in Albania, two in Bosnia and Herzegovina, and one in Montenegro – fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer unit. The empirical evidence was developed in a pan-European project, illustratively called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving the more experienced and mature technology transfer models of European partners located in Spain, Finland, and Austria, a series of initial lessons learned is already available. The findings show that each unit developed its own tailor-made approach to engage with internal and external stakeholders and to offer value to academic staff, students, and business partners. The latest technology underpinning KnowHub services and institutional commitment are found to be key success factors. Although specific strategies and plans differ, they build on a jointly developed general strategy and on common tools and methods of strategic planning and business modelling. The main output consists of good practices for the design, piloting, and initial operation of units aiming to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on key aspects considered vital for initial operations. The value of this contribution is its focus on the intersection of three perspectives (service orientation, organisational innovation, business model), since previous research has relied on a single topic or dual approaches, most frequently in the business context and less frequently in higher education.
Keywords: business model, capacity building, entrepreneurial education, knowledge transfer
Procedia PDF Downloads 141