Search results for: variable renewable energy sources

1874 Fractional, Component and Morphological Composition of Ambient Air Dust in the Areas of Mining Industry

Authors: S.V. Kleyn, S.Yu. Zagorodnov, A.A. Kokoulina

Abstract:

Technogenic emissions of mining and processing complexes are characterized by a high content of chemical components and solid dust particles. However, each industrial enterprise and its surrounding area have features that require refinement and parameterization. Numerous studies have shown the negative impact of fine dust (PM10 and PM2.5) on health, as well as the possibility of dust particles absorbing toxic components, including heavy metals. The target of the study was the quantitative assessment of the fractional and particle size composition of ambient air dust in the area affected by a primary magnesium production complex. We also describe the morphological features of the dust particles. Study methods. To identify the dust emission sources, an analysis of the production process was carried out. The particle size composition of the emissions was measured using a Microtrac S3500 laser particle analyzer (covering a particle size range of 20 nm to 2000 µm). Particle morphology and component composition were established by electron microscopy using a high-resolution Hitachi S3400N scanning microscope (magnification 5 to 300,000 times) equipped with an X-ray fluorescence device. The chemical composition was identified by X-ray analysis of the samples using a Shimadzu XRD-700 diffractometer. The dust pollution level was determined using model calculations of emission dispersion in the atmosphere, verified by instrumental measurements. Results of the study. The results demonstrated that the dust emissions of different technological processes are heterogeneous and their fractional structure is complex. The percentage of particle sizes up to 2.5 micrometres inclusive ranged from 0.00 to 56.70%; particle sizes up to 10 microns inclusive, from 0.00 to 85.60%; and particle sizes greater than 10 microns, from 14.40% to 100.00%. Microscopy also detected the presence of nanoscale particles. The studied dust particles have round, irregular, cubic and integral shapes. The composition of the dust includes magnesium, sodium, potassium, calcium, iron and chlorine. On the basis of the obtained results, model calculations of dust emission dispersion were performed and the areas of fine dust PM10 and PM2.5 distribution were established. It was found that the fine fractions PM10 and PM2.5 are dispersed over large distances and beyond the boundary of the industrial site of the enterprise. The population living near the enterprise is therefore exposed to the risk of diseases associated with dust exposure. The data have been transferred to the economic entity to support decisions on measures to minimize the risks. Exposure and health risk indicators are used to provide individual health and preventive care to citizens living in the area of negative impact of the facility.

Keywords: dust emissions, exposure assessment, PM10, PM2.5

Procedia PDF Downloads 251
1873 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study

Authors: J. M. Geyser

Abstract:

Lamb is a popular meat throughout the world, particularly in Europe, the Middle East and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health and dwindling profit margins. This has stimulated an increasing demand for organic lamb, as it is perceived to increase environmental sustainability and to offer superior quality, taste and nutritional value; it also appeals to farmers, including small-scale and female farmers, as it often commands a premium price. Despite its advantages, organic lamb production presents challenges, a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates and marketing costs. These costs impact the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles, such as limited access to resources and markets. Therefore, this paper examines the cost of producing organic lambs and its impact on female farmers and raises the research question: “Is organic lamb production the saving grace for female and small-scale farmers?” Objectives include estimating and comparing the production costs and profitability of organic lamb production with those of conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. Qualitative data came from semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa. The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources and empowerment indicators. Quantitative data drew on secondary published information and primary data from a female farmer. The research findings indicate that when a female farmer moves from conventional to organic lamb production, the costs in the first year of organic production exceed those of conventional production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start decreasing in the second year as stocking rates increase due to manure applications on grazing land and mortality rates fall due to better worm resistance in the flock. In conclusion, this article sheds light on the economic dynamics of organic lamb production, particularly focusing on its impact on female farmers. To empower female farmers and to promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production.

Keywords: cost analysis, empowerment, female farmers, organic lamb production

Procedia PDF Downloads 72
1872 Reinforcing the Nagoya Protocol through a Coherent Global Intellectual Property Framework: Effective Protection for Traditional Knowledge Associated with Genetic Resources in Biodiverse African States

Authors: Oluwatobiloba Moody

Abstract:

On October 12, 2014, the Nagoya Protocol, negotiated by Parties to the Convention on Biological Diversity (CBD), entered into force. The Protocol was negotiated to implement the third objective of the CBD, which relates to the fair and equitable sharing of benefits arising from the utilization of genetic resources (GRs). The Protocol aims to ‘protect’ GRs and traditional knowledge (TK) associated with GRs from ‘biopiracy’ through the establishment of a binding international regime on access and benefit sharing (ABS). In reflecting on the question of ‘effectiveness’ in the Protocol’s implementation, this paper argues that the underlying problem of ‘biopiracy’, which the Protocol seeks to address, is one which goes beyond the ABS regime: it thrives due to factors emanating from the global intellectual property (IP) regime. It contends that biopiracy therefore constitutes an international problem of ‘borders’ as much as of ‘regimes’ and that, while the implementation of the Protocol may effectively address the trans-border issues which have hitherto troubled African provider countries in establishing regulatory mechanisms, it remains unable to address the trans-regime issues related to the eradication of biopiracy, especially those issues which involve the IP regime. This is due to the glaring incoherence between the Nagoya Protocol’s implementation and the existing global IP system. In arriving at conclusions, the paper examines the ongoing related discussions within the IP regime, specifically those within the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) and the WTO TRIPS Council. It concludes that the Protocol’s effectiveness in protecting TK associated with GRs is conditional on the attainment of outcomes, within the ongoing negotiations of the IP regime, which could be implemented in a coherent manner with the Nagoya Protocol. It proposes specific ways to achieve this coherence. Three main methodological steps have been incorporated in the paper’s development. First, a review of data accumulated over a two-year period arising from the coordination of six negotiating sessions of the WIPO IGC; in this respect, the research benefits from reflections on the political, institutional and substantive nuances which have coloured the IP negotiations and which provide both the context and subtext to emerging texts. Second, a desktop review of the history, nature and significance of the Nagoya Protocol, using relevant primary and secondary literature from international and national sources. Third, a comparative analysis of selected biopiracy cases, undertaken for the purpose of establishing the inseparability of the IP regime and the ABS regime in the conceptualization and development of solutions to biopiracy. A comparative analysis of selected African regulatory mechanisms for the protection of TK (Kenya, South Africa, Ethiopia and the ARIPO Swakopmund Protocol) is also undertaken.

Keywords: biopiracy, intellectual property, Nagoya protocol, traditional knowledge

Procedia PDF Downloads 427
1871 Electromyographic Analysis of Biceps Brachii during Golf Swing and Review of Its Impact on Return to Play Following Tendon Surgery

Authors: Amin Masoumiganjgah, Luke Salmon, Julianne Burnton, Fahimeh Bagheri, Gavin Lenton, S. L. Ezekial Tan

Abstract:

Introduction: The incidence of proximal biceps tenodesis and acute distal biceps repair is increasing, and rehabilitation protocols following both are variable. Golf is a popular sport within Australia, and the Gold Coast has become a mecca for golfers, with more courses per capita than anywhere else in the world. Currently, there are no clear guidelines regarding return to golf following biceps procedures. The aim of this study was to determine biceps brachii activation during the golf swing through electromyographic analysis and, subsequently, to aid rehabilitation guidelines and return to golf following tenodesis and repair. Methods: Subjects were amateur golfers with no previous upper limb surgery. Surface electromyography (EMG) and high-speed video recording were used to analyse activation of the left and right biceps brachii and the anterior deltoid during the golf swing. Each participant's maximum voluntary contraction (MVC) was recorded, and they were then required to hit golf balls aiming for specific distances of 2, 50, 100 and 150 metres at a driving range. Noraxon myoResearch and Matlab were used for data analysis. Mean % MVC was calculated for the leading and trailing arms during the full swing and its 4 phases: back-swing, acceleration, early follow-through and late follow-through. Results: 12 golfers (2 female and 10 male) participated in the study. Median age was 27 (25-38), and all were right-handed. Over all distances, the mean activation of the short and long heads of biceps brachii was < 10% through the full swing. When breaking down the 50, 100 and 150 m swings into phases, mean MVC activation was lowest in backswing (5.1%), followed by acceleration (9.7%), early follow-through (9.2%), and late follow-through (21.4%). There was more variation and slightly higher activation in the right biceps (trailing arm) in backswing, acceleration, and early follow-through, with higher activation in the leading arm in late follow-through (25.4% leading, 17.3% trailing). 2 m putts resulted in low MVC values (3.1%) with little variation across swing phases. There was considerable individual variation in results: one tense subject averaged 11.0% biceps MVC through the 2 m putting stroke, and others recorded peak mean MVC biceps activations of 68.9% at 50 m, 101.3% at 100 m, and 111.3% at 150 m. Discussion: Previous studies have investigated the role of rotator cuff, spine, and hip muscles during the golf swing; however, to our knowledge, this is the first study that investigates the activation of biceps brachii. Many rehabilitation programs following a biceps tenodesis or repair allow active range against gravity and restrict strengthening exercises until 6 weeks, and this does not appear to be associated with any adverse outcome. Previous studies demonstrate that a range of < 10% MVC is similar to the unloaded biceps brachii during walking (1), that active elbow flexion with the hand positioned either in pronation or supination produces MVC < 20% throughout range (2), and that elbow flexion with a 4 kg dumbbell can produce mean MVCs of around 40% (3). Our study demonstrates that increasing activation is associated with the leading arm, increasing shot distance and the late follow-through phase. Although the cohort mean MVC of the biceps brachii is < 10% through the full swing, variability is high, and biceps activation reaches peak mean MVCs of over 100% in different swing phases for some individuals.
Given these EMG values, caution is advised when counselling patients after biceps procedures about returning to long-distance golf shots, particularly with the leading arm. Even though putting would appear to be as safe as having an unloaded hand out of a sling following biceps procedures, the variability of activation patterns across different golfers leads us to caution against accelerated golf rehabilitation in particularly tense golfers. The 50 m short iron shot was too long to be considered a chip shot, and more work is needed in this area to determine the safety of chipping.
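
As a rough illustration of the normalisation described above, the sketch below computes mean %MVC per swing phase from an EMG envelope; the sampling rate, phase boundaries and the synthetic signal are assumptions for illustration, not the Noraxon/Matlab pipeline used in the study.

# Minimal sketch (not the authors' pipeline): normalise a rectified EMG
# envelope to %MVC and average it over manually marked swing phases.
# Sample rate, phase boundaries and the synthetic signal are assumptions.
import numpy as np

fs = 1000                                            # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)                        # a 2 s swing
emg_envelope = np.abs(np.sin(8 * np.pi * t)) * 0.3   # stand-in envelope, mV
mvc_envelope_peak = 1.2                              # mV, from the MVC trial (assumed)

percent_mvc = 100 * emg_envelope / mvc_envelope_peak

# Assumed phase boundaries in seconds: backswing, acceleration,
# early follow-through, late follow-through.
phases = {"backswing": (0.0, 0.9),
          "acceleration": (0.9, 1.1),
          "early follow-through": (1.1, 1.4),
          "late follow-through": (1.4, 2.0)}

for name, (t0, t1) in phases.items():
    mask = (t >= t0) & (t < t1)
    print(f"{name}: mean {percent_mvc[mask].mean():.1f} %MVC")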

Keywords: electromyographic analysis, biceps brachii rupture, golf swing, tendon surgery

Procedia PDF Downloads 73
1870 Decontamination of Chromium Containing Ground Water by Adsorption Using Chemically Modified Activated Carbon Fabric

Authors: J. R. Mudakavi, K. Puttanna

Abstract:

Chromium in the environment is considered one of the most toxic elements, probably next only to mercury and arsenic. It is acutely toxic, mutagenic and carcinogenic in the environment. Chromium contamination of soil and underground water due to industrial activities is a very serious problem in several parts of India, covering Karnataka, Tamil Nadu, Andhra Pradesh, etc. Functionally modified activated carbon fabrics (ACF) offer targeted chromium removal from drinking water and industrial effluents. Activated carbon fabric is a lightweight adsorbing material with a high surface area and low resistance to fluid flow. We have investigated surface modification of ACF using various acids in the laboratory through batch as well as continuous flow column experiments, with a view to developing the optimum conditions for chromium removal. Among the various acids investigated, phosphoric acid modified ACF gave the best results, with a removal efficiency of 95% under optimum conditions. Optimum pH was around 2-4 with 2 hours of contact time. Continuous column experiments with an effective bed contact time (EBCT) of 5 minutes indicated that breakthrough occurred after 300 bed volumes. Adsorption data followed a Freundlich isotherm pattern. Nickel adsorbs preferentially, and sulphate reduces chromium adsorption by 50%. The ACF could be regenerated up to 52.3% using 3 M NaOH under optimal conditions. The process is simple, economical, energy efficient and applicable to industrial effluents and drinking water.
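
For context, the Freundlich isotherm mentioned above relates the adsorbed amount q_e to the equilibrium concentration C_e as q_e = K_F C_e^(1/n), and is commonly fitted on the linearised form log q_e = log K_F + (1/n) log C_e; a minimal sketch with invented data points (not the authors' measurements) is given below.

# Minimal sketch: fit a Freundlich isotherm q_e = K_F * C_e**(1/n)
# by linear regression on the log-log form. The data points are
# illustrative, not the measurements reported in the abstract.
import numpy as np

c_e = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # mg/L, equilibrium Cr(VI)
q_e = np.array([2.1, 3.0, 4.2, 6.6, 9.1])    # mg/g, adsorbed amount

slope, intercept = np.polyfit(np.log10(c_e), np.log10(q_e), 1)
k_f = 10 ** intercept        # Freundlich capacity constant
n = 1 / slope                # Freundlich intensity parameter

print(f"K_F = {k_f:.2f} (mg/g)(L/mg)^(1/n), n = {n:.2f}")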

Keywords: activated carbon fabric, hexavalent chromium, adsorption, drinking water

Procedia PDF Downloads 333
1869 Knowledge Creation Environment in the Iranian Universities: A Case Study

Authors: Mahdi Shaghaghi, Amir Ghaebi, Fariba Ahmadi

Abstract:

Purpose: The main purpose of the present research is to analyze the knowledge creation environment at an Iranian university (Alzahra University), as a typical university in Iran, using a combination of the i-System and Ba models. This study is necessary for understanding the determinants of knowledge creation at such a university. Methodology: To carry out the present research, which is an applied study in terms of purpose, a descriptive survey method was used. A combination of the i-System and Ba models was used to analyze the knowledge creation environment at Alzahra University. The i-System consists of 5 constructs: intervention (input), intelligence (process), involvement (process), imagination (process), and integration (output). The Ba environment has three pillars, namely the infrastructure, the agent, and the information. The integration of these two models resulted in 11 constructs, as follows: intervention (input); infrastructure-intelligence, agent-intelligence, information-intelligence (process); infrastructure-involvement, agent-involvement, information-involvement (process); infrastructure-imagination, agent-imagination, information-imagination (process); and integration (output). These 11 constructs were incorporated into a 52-statement questionnaire, and the validity and reliability of the questionnaire were examined and confirmed. The statistical population comprised the faculty members of Alzahra University (344 people), from whom 181 participants were selected through stratified random sampling. Descriptive statistics, the binomial test, regression analysis, and structural equation modeling (SEM) were utilized to analyze the data. Findings: The research findings indicated that among the 11 research constructs, the levels of the intervention, information-intelligence, infrastructure-involvement, and agent-imagination constructs were average and not acceptable. The levels of the infrastructure-intelligence and information-imagination constructs ranged from average to low. The levels of the agent-intelligence and information-involvement constructs were exactly average. The level of the infrastructure-imagination construct was average to high and thus considered acceptable. The levels of the agent-involvement and integration constructs were above average and in a highly acceptable condition. Furthermore, the regression analysis results indicated that only two constructs, viz. information-imagination and agent-involvement, positively and significantly correlate with the integration construct. The results of the structural equation modeling also revealed that the intervention, intelligence, and involvement constructs are related to the integration construct with the complete mediation of imagination. Discussion and conclusion: The present research suggests that knowledge creation at Alzahra University relatively complies with the combination of the i-System and Ba models. Unlike in this model, the intervention, intelligence, and involvement constructs are not directly related to the integration construct, which seems to have three implications: 1) information sources are not frequently used to assess and identify research biases; 2) problem finding is probably of less concern at the end of studies and at the time of assessment and validation; 3) the involvement of others has a smaller role in the summarization, assessment, and validation of the research.

Keywords: i-System, Ba model, knowledge creation, knowledge management, knowledge creation environment, Iranian universities

Procedia PDF Downloads 98
1868 Gender Gap in Returns to Social Entrepreneurship

Authors: Saul Estrin, Ute Stephan, Suncica Vujic

Abstract:

Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This opens the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, there is little or no gender pay gap. If the gender gap in pay persists also at the top of social enterprises, what are the factors which might explain these differences? Methodology: The Oaxaca-Blinder decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the ‘explained’ part due to differences in labour market characteristics (education, work experience, tenure, etc.) and the ‘unexplained’ part due to differences in the returns to those characteristics. The latter part is often interpreted as ‘discrimination’. There are two issues with this approach. (i) In many countries there is a notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains ‘unexplained’. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient’s ‘robustness’ or to account for the ‘effects’ on this coefficient of adding covariates, can be problematic due to sequence-sensitivity when the added covariates are correlated. Gelbach’s decomposition (GD) addresses the latter by using the omitted variables bias formula, which constructs a conditional decomposition and thus accounts for sequence-sensitivity when added covariates are correlated. We use GD to decompose the differences between men and women leading social enterprises in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors’ capital). Database: Our empirical work is made possible by our collection of a unique dataset using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries that we focus on are the United Kingdom, Spain, Romania and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
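
For reference, a minimal sketch of the two-fold Oaxaca-Blinder decomposition described above is shown below on synthetic data, using the male coefficients as the reference structure; the covariates are illustrative assumptions, and Gelbach's conditional decomposition itself is not reproduced here.

# Minimal sketch of a two-fold Oaxaca-Blinder decomposition of a pay gap
# into an "explained" part (differences in characteristics) and an
# "unexplained" part (differences in returns). Synthetic data; the real
# study uses survey data and Gelbach's conditional decomposition.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta):
    X = np.column_stack([np.ones(n),                 # intercept
                         rng.normal(12, 3, n),       # years of education
                         rng.normal(10, 5, n)])      # years of experience
    y = X @ beta + rng.normal(0, 0.2, n)             # log pay
    return X, y

Xm, ym = simulate(500, np.array([1.0, 0.08, 0.02]))  # "male" returns
Xf, yf = simulate(500, np.array([0.9, 0.07, 0.015])) # "female" returns

beta_m = np.linalg.lstsq(Xm, ym, rcond=None)[0]
beta_f = np.linalg.lstsq(Xf, yf, rcond=None)[0]

gap = ym.mean() - yf.mean()
explained = (Xm.mean(axis=0) - Xf.mean(axis=0)) @ beta_m
unexplained = Xf.mean(axis=0) @ (beta_m - beta_f)

print(f"total gap {gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")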

Keywords: Gelbach’s decomposition, gender gap, returns to social entrepreneurship, values and preferences

Procedia PDF Downloads 239
1867 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector

Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud

Abstract:

The rapid economic growth in terms of output and investment necessitates a substantial growth of Total Factor Productivity (TFP) of firms, which is an indicator of an economy’s technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms’ financing decisions do affect their levels of output via their investment decisions. This establishes a linkage between financial factors and the productivity growth of firms. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is not sufficient for investment, firms rely upon external sources of finance. But due to frictions and information asymmetry, it is costlier for firms to raise external capital from the market, which in turn affects their investment and productivity. This kind of financial position puts heavy pressure on firms’ productive activities. Keeping in view this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow and liquidity) in the determination of the total factor productivity of firms in the manufacturing industry and its sub-industries, with a set of firm-specific variables as controls (size, age and disembodied technological intensity). An estimate of the total factor productivity of the Indian manufacturing industry and its sub-industries is computed using a semi-parametric approach, i.e., the Levinsohn-Petrin method. The study establishes the relationship between financial factors and the productivity growth of 652 firms using a dynamic panel GMM method covering the period between 1997-98 and 2012-13. From the econometric analyses, it has been found that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. The other financial factors, leverage and liquidity, also play a significant role in the determination of the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that access to external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings have implications for policy makers: reforms that develop the external bond market would allow financially constrained companies to raise capital in a cost-effective manner and to channel their investments into highly productive activities, which would help accelerate economic growth.
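
As a simplified illustration of what a firm-level TFP estimate represents, the sketch below recovers TFP as the residual of a Cobb-Douglas production function estimated by OLS on synthetic data; the study itself uses the Levinsohn-Petrin semi-parametric estimator, which corrects the simultaneity of input choices and is not reproduced here.

# Minimal sketch: recover a firm-level TFP proxy as the residual of a
# Cobb-Douglas production function estimated by OLS on synthetic data.
# The paper uses the Levinsohn-Petrin semi-parametric estimator (with
# intermediate inputs as a proxy) to correct for input simultaneity;
# plain OLS here only illustrates the accounting identity.
import numpy as np

rng = np.random.default_rng(1)
n = 652                                      # number of firms, as in the study
ln_k = rng.normal(8, 1, n)                   # log capital
ln_l = rng.normal(5, 1, n)                   # log labour
ln_tfp_true = rng.normal(0, 0.3, n)
ln_y = 0.4 * ln_k + 0.6 * ln_l + ln_tfp_true # log output

X = np.column_stack([np.ones(n), ln_k, ln_l])
coef, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
ln_tfp_hat = ln_y - X @ coef                 # TFP proxy = production residual

print("estimated elasticities (const, K, L):", np.round(coef, 3))
print("mean/std of TFP proxy:", round(ln_tfp_hat.mean(), 3), round(ln_tfp_hat.std(), 3))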

Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity

Procedia PDF Downloads 328
1866 Temperature Fields in a Channel Partially-Filled by Porous Material with Internal Heat Generation: On Exact Solution

Authors: Yasser Mahmoudi, Nader Karimi

Abstract:

The present work examines analytically the effect of internal heat generation on temperature fields in a channel partially filled with a porous material under local thermal non-equilibrium conditions. The Darcy-Brinkman model is used to represent the fluid transport through the porous material. Two fundamental models (models A and B) represent the thermal boundary conditions at the interface between the porous medium and the clear region. The governing equations of the problem are manipulated, and for each interface model, exact solutions for the solid and fluid temperature fields are developed. These solutions incorporate the porous material thickness, Biot number, fluid-to-solid thermal conductivity ratio and Darcy number, as well as the non-dimensional heat generation terms in the fluid and solid, as parameters. Results show that, for either of the two models, under zero or negative heat generation (heat sink) and for any Darcy number, an increase in the porous thickness increases the amount of heat flux transferred to the porous region. The obtained results are applicable to the analysis of complex porous media incorporating internal heat generation, such as heat transfer enhancement, tumor ablation in biological tissues and porous radiant burners (PRBs).
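
For orientation, a standard dimensional form of the fully developed Darcy-Brinkman momentum equation and the two-equation (local thermal non-equilibrium) energy model named above is sketched below; the notation is ours, and the paper's exact non-dimensionalisation, heat generation terms and interface conditions (models A and B) may differ.

\mu_{eff}\,\frac{d^{2}u}{dy^{2}} - \frac{\mu}{K}\,u - \frac{dp}{dx} = 0

k_{s,eff}\,\frac{d^{2}T_{s}}{dy^{2}} + h_{sf}a_{sf}\,(T_{f}-T_{s}) + S_{s} = 0

k_{f,eff}\,\frac{d^{2}T_{f}}{dy^{2}} + h_{sf}a_{sf}\,(T_{s}-T_{f}) + S_{f} = \rho c_{p}\,u\,\frac{\partial T_{f}}{\partial x}

Here K is the permeability (Darcy number Da = K/H^2), h_{sf}a_{sf} the interstitial heat exchange (related to the Biot number), and S_s, S_f the volumetric internal heat generation rates in the solid and fluid phases.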

Keywords: porous media, local thermal non-equilibrium, forced convection, heat transfer, exact solution, internal heat generation

Procedia PDF Downloads 458
1865 Computational Modeling of Heat Transfer from a Horizontal Array of Cylinders for Low Reynolds Numbers

Authors: Ovais U. Khan, G. M. Arshed, S. A. Raza, H. Ali

Abstract:

A numerical model based on the computational fluid dynamics (CFD) approach is developed to investigate heat transfer across a longitudinal row of six circular cylinders. The momentum and energy equations are solved using the finite volume discretization technique. The convective terms are discretized using a second-order upwind methodology, whereas diffusion terms are discretized using a central differencing scheme. A second-order implicit scheme is utilized for time integration. Numerical simulations have been carried out for three values of the free stream Reynolds number (ReD = 100, 200 and 300) and two values of the dimensionless longitudinal pitch ratio (SL/D = 1.5 and 2.5) to demonstrate the fluid flow and heat transfer behavior. Numerical results are validated against the analytical findings reported in the literature and have been found to be in good agreement. The maximum percentage error between the average Nusselt numbers obtained from the numerical and analytical solutions is within 10% for free stream Reynolds numbers up to 300. It is demonstrated that the average Nusselt number for the array of cylinders increases with increasing free stream Reynolds number and dimensionless longitudinal pitch ratio. The information generated would be useful in the design of more efficient heat exchangers or other fluid systems involving arrays of cylinders.
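
For reference, the incompressible, constant-property form of the governing equations solved in such simulations, together with the reported average Nusselt number, can be written as below; viscous dissipation and buoyancy are neglected here, and the paper's exact setup may include additional terms.

\nabla \cdot \mathbf{u} = 0

\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u}

\rho c_{p}\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right) = k\,\nabla^{2}T, \qquad \overline{Nu}_{D} = \frac{\bar{h}\,D}{k_{f}}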

Keywords: computational fluid dynamics, array of cylinders, longitudinal pitch ratio, finite volume method, incompressible Navier-Stokes equations

Procedia PDF Downloads 80
1864 Quasiperiodic Magnetic Chains as Spin Filters

Authors: Arunava Chakrabarti

Abstract:

A one-dimensional chain of magnetic atoms, representative of a quantum gas in an artificial quasi-periodic potential and modeled by the well-known Aubry-Andre function and its variants, is studied in respect of its capability of working as a spin filter for arbitrary spins. The basic formulation is explained in terms of a perfectly periodic chain first, where it is shown that a definite correlation between the spin S of the incoming particles and the magnetic moment h of the substrate atoms can open up a gap in the energy spectrum. This is crucial for a spin filtering action. The simple one-dimensional chain is shown to be equivalent to a (2S+1)-strand ladder network. This equivalence is exploited to work out the condition for the opening of gaps. The formulation is then applied to a one-dimensional chain with quasi-periodic variation in the site potentials, the magnetic moments and their orientations following an Aubry-Andre modulation and its variants. In addition, we show that a certain correlation between the system parameters can generate absolutely continuous bands in such systems, populated by Bloch-like extended wave functions only, signaling the possibility of a metal-insulator transition. This is a case of correlated (deterministic) disorder, and the results provide a non-trivial variation on the famous Anderson localization problem. We have worked within a tight binding formalism and present explicit results for spin-1/2, spin-1, spin-3/2 and spin-5/2 particles incident on the magnetic chain to explain our scheme and the central results.
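
As a minimal, spinless illustration of the Aubry-Andre modulation referred to above, the sketch below builds a tight-binding chain with a quasi-periodic on-site potential and diagonalises it; the parameters are assumptions for illustration, and the paper's spin-S chain and its mapping onto a (2S+1)-strand ladder are not reproduced here.

# Minimal sketch (spinless, illustrative): a tight-binding chain with an
# Aubry-Andre quasi-periodic on-site potential, diagonalised to inspect
# the spectrum. The paper's spin-S magnetic chain maps to a (2S+1)-strand
# ladder; that mapping is not reproduced here.
import numpy as np

n_sites = 233                      # a Fibonacci number keeps the modulation quasi-periodic
t_hop = 1.0                        # hopping amplitude
lam = 1.5                          # modulation strength (lam > 2*t_hop localises all states)
alpha = (np.sqrt(5) - 1) / 2       # irrational modulation frequency

onsite = lam * np.cos(2 * np.pi * alpha * np.arange(n_sites))
H = np.diag(onsite) + np.diag(-t_hop * np.ones(n_sites - 1), 1) \
                    + np.diag(-t_hop * np.ones(n_sites - 1), -1)

energies = np.linalg.eigvalsh(H)
print("largest spectral gap:", np.diff(energies).max())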

Keywords: Aubry-Andre model, correlated disorder, localization, spin filter

Procedia PDF Downloads 353
1863 Accidental U.S. Taxpayers Residing Abroad: Choosing between U.S. Citizenship or Keeping Their Local Investment Accounts

Authors: Marco Sewald

Abstract:

Due to the current enforcement of extraterritorial U.S. legislation, up to 9 million U.S. (dual) citizens residing abroad are subject to U.S. double and surcharge taxation and at risk of losing access to otherwise basic financial services and investment opportunities abroad. The United States is the only OECD country that taxes non-resident citizens, lawful permanent residents and other non-resident aliens on their worldwide income, based on local U.S. tax laws. To enforce these policies, the U.S. has implemented ‘saving clauses’ in all tax treaties and several compliance provisions, including the Foreign Account Tax Compliance Act (FATCA), Qualified Intermediaries Agreements (QI) and Intergovernmental Agreements (IGA), which require Foreign Financial Institutions (FFIs) to implement these provisions in foreign jurisdictions. This policy creates systematic cases of double and surcharge taxation. The increased enforcement of compliance rules creates additional reporting burdens for U.S. persons abroad and for FFIs accepting such U.S. persons as customers. FFIs in Europe have reacted with a growing denial of specific financial services to this population, and the number of U.S. citizens renouncing their citizenship has increased dramatically in recent years. A case study is chosen as the methodology and research method, being an empirical inquiry that investigates a contemporary phenomenon within its real-life context, where the boundaries between phenomenon and context are not clearly evident and multiple sources of evidence are used. This evaluative approach tests whether the combination of policies works in practice, whether the policies are in accordance with desirable moral, political and economic aims, or whether they may serve other causes. The research critically evaluates the financial and non-financial consequences and develops appropriate strategies, which are discussed as ways to avoid the undesired consequences of extraterritorial U.S. legislation. Three possible strategies result from the use cases: (1) duck and cover, (2) pay U.S. double/surcharge taxes and tax preparation fees and accept the imposed product limitations, and (3) renounce U.S. citizenship and pay any exit taxes, tax preparation fees and the requested $2,350 renunciation fee. While the first strategy is unlawful and therefore unsuitable, the second strategy is only suitable if the U.S. citizen residing abroad is planning to move to the U.S. in the future. The last strategy is the only reasonable and lawful way provided by the U.S. to limit exposure to U.S. double and surcharge taxation and to the limitations on financial products. The results are believed to add a perspective to the current academic discourse on U.S. citizenship-based taxation, currently dominated by U.S. scholars, while providing practical strategies for the affected population at the same time.

Keywords: citizenship based taxation, FATCA, FBAR, qualified intermediaries agreements, renounce U.S. citizenship

Procedia PDF Downloads 200
1862 The Rational Design of Original Anticancer Agents Using Computational Approach

Authors: Majid Farsadrooh, Mehran Feizi-Dehnayebi

Abstract:

Serum albumin is the most abundant protein in the circulatory system of a wide variety of organisms. As a significant macromolecule, it contributes to osmotic blood pressure and also plays a major role in drug disposition and efficacy. Molecular docking simulation can improve in silico drug design and discovery procedures, helping to propose a lead compound and develop it from the discovery step to the clinic. In this study, molecular docking simulation was applied to select a lead molecule through an investigation of the interaction of two anticancer drugs (Alitretinoin and Abemaciclib) with Human Serum Albumin (HSA). Then, a series of new compounds (a-e) were proposed by modifying the lead molecule. Density functional theory (DFT) calculations, including MEP maps and HOMO-LUMO analysis, were used for the newly proposed compounds to predict the reactive zones on the molecules, their stability, and chemical reactivity. The DFT calculations illustrated that these new compounds are stable. The estimated binding free energy (ΔG) values for compounds a-e were -5.78, -5.81, -5.95, -5.98, and -6.11 kcal/mol, respectively. Finally, the pharmaceutical properties and toxicity of these new compounds were estimated with the OSIRIS DataWarrior software. The results indicated no risk of tumorigenic, mutagenic, irritant, or reproductive effects for compounds d and e. As a result, compounds d and e could be selected for further study as potential therapeutic candidates. Moreover, employing molecular docking simulation together with the prediction of pharmaceutical properties helps to discover new potential drug compounds.
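
As one way of reading the docking scores quoted above, the sketch below ranks compounds a-e by their estimated ΔG and converts each value to a predicted dissociation constant via ΔG = RT ln Kd at 298 K; this conversion is our illustration and is not part of the authors' workflow.

# Minimal sketch: rank the proposed compounds a-e by their estimated
# binding free energies (from the abstract) and convert each dG to a
# predicted dissociation constant via dG = RT * ln(Kd), assuming T = 298 K.
import math

R = 1.987e-3            # kcal/(mol*K)
T = 298.15              # K
dG = {"a": -5.78, "b": -5.81, "c": -5.95, "d": -5.98, "e": -6.11}  # kcal/mol

for name, g in sorted(dG.items(), key=lambda kv: kv[1]):
    kd = math.exp(g / (R * T))          # mol/L; more negative dG -> smaller Kd
    print(f"compound {name}: dG = {g:5.2f} kcal/mol, predicted Kd ~ {kd * 1e6:.0f} uM")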

Keywords: drug design, anticancer, computational studies, DFT analysis

Procedia PDF Downloads 70
1861 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide Can Prevent Millions of Premature Deaths Per Year

Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans

Abstract:

Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter less than 2.5 μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations, and no concentration level has previously been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10 μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US and other countries worldwide and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributable to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3 μg/m3. We estimate global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits due to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25 μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12 μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US and 20% in the EU. Implementing the WHO AQG of 10 μg/m3 would reduce global premature mortality by 54%, 76% in China and 59% in India; in the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes at the lower PM2.5 standards can have major impacts on global mortality rates.
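
For orientation, the sketch below shows the attributable-fraction arithmetic that underlies such mortality estimates; the log-linear relative-risk function, baseline mortality rate and population are illustrative assumptions, not the GBD 2010 integrated exposure-response functions and gridded model output used in the study.

# Minimal sketch of the attributable-mortality arithmetic: a relative risk
# RR(c) above a counterfactual threshold gives an attributable fraction
# AF = 1 - 1/RR, applied to baseline deaths. The RR slope, baseline rate
# and population below are illustrative assumptions.
import numpy as np

def relative_risk(pm25, threshold=7.3, beta=0.006):
    """Toy log-linear RR above the 'safe' threshold concentration."""
    return np.exp(beta * np.maximum(pm25 - threshold, 0.0))

def attributable_deaths(pm25, population, baseline_rate):
    rr = relative_risk(pm25)
    af = 1.0 - 1.0 / rr                      # population attributable fraction
    return af * baseline_rate * population

# Example: one region before and after meeting the WHO guideline of 10 ug/m3
pop = 50e6
base_rate = 0.007                            # cardiopulmonary deaths per person-year (assumed)
for c in (35.0, 10.0):
    d = attributable_deaths(c, pop, base_rate)
    print(f"PM2.5 = {c:4.1f} ug/m3 -> ~{d:,.0f} attributable deaths/yr")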

Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality

Procedia PDF Downloads 308
1860 A Study of Anoxic-Oxic Microbiological Technology for Treatment of Heavy Oily Refinery Wastewater

Authors: Di Wang, Li Fang, Shengyu Fang, Jianhua Li, Honghong Dong, Zhongzhi Zhang

Abstract:

Heavy oily refinery wastewater, characterized by high concentrations of toxic organic pollutants, poor biodegradability and complex dissolved recalcitrant compounds, is intractable to degrade. In order to reduce the concentrations of COD and total nitrogen pollutants, which are the major pollutants in heavy oily refinery wastewater, the anoxic-oxic microbiological technology relies mainly on an anaerobic microbial reactor, which works chiefly with methanogenic archaea that convert organic pollutants to methane gas, supplemented by aerobic treatment. The results of continuous operation for 2 months with a hydraulic retention time (HRT) of 60 h showed that the COD concentrations in the influent to the anaerobic reactor and the effluent from the aerobic reactor were 547.8 mg/L and 93.85 mg/L, respectively. The total removal rate of COD was up to 84.9%. Total nitrogen decreased from 46.71 mg/L in the influent to the anaerobic reactor to 14.11 mg/L in the effluent from the aerobic reactor, and the average removal rate of total nitrogen pollutants reached as high as 69.8%. Based on these data, anoxic-oxic microbial technology shows great potential for treating heavy oil refinery sewage with energy savings and high biodegradation efficiency.
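
For reference, the removal efficiencies quoted above follow from the influent and effluent concentrations as (C_in - C_out)/C_in; a minimal sketch using the reported total nitrogen values, which reproduces the ~69.8% figure, is given below.

# Minimal sketch of the removal-efficiency calculation behind the figures
# above, applied to the reported total nitrogen concentrations
# (46.71 mg/L influent, 14.11 mg/L effluent).
def removal_efficiency(c_in: float, c_out: float) -> float:
    return 100.0 * (c_in - c_out) / c_in

print(f"total nitrogen removal: {removal_efficiency(46.71, 14.11):.1f} %")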

Keywords: anoxic-oxic microbiological technology, COD, heavy oily refinery wastewater, total nitrogen pollutant

Procedia PDF Downloads 486
1859 Electrochemical Sensor Based on Poly(Pyrogallol) for the Simultaneous Detection of Phenolic Compounds and Nitrite in Wastewater

Authors: Majid Farsadrooh, Najmeh Sabbaghi, Seyed Mohammad Mostashari, Abolhasan Moradi

Abstract:

Phenolic compounds are major environmental contaminants on account of their hazardous and toxic effects on human health. The preparation of sensitive and potent chemosensors to monitor emerging pollutants in water and effluent samples has received great attention. A novel and versatile nanocomposite sensor based on poly(pyrogallol) is presented for the first time in this study, and its electrochemical behavior for the simultaneous detection of hydroquinone (HQ), catechol (CT), and resorcinol (RS) in the presence of nitrite is evaluated. The physicochemical characteristics of the fabricated nanocomposite were investigated by field emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), and Brunauer-Emmett-Teller (BET) analysis. The electrochemical response of the proposed sensor for the detection of HQ, CT, RS, and nitrite was studied using cyclic voltammetry (CV), chronoamperometry (CA), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). The kinetic characterization of the prepared sensor showed that both adsorption and diffusion processes can control reactions at the electrode. Under optimized conditions, the new chemosensor provides wide linear ranges of 0.5-236.3, 0.8-236.3, 0.9-236.3, and 1.2-236.3 μM with low limits of detection of 21.1, 51.4, 98.9, and 110.8 nM (S/N = 3) for HQ, CT, RS, and nitrite, respectively. Remarkably, the electrochemical sensor has outstanding selectivity, repeatability, and stability and was successfully employed for the detection of RS, CT, HQ, and nitrite in real water samples with recoveries of 96.2%-102.4%, 97.8%-102.6%, 98.0%-102.4% and 98.4%-103.2% for RS, CT, HQ, and nitrite, respectively. These outcomes illustrate that poly(pyrogallol) is a promising candidate for the effective electrochemical detection of dihydroxybenzene isomers in the presence of nitrite.
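
As background to the "(S/N = 3)" criterion quoted above, the sketch below estimates a limit of detection from a linear calibration as LOD = 3·σ_blank/slope; the calibration points and blank noise are illustrative assumptions, not the DPV data behind the reported nanomolar LODs.

# Minimal sketch of the "S/N = 3" limit-of-detection estimate: fit a linear
# calibration curve and take LOD = 3 * (sd of blank signal) / slope.
# The calibration points and blank noise below are illustrative.
import numpy as np

conc = np.array([1, 5, 10, 50, 100, 200])                 # uM
current = np.array([0.12, 0.55, 1.08, 5.3, 10.4, 20.9])   # uA
blank_noise_sd = 0.015                                    # uA, sd of repeated blank runs (assumed)

slope, intercept = np.polyfit(conc, current, 1)           # uA per uM
lod_um = 3 * blank_noise_sd / slope

print(f"sensitivity = {slope:.4f} uA/uM, LOD = {lod_um * 1000:.1f} nM")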

Keywords: electrochemical sensor, poly pyrogallol, phenolic compounds, simultaneous determination

Procedia PDF Downloads 64
1858 Controlling Deforestation in the Densely Populated Region of Central Java Province, Banjarnegara District, Indonesia

Authors: Guntur Bagus Pamungkas

Abstract:

As a tropical country normally rich in forest land, Indonesia has long been in the world's spotlight because of its rapidly increasing deforestation. On the one hand, its forests are a mainstay for maintaining the sustainability of the earth's ecosystem functions; on the other, they cover various potential resources for the global economy and are therefore a constant target of excessive exploitation by investors of different scales. Not surprisingly, disasters of various kinds keep emerging. Deforestation occurs not only in the forest areas of Indonesia's main islands but also on Java, one of the most densely populated areas in the world. Because of its long history of land conversion, Java retains only about 9.8% of Indonesia's total forest land, especially in Central Java Province, the most densely populated area on the island. Unsurprisingly, this province also has the highest frequency of related disasters, landslides in particular. One of the areas that frequently experiences them is Banjarnegara District, especially its mountainous parts lying between 1000 and 3000 meters above sea level, where remnants of forest land can still be found. Among them is a largely untouched tropical rain forest whose area also covers part of the neighboring Pekalongan District and is considered one of the world's remaining little paradises. The district's landscape is indeed beautiful, especially in the Dieng area, a major tourist destination in Central Java Province after Borobudur Temple. However, landslide hazards threaten this district every year; a few decades ago, a tragic event buried a settlement together with its inhabitants. This research aims to contribute to effective forest management by monitoring the remaining forest areas in this district. Deforestation rates were monitored using the Stochastic Cellular Automata-Markov Chain (SCA-MC) method, which provides a spatial simulation of land use and land cover change (LULCC). The geospatial analysis uses Landsat-8 OLI imagery with Thermal Infra-Red Sensor (TIRS) Band 10 for 2020 and Landsat 5 TM with TIRS Band 6 for 2010. It is then integrated with physical and social geography data using QGIS 2.18.11 with the MOLUSCE plugin, which delineates and calculates land use and land cover areas, especially forest areas, and estimates the rate of forest area reduction in Banjarnegara District between 2010 and 2020. Since the area's dependence on forest land use is quite high, preventive concepts and actions are needed, such as rehabilitation and reforestation of critical land supported by proper monitoring and targeted forest management, to restore the ecosystem in the future.
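
For orientation, the sketch below illustrates the Markov-chain half of an SCA-MC analysis: a transition-probability matrix estimated between two dates is applied to current land-cover areas to project the next period (the cellular-automata step then allocates those quantities spatially). The class names, areas and probabilities are illustrative assumptions, not the Landsat-derived values for Banjarnegara District.

# Minimal sketch of the Markov-chain part of an SCA-MC land-change analysis:
# a transition-probability matrix derived from two dates is applied to the
# current land-cover areas to project the next period. All values are
# illustrative assumptions.
import numpy as np

classes = ["forest", "agriculture", "built-up"]
area_2020 = np.array([42_000.0, 55_000.0, 13_000.0])   # hectares (assumed)

# Row i, column j: probability that a cell of class i at the first date
# belongs to class j at the second date.
P = np.array([[0.90, 0.08, 0.02],
              [0.03, 0.92, 0.05],
              [0.00, 0.01, 0.99]])

area_2030 = area_2020 @ P      # Markov projection one step ahead
for name, a20, a30 in zip(classes, area_2020, area_2030):
    change = 100 * (a30 - a20) / a20
    print(f"{name:12s} {a20:9.0f} ha -> {a30:9.0f} ha ({change:+.1f}%)")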

Keywords: deforestation, populous area, LULCC method, proper control and effective forest management

Procedia PDF Downloads 130
1857 Helicopter Exhaust Gases Cooler in Terms of Computational Fluid Dynamics (CFD) Analysis

Authors: Mateusz Paszko, Ksenia Siadkowska

Abstract:

Due to their low-altitude and relatively low-speed flight, helicopters are easy targets for modern combat assets, e.g. infrared-guided missiles. Current techniques aim to increase the combat effectiveness of military helicopters. Protection of the helicopter in flight from early detection, tracking and, finally, destruction can be realized in many ways. One of them is cooling the hot exhaust gases emitted from the engines to the atmosphere in special heat exchangers. Nowadays, this process is realized in ejective coolers, where strong heat and momentum exchange takes place between the hot exhaust gases and cold air drawn from the atmosphere. The flow of air, of the exhaust gases and of their mixture, and the heat transfer between the cold air and hot exhaust gases, are described by differential equations: flow continuity (mass transport), ejection of cold air by the expanding exhaust gases, conservation of momentum and energy, and the physical relationship equations. Calculating these processes in an ejective cooler by means of classical mathematical analysis is extremely hard or even impossible; because of this, it is necessary to apply a numerical approach using modern computer programs. The paper discusses the general usability of Computational Fluid Dynamics (CFD) in the process of designing an ejective exhaust gas cooler cooperating with a helicopter turbine engine. In this work, the CFD calculations have been performed for an ejective-based cooler cooperating with the PA W3 helicopter's engines.

Keywords: aviation, CFD analysis, ejective-cooler, helicopter techniques

Procedia PDF Downloads 326
1856 Reflections of a Nocturnal Librarian: Attaining a Work-Life Balance in a Mega-City of Lagos State, Nigeria

Authors: Oluwole Durodolu

Abstract:

The rationale for this study is to explore the adaptive strategies that librarians adopt in performing night shifts in a mega-city like Lagos State. Maslach Burnout Theory would be used to measure the three dimensions of burnout, namely emotional exhaustion, depersonalisation, and personal accomplishment, in order to scrutinise job-related burnout syndrome associated with long-standing, unresolved stress. A qualitative methodology guided by a phenomenological research paradigm, an approach that focuses on the commonality of real-life experience in a particular group, would be used, with focus group discussion adopted as the method of data collection from library staff who are involved in night shifts. Participants for the focus group discussion would be selected using a convenience sampling technique, in which staff of the cataloguing unit would be included in the sample because of the representative characteristics of the unit. This would be done to enable readers to understand the phenomenon as it is lived rather than from a remote perspective. The exploratory focus group interviews will shed light on issues relating to security, housing, transportation, budgeting, energy supply, employee duties, time management, information access, and sustaining professional levels of service, and on how all these variables affect the productivity and work-life balance of the 149 library staff.

Keywords: nightshift, work-life balance, mega-city, academic library, Maslach Burnout Theory, Lagos State, University of Lagos

Procedia PDF Downloads 128
1855 Mitigation of Interference in Satellite Communications Systems via a Cross-Layer Coding Technique

Authors: Mario A. Blanco, Nicholas Burkhardt

Abstract:

An important problem in satellite communication systems operating in the Ka and EHF frequency bands is the overall degradation in link performance of mobile terminals due to various types of impairments in the link/channel, such as fading, blockage of the link to the satellite (especially in urban environments), and intentional as well as other types of interference. In this paper, we focus primarily on the interference problem, and we develop a very efficient and cost-effective solution based on the use of fountain codes. We first introduce a satellite communications (SATCOM) terminal uplink interference channel model that is classically used against communication systems that use spread-spectrum waveforms. We then consider the use of fountain codes, with focus on Raptor codes, as our main mitigation technique to combat the degradation in link/receiver performance due to the interference signal. The performance of the receiver is obtained in terms of the average probability of bit and message error as a function of the bit energy-to-noise density ratio, Eb/N0, and other parameters of interest, via a combination of analysis and computer simulations, and we show that the use of fountain codes is extremely effective in overcoming the effects of intentional interference on the performance of the receiver and the associated communication links. We then show that this technique can be extended to mitigate other types of SATCOM channel degradations, such as those caused by channel fading, shadowing, and hard blockage of the uplink signal.
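
As background to the fountain-code approach discussed above, the sketch below implements a toy LT-style encoder in which every output symbol is the XOR of a randomly chosen subset of source packets, so the receiver only needs enough symbols, in any order, to decode; the uniform degree choice and packet sizes are simplifying assumptions, and the Raptor codes analysed in the paper add a pre-code and an optimised degree distribution.

# Toy LT-style fountain encoder: every encoded symbol is the XOR of a
# random subset of source packets (its "degree"). Rateless: the sender can
# generate as many symbols as the channel/interference requires.
import os
import random

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lt_encode(source_packets, n_symbols, seed=0):
    rng = random.Random(seed)
    k = len(source_packets)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.randint(1, k)                   # toy uniform degree distribution
        idx = rng.sample(range(k), degree)
        payload = source_packets[idx[0]]
        for i in idx[1:]:
            payload = xor_bytes(payload, source_packets[i])
        symbols.append((idx, payload))               # receiver needs idx (or the seed)
    return symbols

packets = [os.urandom(32) for _ in range(10)]        # 10 source packets of 32 bytes
encoded = lt_encode(packets, n_symbols=15)
print(len(encoded), "encoded symbols, first symbol degree:", len(encoded[0][0]))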

Keywords: SATCOM, interference mitigation, fountain codes, turbo codes, cross-layer

Procedia PDF Downloads 356
1854 Risk Analysis in Off-Site Construction Manufacturing in Small to Medium-Sized Projects

Authors: Atousa Khodadadyan, Ali Rostami

Abstract:

The objective of off-site construction manufacturing is to utilise the workforce and machinery in a controlled environment, without external interference, for higher productivity and quality. The usage of prefabricated components can save up to 14% of the total energy consumption in comparison with the equivalent number of cast-in-place ones. Despite the benefits of prefabricated construction, its current project practices encompass technical and managerial issues. Building design, precast component production, logistics, and prefabrication installation processes are still mostly disconnected and fragmented. Furthermore, collaboration among prefabrication manufacturers, transportation parties, and on-site assemblers relies on real-time information such as the status of precast components, delivery progress, and the location of components. From the technical point of view, geometric variability is still prevalent in this industry and can arise during the transportation or production of components. These issues indicate that there are still many aspects of prefabricated construction that can be developed using disruptive technologies. Practical real-time risk analysis can be used to address these issues as well as the management of safety, quality, and construction environment issues. On the other hand, the lack of research on risk assessment and the absence of standards and tools hinder risk management modelling in prefabricated construction. It is essential to note that no risk management standard has been established explicitly for prefabricated construction projects, and most software packages do not provide tailor-made functions for this type of project.

Keywords: project risk management, risk analysis, risk modelling, prefabricated construction projects

Procedia PDF Downloads 168
1853 Synthesis and Characterization of Carboxymethyl Cellulose-Chitosan Based Composite Hydrogels for Biomedical and Non-Biomedical Applications

Authors: K. Uyanga, W. Daoud

Abstract:

Hydrogels have attracted much academic and industrial attention due to their unique properties and potential biomedical and non-biomedical applications. Limitations on extending their applications have resulted from the synthesis of hydrogels using toxic materials and complex, irreproducible processing techniques. In order to promote environmental sustainability, hydrogel efficiency, and wider application, this study focused on the synthesis of composite hydrogel matrices from an edible, non-toxic crosslinker, citric acid (CA), using a simple low-energy processing method based on the natural polymers carboxymethyl cellulose (CMC) and chitosan (CSN). Composite hydrogels were developed by chemical crosslinking. The results demonstrated that CMC:2CSN:CA exhibited good performance properties and super-absorbency of 21× its original weight. This makes it promising for biomedical applications such as chronic wound healing and regeneration, next-generation skin substitutes, in situ bone regeneration and cell delivery. On the other hand, CMC:CSN:CA exhibited a durable, well-structured internal network with minimal swelling degree and water absorbency, excellent gel fraction, and infrared reflectance. These properties make it a suitable composite hydrogel matrix for warming effects and the controlled, efficient release of loaded materials. The CMC:2CSN:CA and CMC:CSN:CA composite hydrogels developed also exhibited excellent chemical, morphological, and thermal properties.

Keywords: citric acid, fumaric acid, tartaric acid, zinc nitrate hexahydrate

Procedia PDF Downloads 144
1852 The Effect of Crack Size, Orientation and Number on the Elastic Modulus of a Cracked Body

Authors: Mark T. Hanson, Alan T. Varughese

Abstract:

Osteoporosis is a disease affecting bone quality which in turn can increase the risk of low-energy fractures. Treatment of osteoporosis using Bisphosphonates has the beneficial effect of increasing bone mass while at the same time has been linked to the formation of atypical femoral fractures. This has led to the increased study of micro-fractures in bones of patients using Bisphosphonate treatment. One of the mechanics-related issues which has been identified in this regard is the loss in stiffness of bones containing one or many micro-fractures. Different theories have been put forth using fracture mechanics to determine the effect of crack presence on elastic properties such as modulus. However, validation of these results in a deterministic way has not been forthcoming. The present analysis seeks to provide this deterministic evaluation of a fracture's effect on the elastic modulus. In particular, the effect of crack size, crack orientation, and crack number on elastic modulus is investigated. The Finite Element method is used to explicitly determine the elastic modulus reduction caused by the presence of cracks in a representative volume element. Single cracks of various lengths and orientations are examined as well as cases of multiple cracks. Cracks in tension as well as under shear stress are considered. Although the focus is predominantly two-dimensional, some three-dimensional results are also presented. The results obtained show the explicit reduction in modulus caused by the parameters of crack size, orientation and number noted above. The present results allow the interpretation of the various theories which currently exist in the literature.
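For orientation, the scale of stiffness loss that such finite element calculations quantify is often compared against the classical dilute (non-interacting) estimate for a two-dimensional plate in plane stress with cracks aligned normal to the applied tension: E/E0 = 1/(1 + 2*pi*rho), where rho = sum(a_i^2)/A is the crack density. The sketch below implements only this textbook estimate, not the paper's finite element model, and the crack sizes are illustrative.

```python
import math

def modulus_ratio_aligned_cracks(crack_half_lengths, area, e0=1.0):
    """Dilute (non-interacting) estimate, plane stress, cracks normal to uniaxial tension:
    E/E0 = 1 / (1 + 2*pi*rho), with crack density rho = sum(a_i**2) / A."""
    rho = sum(a * a for a in crack_half_lengths) / area
    return e0 / (1.0 + 2.0 * math.pi * rho)

# Illustrative: five cracks of half-length 0.05 in a unit-area representative volume element
print(modulus_ratio_aligned_cracks([0.05] * 5, area=1.0))  # ~0.93, i.e. about 7% stiffness loss
```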

Keywords: cracks, elastic, fracture, modulus

Procedia PDF Downloads 107
1851 Photovoltaic Performance of AgInSe2-Conjugated Polymer Hybrid Systems

Authors: Dinesh Pathak, Tomas Wagner, J. M. Nunzi

Abstract:

We investigated MdPVV.PCBM.AIS blends for photovoltaic application. AgInSe2 (AIS) powder was synthesized by sealing and heating the stoichiometric constituents in an evacuated quartz tube ampoule. Finely ground AIS powder was dispersed in MD-MOPVV and PCBM with and without surfactant. Different concentrations of these particles were suspended in the polymer solutions and spin-cast onto ITO glass. Morphological studies were performed by atomic force microscopy and optical microscopy. The blend layers were also investigated by XRD, UV-VIS optical spectroscopy, AFM, and PL after a series of optimizations of polymers, concentration, deposition, suspension, surfactants, etc. XRD investigation of the blend layers shows clear evidence of AIS dispersion in the polymers, which is also confirmed by the diode behaviour and cell parameters. A bulk heterojunction hybrid photovoltaic device, Ag/MoO3/MdPVV.PCBM.AIS/ZnO/ITO, was fabricated and tested with a standard solar simulator and device characterization system. The best photovoltaic performance obtained was an open-circuit voltage (Voc) of about 0.54 V, a photocurrent (Isc) of 117 microamperes, and an efficiency of 0.2 percent under a white light illumination intensity of 23 mW/cm2. Our results are encouraging for further research on fourth-generation inorganic-organic hybrid bulk heterojunction photovoltaics for energy. Further optimization of spinning rate, thickness, solvents, deposition rates of the active layers, etc. needs to be explored to improve the photovoltaic response of these bulk heterojunction devices.
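The reported efficiency follows from the usual relation eta = (Voc x Isc x FF) / (Pin x A). The fill factor and active device area are not given in the abstract, so the values used below are assumptions chosen only to illustrate how a figure of about 0.2 percent is computed.

```python
def power_conversion_efficiency(voc_v, isc_a, fill_factor, area_cm2, irradiance_mw_cm2):
    """eta (%) = (Voc * Isc * FF) / (Pin * A), with Pin in mW/cm2 and A in cm2."""
    p_out_mw = voc_v * isc_a * 1e3 * fill_factor   # electrical output, mW
    p_in_mw = irradiance_mw_cm2 * area_cm2         # incident optical power, mW
    return 100.0 * p_out_mw / p_in_mw

# Reported: Voc = 0.54 V, Isc = 117 uA, 23 mW/cm2 illumination.
# Fill factor and active area are NOT reported and are assumed here for illustration.
print(power_conversion_efficiency(0.54, 117e-6, fill_factor=0.30, area_cm2=0.4,
                                  irradiance_mw_cm2=23.0))   # ~0.2 %
```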

Keywords: thin films, photovoltaic, hybrid systems, heterojunction

Procedia PDF Downloads 270
1850 The Practise of Hand Drawing as a Premier Form of Representation in Architectural Design Teaching: The Case of FAUP

Authors: Rafael Santos, Clara Pimenta Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

In recent decades, the relevance of hand drawing has decreased in the scope of architectural education. However, some schools continue to recognize its decisive role, not only in architectural design teaching but in the whole of architectural training. This paper presents the results of research on the following problem: the practise of hand drawing as a premier form of representation in architectural design teaching. The research took as its object the educational model of the Faculty of Architecture of the University of Porto (FAUP) and was guided by three main objectives: to identify the circumstances that promoted hand drawing as a form of representation in FAUP's model; to characterize the types of hand drawing and their role in that model; and to determine the particularities of hand drawing as a premier form of representation in architectural design teaching. Methodologically, the research was conducted according to a qualitative embedded single-case study design. The object, i.e., the educational model, was approached in the FAUP case considering its Context and three embedded units of analysis: the educational Purposes, Principles and Practices. In order to guide the procedures of data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC related the three embedded units of analysis to the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is Assumed; the architectural design classes, expressing how the model is Achieved; and the students, expressing how the model is Acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal that the educational model of FAUP, following the model of the former Porto School, was largely built on the methodological foundations created by the hand drawing teaching-learning processes. In the absence of a culture of explicit theoretical elaboration or systematic research, hand drawing was the support for the continuity of the school, an expression of a unified way of thinking about what the reflection and practice of architecture should be. As a form of representation, hand drawing plays a transversal role in the entire educational model, since its purposes are not limited to the conception of architectural design; it is also a means of perception, analysis and synthesis. Regarding architectural design teaching, there seems to be an understanding of three complementary dimensions of didactics: the instrumental, methodological and propositional dimensions. At FAUP, hand drawing is recognized as the common denominator among these dimensions, according to the idea of the "globality of drawing". It is expected that the knowledge base developed in this research may make three main contributions: to contribute to the maintenance and valorisation of FAUP's model; through a precise description of the methodological procedures, to contribute, by transferability, to similar studies; and, through a critical and objective framing of the problem underlying hand drawing in architectural design teaching, to contribute to the broader discussion concerning contemporary challenges in architectural education.

Keywords: architectural design teaching, architectural education, forms of representation, hand drawing

Procedia PDF Downloads 127
1849 Algorithms for Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. K. Singh, A. E. Benyamina, P. Boulet

Abstract:

Mapping the parallelized tasks of applications onto NoC-based MPSoCs can be done either at design time (static) or at run-time (dynamic). Static mapping strategies find the best placement of tasks at design time and hence are not suitable for dynamic workloads and seem incapable of run-time resource management. The number of tasks or applications executing on an MPSoC platform can exceed the available resources, requiring efficient run-time mapping strategies to meet these constraints. This paper describes a new Spiral Dynamic Task Mapping heuristic for mapping applications onto NoC-based heterogeneous MPSoCs. The heuristic is based on a packing strategy and a routing algorithm, both also proposed in this paper. The heuristic tries to map the tasks of an application into a clustered region to reduce the communication overhead between communicating tasks. It attempts to map the tasks of an application that are most related to each other in a spiral manner and to find the best possible path that minimizes the communication overhead. In this context, we have built a simulation environment for experimental evaluations, mapping applications with varying numbers of tasks onto an 8x8 NoC-based heterogeneous MPSoC platform. We demonstrate that the new mapping heuristic, together with the proposed modified Dijkstra routing algorithm, is capable of reducing the total execution time and energy consumption of applications when compared to state-of-the-art run-time mapping heuristics reported in the literature.
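To illustrate the clustering idea behind a spiral placement (not the authors' exact heuristic or their modified Dijkstra routing), the sketch below orders the nodes of a mesh by hop distance from the first-placed task and assigns the remaining tasks, sorted by decreasing communication volume, to the nearest nodes; the task names, mesh size, and start node are assumptions used only for illustration.

```python
def spiral_map(tasks_by_comm, mesh_w=8, mesh_h=8, start=(3, 3)):
    """Place tasks on a mesh_w x mesh_h NoC so that heavily communicating tasks
    cluster around the first-placed task.  tasks_by_comm is a list of task ids
    sorted by decreasing communication volume with the already-mapped tasks."""
    sx, sy = start
    # Nodes ordered by Manhattan hop distance from the start node, i.e. an outward "spiral".
    nodes = sorted(((x, y) for x in range(mesh_w) for y in range(mesh_h)),
                   key=lambda p: (abs(p[0] - sx) + abs(p[1] - sy), p))
    return dict(zip(tasks_by_comm, nodes))

mapping = spiral_map(["t0", "t1", "t2", "t3", "t4"])
print(mapping)  # t0 lands on the start node; the others on its nearest neighbours
```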

Keywords: multiprocessor system on chip, MPSoC, network on chip, NoC, heterogeneous architectures, run-time mapping heuristics, routing algorithm

Procedia PDF Downloads 485
1848 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from ZIP codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits with a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group/community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding which issues are more relevant. Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research explores the media-public relationship through a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience, given changes in market share due to the spread of digital and social media.

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 54
1847 Determination of Direct Solar Radiation Using Atmospheric Physics Models

Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote

Abstract:

This work set out to determine direct solar radiation precisely using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications, including atmospheric research. Models and techniques for calculating regional direct solar radiation are essential where instrumental measurements are unavailable. The investigation was mathematically governed by six astronomical parameters, i.e. declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e. air mass (mr) and dew point temperature, at Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models of solar radiation determination under the clear-sky assumption were analysed, accompanied by three statistical tests, Mean Bias Difference (MBD), Root Mean Square Difference (RMSD) and coefficient of determination (R2), in order to validate the accuracy of the obtained results. The calculated direct solar radiation was in the range of 491-505 W/m2 with a relative percentage error of 8.41% for winter and 532-540 W/m2 with a relative percentage error of 4.89% for summer 2014. Additionally, datasets of seven consecutive days representing both seasons were considered, with MBD, RMSD and R2 of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, belonging to the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations could advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
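The astronomical quantities listed above follow standard textbook relations (e.g., Cooper's equation for declination and the cosine law for the solar zenith angle). The sketch below evaluates them for the Bangna station latitude; it is a generic illustration of those relations, not a reproduction of the Kumar or CSR model equations, and the day and hour chosen are arbitrary.

```python
import math

GSC = 1367.0  # solar constant, W/m2

def declination_deg(n):
    """Cooper's equation; n is the day of the year (1-365)."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def eccentricity_factor(n):
    """Eccentricity correction E0 of the Earth-Sun distance."""
    return 1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))

def cos_zenith(lat_deg, n, solar_time_h):
    """cos(theta_z) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(omega)."""
    delta = math.radians(declination_deg(n))
    phi = math.radians(lat_deg)
    omega = math.radians(15.0 * (solar_time_h - 12.0))  # hour angle
    return math.sin(phi) * math.sin(delta) + math.cos(phi) * math.cos(delta) * math.cos(omega)

# Bangna station (13.67 N), day 15 (mid-January), solar noon -- illustrative inputs
cz = cos_zenith(13.67, 15, 12.0)
extraterrestrial_horizontal = GSC * eccentricity_factor(15) * cz   # W/m2
air_mass = 1.0 / cz                                                # simple secant approximation
print(round(extraterrestrial_horizontal, 1), round(air_mass, 2))
```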

Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition

Procedia PDF Downloads 403
1846 The Effect of Power of Isolation Transformer on the Lamps in Airfield Ground Lighting Systems

Authors: Hossein Edrisi

Abstract:

This study examines the impact of the power rating of the isolation transformer on the lamps in airfield ground lighting (AGL) systems. A test was conducted at Persian Gulf International Airport, situated in the south of Iran; it is one of the most modern airports in the country and is equipped with cutting-edge devices. Iran uses materials and auxiliary equipment made by the ADB Company of Belgium. AGL systems are responsible for providing visual guidance to aircraft and helicopters on the runways. In an AGL system, a large number of lamps are connected in series circuits, and each ring has its own constant current regulator (CCR), which supplies energy to the lamps. Control of the lamps is crucial for the maintenance and operation of AGL systems. Programmable logic controllers (PLCs), a cutting-edge technology, allow the system elements to be connected to the substations and the ATC tower. For this purpose, a test was performed under real airport conditions for all the elements used at the airport, such as isolation transformers of different power ratings, recording the power consumption and brightness of the lamps. The data were analysed with a lux meter and a multimeter. The results showed that increasing the transformer power rating caused a significant increase in brightness. According to Ohm's law and voltage division, the voltage across a lamp cannot be changed without changing the characteristics of the bulb itself; instead, the rating of the transformer connected to the lamp must be changed. When the voltage is increased, the current through the bulb increases as well, following Ohm's law, I = V/R: if V increases, so does I. The output voltage of the constant current regulator is divided between the lamps and the transformers.
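The voltage-current relation invoked above can be made concrete with a small numerical sketch. The lamp resistance and secondary voltages below are illustrative assumptions, not values measured in the airport test; the point is only that a higher-rated transformer raises the lamp voltage and, by I = V/R, the current and power (and hence brightness).

```python
def lamp_current_and_power(secondary_voltage_v, lamp_resistance_ohm):
    """Ohm's law for a lamp on an isolation transformer secondary:
    I = V / R and P = V**2 / R (brightness rises with P)."""
    current_a = secondary_voltage_v / lamp_resistance_ohm
    power_w = secondary_voltage_v ** 2 / lamp_resistance_ohm
    return current_a, power_w

# Illustrative values only -- not measurements from the airport test.
for v in (6.0, 6.6, 7.2):  # higher transformer rating -> higher secondary voltage
    i, p = lamp_current_and_power(v, lamp_resistance_ohm=0.9)
    print(f"V = {v:.1f} V   I = {i:.2f} A   P = {p:.1f} W")
```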

Keywords: AGL, CCR, lamps, transformer, Ohm’s law

Procedia PDF Downloads 243
1845 Determining Factors of Suspended Glass Systems with Pre-Stress Cable Truss

Authors: Cemil Atakara, Hüseyin Eryaman

Abstract:

The use of glass as a building envelope increased throughout the twentieth century. In pursuit of more transparency and dematerialization, new glass facade types have emerged in the past two decades that depend on the point-fixed glazing system (PFGS). The aim of this study is to analyze the PFGS systems used on glass curtain walls according to their types, degrees, and architectural and structural effects. This system is desirable because it enhances the transparency of the facade and minimizes the frame or profile components. PFGS has led to new structural elements that use cables, rods, and trusses in the design of glass building facades; this structural element, called the suspended glass system with pre-stressed cable truss (SGSPCT), was used for the first time in 1980 in the Serres building. Twenty glass buildings designed with different systems were analyzed during this study. After these analyses, five selected SGSPCT buildings were examined in depth, and one skeletal frame building selected from Lefkosa was redesigned according to the analysis results. The selected buildings include various cable-truss system typologies and degrees. The methodology of this study combines a building analysis method and a literature survey, drawing on books, articles, magazines, drawings, internet sources, and the applied connection details of the glass buildings. The five selected glass buildings and the case building were analyzed in detail using their architectural drawings, photographs, and details. A gridshell structure can be compared with a shell structure; it consists of discrete members connecting nodal points. As these nodal points lie on the surface of an imaginary shell, the two forms function almost identically. The difference between shell and gridshell structures lies in the fact that, due to their free form and the resulting bending forces, gridshells are required to resist loading through their cross-section. The research is divided into parts: a general study of glass buildings, cable-glass systems and gridshell systems is presented in the first chapters; structural analyses and detailed analyses with schematic drawings of the plans and sections of the selected buildings are explained in the second part; and the third part consists of the advantages and disadvantages of the use of SGSPCT and gridshells in architecture. The study consists of four chapters, including the introduction. General information on SGSPCT and glazing systems is given in the first chapter. Structural features, typologies, the transparency principle, and analytical information on the systems of the selected buildings are explained in the second chapter. The detailed analyses of the case building, based on its schematic drawings, plans and sections, are presented in the third chapter. SGSPCT is then discussed with respect to the case building and the selected buildings, and SGSPCT systems are compared with other systems in terms of their advantages and disadvantages. The advantages of cable-truss systems and SGSPCT for the use of glass envelopes are concluded in the last chapter.

Keywords: cable truss, glass, grid shell, transparency

Procedia PDF Downloads 408