774 Queer Anti-Urbanism: An Exploration of Queer Space Through Design
Authors: William Creighton, Jan Smitheram
Abstract:
Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring’s work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and a desire to destroy the city towards a mode of queer critique that counters the normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame, where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a “design as research” methodology, the design outputs become a vehicle to ask how we might live, otherwise, in architectural space. A design-as-research methodology – a non-linear, iterative process of questioning, designing and reflecting – establishes itself here through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the project began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not intended to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale.
Through this approach, the first result was an installation that brings two people together to co-author space; the installation distorts the sensory experience and forces a more intimate and interconnected experience, challenging our socialized proxemics: knees might touch. To queer the home, the installation was used as a drawing device – a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and the existing house – and the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through “private” and “public” to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place – but is it queer enough?
Keywords: queer, queer anti-urbanism, design as research, design
Procedia PDF Downloads 176
773 Development Project, Land Acquisition and Rehabilitation: A Study of Navi Mumbai International Airport Project, India
Authors: Rahul Rajak, Archana Kumari Roy
Abstract:
Purpose: Development brings about structural change in society. It is essential for the socio-economic progress of society, but it also causes pain to the people who are forced to move from their motherland. Most of the people displaced by development are poor or belong to tribes. Development and displacement are interlinked in the sense that development sometimes leads to the displacement of people. This study mainly focuses on the socio-economic profile of the villages and villagers likely to be affected by the airport project, and it examines the issues of compensation and people’s level of satisfaction. Methodology: The study is based on a descriptive design; it is basically an observational and correlational study. Primary data is used in this study. Considering time and resource constraints, 100 people covering socio-economic and demographic diversities were interviewed from 6 of the 10 affected villages. Due to the Navi Mumbai International Airport Project, ten villages have to be displaced. Out of these ten villages, the study is based on only six. These are Ulwe, Ganeshpuri, Targhar Komberbuje, Chincpada and Kopar. All six villages are situated in Raigarh district, under Panvel Taluka in Maharashtra. Findings: The survey revealed three main castes in the affected villages: Agri, Koli, and Kradi. The migrant population in these villages is negligible. The main occupations of all three castes are agriculture and fishing. People’s perceptions revealed that, due to the establishment of the airport project, they may have more opportunities and scope for development than adverse effects, but being forced to leave their motherland has a psychological effect on the villagers. Research limitation: This study is based on only six villages; the scenario of all ten affected villages is not covered by this research.
Practical implication: The scenario of displacement and resettlement signifies more than mere physical relocation. Compensation is not the only hope for villagers; it gives only short-term relief. There is a need to evolve institutions to protect and strengthen the rights of individuals. Development-induced displacement exposed them to a new reality: the reality of the legality or illegality of their stay on land which belongs to the state. Originality: Mumbai’s large population and high degree of industrialization have put land at the center of any policy implication. This paper demonstrates, through the actual picture gathered from the field, how seriously the affected people suffered and are still suffering because of the land acquisition for the Navi Mumbai International Airport Project. The whole picture raises the question of how long the government can deny the rights of farmers and agricultural laborers and remain unwilling to establish a balance between democracy and development.
Keywords: compensation, displacement, land acquisition, project affected person (PAPs), rehabilitation
Procedia PDF Downloads 317
772 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies
Authors: Pragati Sirohi, Vivek Singh Rana
Abstract:
A brand is a name, term, sign, symbol or design, or a combination of these, which is intended to identify the goods or services of one seller or a group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low-margin business; volumes hold the key to success. Therefore, the industry places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle. Companies know this, and that is why they work relentlessly towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that after liberalization Indian companies have taken up the challenge of globalization and that some of them are giving stiff competition to MNCs; that MNCs have a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time-series data is available from 1996-2014 for the 8 selected companies. These companies were selected on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Estimation of the brand values of the selected FMCG companies is attempted, taking brand value to be the excess of market capitalization over the net worth of a company. Brand value indices are calculated.
Correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies, such as ITC, are outstanding leaders in terms of market capitalization and brand value. Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditures are the highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors affect it. Also, brand values are decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistently proactive and relentless effort into their brand-building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.
Keywords: brand value, FMCG, market capitalization, net worth
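The brand-value estimate used in the abstract above (the excess of market capitalization over net worth, expressed as an index) can be sketched as follows. The yearly figures below are invented placeholders, not data from the study:

```python
# Illustrative sketch of the brand-value estimation described above:
# brand value = market capitalization - net worth, indexed against a
# base year. All figures are made-up placeholders.

def brand_value(market_cap, net_worth):
    """Brand value as the excess of market capitalization over net worth."""
    return market_cap - net_worth

def brand_value_index(values, base_year):
    """Index each year's brand value against the base year (base = 100)."""
    base = values[base_year]
    return {year: 100.0 * v / base for year, v in values.items()}

# Hypothetical yearly figures (arbitrary currency units)
market_cap = {1996: 500.0, 2005: 900.0, 2014: 1400.0}
net_worth  = {1996: 300.0, 2005: 450.0, 2014: 700.0}

values = {y: brand_value(market_cap[y], net_worth[y]) for y in market_cap}
index = brand_value_index(values, base_year=1996)
print(values)  # {1996: 200.0, 2005: 450.0, 2014: 700.0}
print(index)   # {1996: 100.0, 2005: 225.0, 2014: 350.0}
```

A declining index over the years would correspond to the falling brand values the study reports for Indian FMCG companies.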
Procedia PDF Downloads 356
771 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After the operation of different energy systems is calculated in simulation models in a first stage, in terms of the resulting final energy demands, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but it necessitates simplifying the building energy system operation.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions for building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
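The discrete design-choice problem described above can be illustrated with a toy budget-constrained selection: pick at most one modernization measure per building so that total cost stays within the budget and total emissions are minimized. This brute-force sketch is a stand-in for the MILP formulation, not the authors’ model; the building names, measures, costs, and emission figures are all invented:

```python
# Toy stand-in for the budget-constrained modernization problem:
# choose one option per building (or leave it as-is), respect a yearly
# budget, and minimize total emissions. All numbers are invented.
from itertools import product

# Each option is (measure name or None, cost, resulting yearly emissions).
measures = {
    "building_A": [(None, 0.0, 40.0), ("insulation", 50.0, 25.0), ("heat_pump", 80.0, 10.0)],
    "building_B": [(None, 0.0, 30.0), ("insulation", 40.0, 20.0), ("heat_pump", 70.0, 8.0)],
}

def best_plan(measures, budget):
    """Enumerate all measure combinations and keep the feasible one with
    the lowest total emissions (ties broken by lower cost)."""
    best = None
    buildings = list(measures)
    for combo in product(*(measures[b] for b in buildings)):
        cost = sum(c for _, c, _ in combo)
        emissions = sum(e for _, _, e in combo)
        if cost <= budget and (best is None or (emissions, cost) < best[:2]):
            best = (emissions, cost, dict(zip(buildings, (m for m, _, _ in combo))))
    return best

emissions, cost, plan = best_plan(measures, budget=100.0)
print(plan)  # {'building_A': 'heat_pump', 'building_B': None}
```

A real MILP formulation would express the same choice with binary variables and linear constraints so that an efficient solver can handle portfolios far too large for enumeration.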
Procedia PDF Downloads 34
770 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen
Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev
Abstract:
The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. We used several steps of data analysis to classify and identify microbes. All single particles were analysed by the parameters of light scattering and fluorescence in the following steps: (1) they were treated with a smart filter block to remove non-microbes; (2) a classification algorithm verified that the filtered particles were microbes, based on the calibration data; (3) a probability-threshold step (with the threshold defined by the user) provides the probability of a particle being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ identified microbes simultaneously, based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification suggests a precision of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively.
The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time. The bacteria and fungi were aerosolized again in the chamber at different concentrations. Rapid-E+ can classify different types of microbes and then quantify them in real time. It can also identify pollen down to the species level, with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms
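The user-defined probability-threshold step described above can be sketched as a simple filter over per-particle classifier outputs: a stricter threshold keeps fewer, higher-confidence particles. The probabilities below are invented for illustration; Plair’s actual models and thresholds are proprietary:

```python
# Toy sketch of the probability-threshold step: a classifier assigns each
# particle a probability of being a microbe, and a user-defined threshold
# decides which particles are counted. Probabilities here are invented.

def classify_particles(probabilities, threshold=0.5):
    """Keep only particles whose microbe probability meets the threshold."""
    return [p for p in probabilities if p >= threshold]

particle_probs = [0.99, 0.80, 0.35, 0.10, 0.92]  # hypothetical outputs

strict = classify_particles(particle_probs, threshold=0.9)
lenient = classify_particles(particle_probs, threshold=0.3)
print(len(strict), len(lenient))  # 2 4
```

Raising the threshold trades sensitivity for specificity, which is why the abstract leaves the choice to the user.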
Procedia PDF Downloads 90
769 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago
Authors: Miguel Angel Calvo Salve
Abstract:
This qualitative and quantitative study will identify the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (Saint James Way) and propose an approach to a sustainable tourism model for these Cultural Routes. Since 1993, the Spanish section of the Pilgrim Route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) initiated its work with the goal of studying, preserving, and promoting cultural routes and their significance as a whole. The 2008 ICOMOS Charter on Cultural Routes pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets. Tangible elements provide a physical confirmation of the existence of these cultural routes, while the intangible elements serve to give sense and meaning to them as a whole. The intangible assets of a Cultural Route are key to understanding the route’s significance and its associated heritage values. Like many pilgrim routes, the Route to Santiago, as the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, together with tourism pressure, has shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. The economic benefits for communities and populations along cultural routes are commonly fundamental for the micro-economies of the people living there, substituting for traditional productive activities, which in turn modifies and has an impact on the surrounding environment and the route itself.
Consumption of heritage is one of the major issues for the sustainable preservation promoted with the intention of revitalizing these sites and places. The adaptation of local communities to new conditions, aimed at preserving and protecting existing heritage, has had a significant impact on this immaterial inheritance. Based on questionnaires given to pilgrims, tourists and local communities along El Camino during the peak season of the year, and using official statistics from the Galician Pilgrim’s Office, this study will identify the risks and threats to El Camino de Santiago as a Cultural Route. The threats visible nowadays due to the impact of mass tourism include transformations of tangible heritage, consumerism of the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and pilgrimage transformed into a tourism ‘product’, among others. The study will also approach measures and solutions to mitigate those impacts and better preserve this type of cultural heritage. It will therefore help the Route’s service providers and policymakers to better preserve the Cultural Route as a whole and ultimately improve the experience of pilgrims.
Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage
Procedia PDF Downloads 83
768 The Role of Islamic Finance and Socioeconomic Factors in Financial Inclusion: A Cross Country Comparison
Authors: Allya Koesoema, Arni Ariani
Abstract:
While religion is only a very minor factor contributing to financial exclusion in most countries, the World Bank 2014 Global Financial Development Report highlighted it as a significant barrier to having a financial account in some Muslim-majority countries. This is in part due to the perceived incompatibility between traditional financial institutions’ practices and Islamic finance principles. In these cases, the development of financial institutions and products that are compatible with the principles of Islamic finance may act as an important lever for increasing formal account ownership. However, there is significant diversity in the relationship between a country’s proportion of Muslim population and its level of financial inclusion. This paper combines data taken from the Global Findex Database, World Development Indicators, and the Pew Research Center to quantitatively explore the relationship between individual- and country-level religious and socioeconomic factors and financial inclusion. Results from regression analyses show a complex relationship between financial inclusion and religion-related factors at both the individual and country level. Consistent with prior literature, on average the percentage of Muslim population correlates positively with the proportion of the unbanked population who cite religious reasons as a barrier to getting an account. However, its impact varies across several variables. First, a deeper look into countries’ religious composition reveals that the average negative impact of a large Muslim population is not as strong in more religiously diverse countries and in less religious countries. Second, on the individual level, among the unbanked, the poorest quintile, the least educated, the older and the female populations are comparatively more likely not to have an account because of religious reasons.
Results also indicate that informal mechanisms partially substitute for formal financial inclusion in this case, as indicated by the propensity to borrow from family and friends. The individual-level findings are important because the demographic groups that are more likely to cite religious reasons as barriers to formal financial inclusion are also generally perceived to be more socially and economically vulnerable and may need targeted attention. Finally, the number of Islamic financial institutions in a particular country is negatively correlated with the propensity to cite religious reasons as a barrier to financial inclusion. Importantly, the number of such institutions in a country also mitigates the negative impact of the proportion of Muslim population, low education and age on formal financial inclusion. These results point to the potential importance of Islamic finance institutions in increasing global financial inclusion, and highlight the importance of looking beyond the proportion of Muslim population to other underlying institutional and socioeconomic factors in maximizing their impact.
Keywords: cross country comparison, financial inclusion, Islamic banking and finance, quantitative methods, socioeconomic factors
Procedia PDF Downloads 192
767 Mega Sporting Events and Branding: Marketing Implications for the Host Country’s Image
Authors: Scott Wysong
Abstract:
Qatar will spend billions of dollars to host the 2022 World Cup. While football fans around the globe get excited to cheer on their favorite team every four years, critics debate the merits of a country hosting such an expensive and large-scale event. That is, host countries spend billions of dollars on stadiums and infrastructure to attract these mega sporting events in the hope of equitable returns in economic impact and job creation; yet, in many cases, the host countries are left in debt with decaying venues. There are benefits beyond the economic impact of hosting mega-events; for example, citizens are often proud that their city or country hosts these famous events. Yet often overlooked in the literature is the proposition that serving as the host of a mega-event may enhance the country’s brand image, not only as a tourist destination but also for the products made in that country of origin. This research explores this phenomenon by taking an exploratory look at consumer perceptions of three host countries of mega sporting events. In 2014, U.S., Chinese and Finnish consumer attitudes toward Brazil and its products were measured before and after the World Cup via surveys (n=89). An analysis of variance (ANOVA) revealed no statistically significant differences in the pre- and post-World Cup perceptions of Brazil’s brand personality or country-of-origin image. After the 2018 World Cup, qualitative interviews were held with U.S. sports fans (n=17) to further explore consumer perceptions of products made in the host country: Russia. A consistent theme of distrust and corruption associated with Russian products emerged despite Russia’s hosting of this prestigious global event. In late 2021, U.S. football (soccer) fans (n=42) and non-fans (n=37) were surveyed about the upcoming 2022 World Cup.
A regression analysis revealed that how much an individual identified as a soccer fan did not significantly influence their desire to visit Qatar or to try products from Qatar in the future, even though the country was hosting the World Cup. In the end, hosting a mega-event as grand as the World Cup showcases the country to the world, but it seems to have little impact on consumer perceptions of the country as a whole or of its brands. That is, the World Cup appeared to reinforce pre-existing stereotypes about Brazil (e.g., beaches, partying and fun, yet with crime and poverty), Russia (e.g., cold weather, vodka and business corruption) and Qatar (desert and oil). Moreover, across all three countries, respondents could rarely name a brand from the host country. Because mega-events cost a great deal of time and money, countries need to do more to market themselves and their brands when hosting. In addition, these countries would be wise to measure the impact of the event from different perspectives. Hence, we put forth a comprehensive future research agenda to further the understanding of how countries, and their brands, can benefit from hosting a mega sporting event.
Keywords: branding, country-of-origin effects, mega sporting events, return on investment
Procedia PDF Downloads 281
766 The Effect of Fish and Krill Oil on Warfarin Control
Authors: Rebecca Pryce, Nijole Bernaitis, Andrew K. Davey, Shailendra Anoopkumar-Dukie
Abstract:
Background: Warfarin is an oral anticoagulant widely used in the prevention of strokes in patients with atrial fibrillation (AF) and in the treatment and prevention of deep vein thrombosis (DVT). Regular monitoring of the International Normalised Ratio (INR) is required to ensure therapeutic benefit, with time in therapeutic range (TTR) used to measure warfarin control. A number of factors influence TTR, including diet, concurrent illness, and drug interactions. Extensive literature exists regarding the effect of conventional medicines on warfarin control, but documented interactions relating to complementary medicines are limited. It has been postulated that fish oil and krill oil supplementation may affect warfarin because of their association with bleeding events. However, to date little is known as to whether fish and krill oil significantly alter the incidence of bleeding with warfarin or impact warfarin control. Aim: To assess the influence of fish oil and krill oil supplementation on warfarin control in AF and DVT patients by determining the influence of these supplements on TTR and bleeding events. Methods: A retrospective cohort analysis was conducted utilising patient information from a large private pathology practice in Queensland. AF and DVT patients receiving warfarin management by the pathology practice were identified and their TTR calculated using the Rosendaal method. Concurrent medications were analysed, and patients taking no other interacting medicines were identified and divided into users of fish oil and krill oil supplements and those taking no supplements. Study variables included TTR and the incidence of bleeding, with less than 30 days of treatment with warfarin as the exclusion criterion. Subject characteristics were reported as mean and standard deviation for continuous data and as numbers and percentages for nominal or categorical data.
Data was analysed using GraphPad InStat Version 3, with a p value of <0.05 considered statistically significant. Results: Of the 2081 patients assessed for inclusion in this study, a total of 573 warfarin users met the inclusion criteria. Of these, 416 (72.6%) were AF patients and 157 (27.4%) DVT patients; overall there were 316 (55.1%) male and 257 (44.9%) female patients. 145 patients were included in the fish oil/krill oil (supplement) group and 428 in the control group. The mean TTR of supplement users was 86.9% and of the control group 84.7%, with no significant difference between these groups. Control patients experienced 1.6 times the number of minor bleeds per person compared to supplement patients, and 1.2 times the number of major bleeds per person; however, these differences were not statistically significant, nor was the comparison of thrombotic events. Conclusion: No significant difference was found between supplement and control patients in terms of mean TTR, number of bleeds or thrombotic events. Fish oil and krill oil supplements used concurrently with warfarin do not significantly affect warfarin control as measured by TTR and bleeding incidence.
Keywords: atrial fibrillation, deep vein thrombosis, fish oil, krill oil, warfarin
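The Rosendaal method mentioned above interpolates INR values linearly between clinic visits and counts the share of interpolated days that fall inside the therapeutic range. A minimal sketch, assuming daily interpolation and a therapeutic range of 2.0-3.0 (a common AF target; the abstract does not state the range used), with an invented monitoring record:

```python
# Sketch of the Rosendaal linear-interpolation method: INR values between
# visits are interpolated day by day, and TTR is the percentage of
# interpolated days inside the therapeutic range (assumed 2.0-3.0 here).

def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """days: measurement days in ascending order; inrs: INR at each visit."""
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        for d in range(span):
            inr = i0 + (i1 - i0) * d / span  # linear interpolation
            total += 1
            if low <= inr <= high:
                in_range += 1
    return 100.0 * in_range / total

# Hypothetical monitoring record: visit days and measured INRs
days = [0, 10, 20]
inrs = [2.0, 3.0, 4.0]
print(round(rosendaal_ttr(days, inrs), 1))  # 55.0
```

Mean TTRs such as the 86.9% and 84.7% reported above would be averages of this per-patient percentage across each group.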
Procedia PDF Downloads 305
765 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection
Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan
Abstract:
Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected in both invasive and non-invasive ways, but the diagnostic accuracy is not up to the mark. Aim: To set up an optimal diagnostic cut-off value for the 13C-Urea Breath Test (13C-UBT) to detect H. pylori infection and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by a rapid urease test and histopathology; depending on these results, we classified 49 subjects as H. pylori positive and 34 as negative. After an overnight fast, patients took 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of this test meal a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was given, and breath samples were collected for up to 90 minutes at 15-minute intervals and analysed by laser-based, high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of the 13C isotope enrichment (expressed as δDOB13C ‰) of the exhaled breath samples and found maximum enrichment around 30 minutes for H. pylori positive patients; this is due to acid-stimulated urease enzyme activity, with maximum acidification occurring within 30 minutes. No such significant isotopic enrichment was observed for H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we then explored the percentage dose of 13C recovered per hour, i.e. 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, i.e. c-PDR (%), in exhaled breath samples for the present 13C-UBT.
We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values; an optimal cut-off value was estimated to be c-PDR = 1.47 (%) at 60 minutes, exhibiting 100% diagnostic sensitivity, 100% specificity and 100% accuracy of the 13C-UBT for detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes and the half-emptying time at 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem in the 13C-UBT for accurate determination of infection without risk of diagnostic errors, making it a sufficiently robust and novel method for accurate and fast non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori
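The ROC-based cut-off selection described above can be sketched as follows. Youden’s J statistic (sensitivity + specificity - 1) is one common criterion for picking the operating point; the abstract does not state which criterion was used, and the labeled values below are invented, not the study’s data:

```python
# Sketch of choosing an optimal diagnostic cut-off from labeled test
# values, as in a ROC analysis. Youden's J (sensitivity + specificity - 1)
# is used here as one common criterion; the data are invented.

def best_cutoff(values, labels):
    """Return the candidate cut-off maximizing Youden's J.
    labels: 1 = infected (positive), 0 = not infected (negative)."""
    best_j, best_c = -1.0, None
    for c in sorted(set(values)):
        tp = sum(1 for v, l in zip(values, labels) if l == 1 and v >= c)
        fn = sum(1 for v, l in zip(values, labels) if l == 1 and v < c)
        tn = sum(1 for v, l in zip(values, labels) if l == 0 and v < c)
        fp = sum(1 for v, l in zip(values, labels) if l == 0 and v >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Hypothetical c-PDR (%) values at 60 min with endoscopy-based labels
values = [0.2, 0.5, 0.9, 1.5, 2.1, 3.0]
labels = [0, 0, 0, 1, 1, 1]
cutoff, j = best_cutoff(values, labels)
print(cutoff, j)  # 1.5 1.0
```

With perfectly separated groups, as in this toy data, J reaches 1.0, corresponding to the 100% sensitivity and specificity the study reports at its c-PDR cut-off.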
Procedia PDF Downloads 301
764 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level
Authors: Matthias Stange
Abstract:
Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size, different application regions, and different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions of North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive analysis of six independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on six dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated.
The study revealed that most of the benefits of using BIM perceived in numerous qualitative studies could not be confirmed. The results for the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests that the BIM planning method in its current application has not (yet) delivered a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. As a quintessence, the author suggests that further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry
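The kind of dependence examined here, the project-related BIM maturity level against a dependent variable such as schedule deviation, can be sketched with a plain Pearson correlation. This is a minimal sketch on hypothetical project data, not the study's dataset or its full statistical procedure.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical projects: BIM maturity level vs. schedule deviation (%)
maturity = [1, 1, 2, 2, 3, 3, 4, 5]
schedule_dev = [12, 9, 10, 8, 9, 7, 8, 6]
r = pearson_r(maturity, schedule_dev)   # negative r would suggest higher
                                        # maturity goes with smaller deviation
```

A significance test on r (or a regression across all six dependent variables) would then decide whether the association is statistically meaningful, which is the step the abstract reports as inconclusive for its sample.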
Procedia PDF Downloads 73
763 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive
Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu
Abstract:
Diamond-reinforced Ni matrix composite has been widely applied in engineering for coating large-area structural parts owing to its high hardness and good wear and corrosion resistance compared with pure nickel. The mechanical properties of Ni-diamond composite coatings can be promoted by high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of the particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings, where it can effectively increase coating hardness. Nevertheless, to the authors' best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the SEM morphology of the composite coatings was observed in combination with energy-dispersive X-ray spectrometry, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained with a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. The microhardness was tested with an automatic microhardness tester. The friction coefficient and wear volume were measured with a reciprocating wear tester to characterize the coating's wear resistance and cutting performance. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings.
By optimizing the glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine concentration. The hardness of the composite coatings increased as the glycine concentration increased. The friction and wear properties improved as the glycine concentration was optimized, showing a decrease in wear volume. The wear resistance of the composite coatings increased as the glycine content was raised to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to nickel grain refinement and improved diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and hence increasing the wear resistance of the composite coatings. Therefore, the additive glycine can be used during the co-deposition process to improve the mechanical properties of protective coatings.
Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings
Procedia PDF Downloads 125
762 Rational Approach to Analysis and Construction of Curved Composite Box Girders in Bridges
Authors: Dongming Feng, Fangyin Zhang, Liling Cao
Abstract:
Horizontally curved steel-concrete composite box girders are extensively used in highway bridges. They consist of a reinforced concrete deck on top of a prefabricated steel box-section beam, which exhibits high torsional rigidity to resist the torsional effects induced by the curved structural geometry. This type of structural system is often constructed in two stages. In the composite section, tension is carried mainly by the steel box and compression by the concrete deck. The steel girders are delivered in large prefabricated U-shaped sections designed for ease of construction. They are then erected on site and overlaid by a cast-in-place reinforced concrete deck. The functionality of the composite section is not achieved until the closed section is formed by fully cured concrete. Since this kind of composite section is built in two stages, the erection of the open steel box presents some challenges to contractors. When the reinforced concrete slab is cast in place, special care should be taken with the bracing that prevents the open U-shaped steel box from global and local buckling. In the case of multiple steel boxes, the design detailing should pay sufficient attention to the installation requirements of the bracing connecting adjacent steel boxes to prevent global buckling. The slope in the transverse direction and the grade in the longitudinal direction will result in some local deformation of the steel boxes that affects the connection of the bracing. During the design phase, it is common for engineers to model the curved composite box girder using one-dimensional beam elements. This is adequate to analyze the global behavior; however, it is unable to capture the local deformation that affects the installation of the field bracing connections.
This local deformation may become critical to controlling construction tolerances, and overlooking it will produce inadequate structural details that eventually cause misalignment in the field and erection failure. This paper briefly describes the construction issues we encountered in real structures and investigates the difference between beam-element modeling and shell/solid-element modeling, and their impact on the different construction stages. The P-delta effect due to the slope and curvature of the composite box girder is analyzed, and the second-order deformation is compared to the first-order response and evaluated for its impact on the installation of lateral bracing. The paper discusses a rational approach to preparing construction documents, and recommendations are made on communication between engineers, erectors, and fabricators to smooth out the construction process.
Keywords: buckling, curved composite box girder, stage construction, structural detailing
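The P-delta effect mentioned above can be illustrated with the classical second-order moment amplification factor 1 / (1 - P/Pcr) for an axially loaded member, with Pcr the Euler buckling load. This is a textbook sketch on hypothetical member properties, not the paper's finite-element analysis.

```python
import math

def euler_buckling_load(E, I, K, L):
    """Euler critical load P_cr = pi^2 * E * I / (K * L)^2
    for effective-length factor K."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def p_delta_amplifier(P, P_cr):
    """First-order moments are amplified roughly by 1 / (1 - P/P_cr)."""
    if P >= P_cr:
        raise ValueError("axial load at or above buckling load")
    return 1.0 / (1.0 - P / P_cr)

# Hypothetical member: E = 200 GPa, I = 2e-4 m^4, pinned ends (K = 1), L = 10 m
P_cr = euler_buckling_load(200e9, 2e-4, 1.0, 10.0)
amp = p_delta_amplifier(0.25 * P_cr, P_cr)   # axial load at 25% of P_cr
```

At a quarter of the buckling load the first-order response is already amplified by about a third, which is why second-order deformation can govern bracing fit-up tolerances even when strength checks pass.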
Procedia PDF Downloads 122
761 Development of Loop-Mediated Isothermal Amplification (LAMP) Assay for the Diagnosis of Ovine Theileriosis
Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali
Abstract:
Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas with abundant ticks, yet it has received little attention in both developed and developing regions because of the low economic value of sheep and the low-to-moderate level of infection in small-ruminant herds. Across Asia, prevalence studies have been conducted to provide comparable estimates of flock- and animal-level prevalence of theileriosis. Timely diagnosis and control of theileriosis is a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of existing control plans. Most work is therefore directed at developing a technique that is farmer-friendly, inexpensive, and easy to perform in the field; timely diagnosis of this disease will decrease the irrational use of drugs. A further aim was to determine the prevalence of theileriosis in District Jhang using the conventional method (Giemsa staining), PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from Jhang district, Punjab, Pakistan. In this study, we found an overall prevalence of theileriosis in sheep of 9.1% (32/350) by the Giemsa staining technique, 13.7% (48/350) by PCR, 16% (56/350) by qPCR, and 17.1% (60/350) by LAMP. The specificity and sensitivity were also calculated by comparing PCR and LAMP; more positive results were obtained when diagnosis was performed with LAMP. There was little difference between the positive results of PCR and qPCR, and the fewest positive animals were detected by the conventional Giemsa staining technique.
Regarding the specificity and sensitivity of LAMP compared to PCR, the cross-tabulation shows that the sensitivity of LAMP was 94.4% and its specificity was 78%. Advances in science must be built on reality-based ideas that can lessen the gaps and hurdles in the way of research, and LAMP is one such technique. It is a powerful biological diagnostic tool and has helped greatly in the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, expose personnel to considerable pathogen risk; with a molecular diagnostic technique like LAMP, such exposure is avoided in the current era, and a prompt, presumptive diagnosis can be made. Compared to LAMP, PCR has several disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed hurdles in the way of scientific research and molecular diagnostics, making them approachable for poor and developing countries.
Keywords: distribution, theileria, LAMP, primer sequences, PCR
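For illustration, the sensitivity and specificity reported from the cross-tabulation follow directly from a 2x2 table of test results against the reference method. The counts below are hypothetical, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 cross-tabulation
    of an index test (e.g. LAMP) against a reference test (e.g. PCR)."""
    sensitivity = tp / (tp + fn)          # reference-positives detected
    specificity = tn / (tn + fp)          # reference-negatives cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a sample of 350 animals
sens, spec, acc = diagnostic_metrics(tp=45, fp=15, fn=3, tn=287)
```

With the study's actual cross-tabulation counts in place of these, the same two ratios yield the reported 94.4% sensitivity and 78% specificity.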
Procedia PDF Downloads 103
760 An Integrated Theoretical Framework on Mobile-Assisted Language Learning: User’s Acceptance Behavior
Authors: Gyoomi Kim, Jiyoung Bae
Abstract:
In the field of language education research, few attempts have been made to empirically examine learners’ acceptance behavior and the related factors of mobile-assisted language learning (MALL). This study is one of the few attempts to propose an integrated theoretical framework that explains MALL users’ acceptance behavior and its potential factors. Constructs from the technology acceptance model (TAM) and MALL research are tested in the integrated framework. Based on previous studies, a hypothetical model was developed. Four external variables related to MALL users’ acceptance behavior were selected: subjective norm, content reliability, interactivity, and self-regulation. The model also comprised four other constructs: two latent variables, perceived ease of use and perceived usefulness, were considered cognitive constructs; attitude toward MALL an affective construct; and behavioral intention to use MALL a behavioral construct. The participants were 438 undergraduate students enrolled in an intensive English program at a university in Korea. This particular program was held in January 2018, during the vacation period. The students were given eight hours of English classes each day from Monday to Friday for four weeks and were asked to complete MALL courses for practice outside the classroom; therefore, all participants experienced a blended MALL environment. The instrument was a self-report questionnaire, and each construct was measured by five questions. Once the questionnaire was developed, it was distributed at the closing ceremony of the intensive program in order to collect data from a large number of participants at once. The data showed significant evidence supporting the hypothetical model. The results, confirmed through structural equation modeling analysis, are as follows: First, the four external variables (subjective norm, content reliability, interactivity, and self-regulation) significantly affected perceived ease of use.
Second, subjective norm, content reliability, self-regulation, and perceived ease of use significantly affected perceived usefulness. Third, perceived usefulness and perceived ease of use significantly affected attitude toward MALL. Fourth, attitude toward MALL and perceived usefulness significantly affected behavioral intention to use MALL. These results imply that the integrated framework from TAM and MALL research can be useful when introducing a MALL environment to university students or adult English learners. All key constructs except interactivity showed significant relationships with one another and had direct and indirect impacts on MALL users’ acceptance behavior. Therefore, the constructs and validated metrics are valuable for language researchers and educators interested in MALL.
Keywords: blended MALL, learner factors/variables, mobile-assisted language learning, MALL, technology acceptance model, TAM, theoretical framework
Procedia PDF Downloads 238
759 A Bayesian Approach for Health Workforce Planning in Portugal
Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro
Abstract:
Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, planning of the health workforce is particularly important: it ensures a proper balance between the supply of and demand for these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, health workforce planning in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected future shortage of the health workforce. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses): in 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, together with increasing emigration of young health professionals and changes in citizens’ illness profiles and expectations, must be considered when planning healthcare resources. The prospect of sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project – which aims to estimate the ‘Future needs of human health resources in Portugal till 2040’ – the present study adopts a comprehensive, dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by quinquennium, until 2040; (ii) identifying the training needs of physicians and nurses in the medium and long term, until 2040; and (iii) estimating the number of students that must be admitted into medical and nursing training each year, considering the different specialties. Developing such an approach is all the more critical in a context of limited budget resources and changing healthcare needs. The study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of health system users) and proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results for different plausible scenarios are presented. The proposed methodology will be integrated in a user-friendly decision support system so that it can be used by policymakers, with the potential to measure the impact of health policies at both the regional and the national level.
Keywords: Bayesian estimation, health economics, health workforce planning, human health resources planning
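The step of combining available data with expert opinion can be sketched with the simplest Bayesian building block: a conjugate normal-normal update, where the prior encodes expert opinion on annual physician demand and the observations are recent demand figures. All numbers below are hypothetical, and the project's actual model is certainly richer than this sketch.

```python
def normal_update(prior_mean, prior_var, data, data_var):
    """Conjugate normal-normal update: posterior for an unknown mean,
    assuming a known observation variance."""
    n = len(data)
    xbar = sum(data) / n
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + n * xbar / data_var)
    return post_mean, post_var

# Expert prior (hypothetical): ~1000 new physicians needed per year, sd 100
# Observed demand (hypothetical): three recent years, observation sd 150
post_mean, post_var = normal_update(1000.0, 100.0 ** 2,
                                    [1150, 1200, 1100], 150.0 ** 2)
```

The posterior mean lands between the expert prior and the sample mean, weighted by their precisions, and the posterior variance shrinks as data accumulate, which is exactly the behaviour a planning model wants when yearly figures arrive.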
Procedia PDF Downloads 252
758 Secure Optimized Ingress Filtering in Future Internet Communication
Authors: Bander Alzahrani, Mohammed Alreshoodi
Abstract:
Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Technology (PURSUIT), has been proposed as a new networking model aimed at replacing the currently used end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, allowing control-plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technology to forward packets. Although this forwarding scheme solves many problems of today’s Internet, such as routing-table growth and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end-user nodes and used to perform a security test for maliciously injected random packets at the ingress of the path, preventing possible brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId, which is included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network.
We optimize the FId to the minimum possible filling factor, with ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. Preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
Keywords: forwarding identifier, filling factor, information centric network, topology manager
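A minimal sketch of the filling-factor check is given below, assuming a 256-bit Bloom-filter FId: each link on the delivery path is OR-ed into the FId, the filling factor ρ is the fraction of set bits, an edge node drops packets whose ρ exceeds ρm, and the 64-bit authentication hash is approximated with truncated SHA-256. The FId width, ρm, the number of bits per link, and the link names are all illustrative assumptions, not PURSUIT's actual parameters.

```python
import hashlib

FID_BITS = 256          # Bloom-filter forwarding identifier width (illustrative)
RHO_MAX = 0.5           # maximum allowed filling factor rho_m (illustrative)
K_HASHES = 5            # bit positions set per link identifier

def link_bits(link_name):
    """Map a link name to K_HASHES bit positions (Bloom-filter style)."""
    digest = hashlib.sha256(link_name.encode()).digest()
    return {int.from_bytes(digest[2 * i: 2 * i + 2], "big") % FID_BITS
            for i in range(K_HASHES)}

def build_fid(path_links):
    """OR all link identifiers on the delivery path into one FId."""
    fid = 0
    for link in path_links:
        for b in link_bits(link):
            fid |= 1 << b
    return fid

def filling_factor(fid):
    """Fraction of set bits in the FId."""
    return bin(fid).count("1") / FID_BITS

def fid_auth_tag(fid):
    """64-bit authentication hash over the FId (truncated SHA-256 sketch)."""
    return hashlib.sha256(fid.to_bytes(FID_BITS // 8, "big")).digest()[:8]

fid = build_fid(["linkA", "linkB", "linkC"])   # hypothetical path
rho = filling_factor(fid)
accepted = rho <= RHO_MAX      # an Edge-FW node drops over-filled FIds
tag = fid_auth_tag(fid)
```

The point of the check is visible here: a legitimate three-link FId stays far below ρm, whereas a brute-force FId needs many set bits to match forwarding links by chance, and that very density is what the ingress test rejects.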
Procedia PDF Downloads 154
757 The Problems of Women over 65 with Incontinence Diagnosis: A Case Study in Turkey
Authors: Birsel Canan Demirbag, Kıymet Yesilcicek Calik, Hacer Kobya Bulut
Abstract:
Objective: This study was conducted to evaluate the problems of women over 65 with an incontinence diagnosis. Methods: This descriptive study was conducted with women over 65 diagnosed with incontinence in four Family Health Centers in a city in the Eastern Black Sea region between November 1 and December 20, 2015. In these centers, 203, 107, 178, and 180 women over 65 were registered; 262 had been diagnosed with incontinence at least once and had an ongoing complaint, and 177 women volunteered for the study. During home visits, using a face-to-face survey methodology, participants completed a socio-demographic characteristics survey, the Sandvik severity scale, the Incontinence Quality of Life Scale, the Urogenital Distress Inventory, and a questionnaire on challenges experienced due to incontinence developed by the researchers. Data were analyzed with SPSS using percentages, numbers, chi-square, Mann-Whitney U, and t tests, with a 95% confidence interval and a significance level of p < 0.05. Findings: The mean age was 67 ± 1.4 years, mean parity 2.05 ± 0.04, and mean menopause age 44.5 ± 2.12 years. 66.3% were primary school graduates, 45.7% were widowed, 44.4% lived in an extended family, 67.2% had their own room, 77.8% had an income, and 89.2% could manage their own self-care. 73.2% had a diagnosis of mixed incontinence, 87.5% had suffered for 6-20 years, 78.2% used diuretics, antidepressants, or heart medicines, 20.5% had combined urinary and fecal incontinence, 80.5% had received bladder training at least once, 90.1% had no bladder diary or control training program, 31.1% had undergone hysterectomy for prolapse, 97.1% had been treated for lower urinary tract infection at least once, 66.3% had seen a doctor for medication in the last three months, 76.2% could not go out alone, 99.2% had at least one chronic disease, 87.6% complained of constipation, 2.9% had chronic cough, and 45.1% had fallen when rising suddenly to go to the toilet.
The mean Incontinence Impact Questionnaire (QOL) score was 54.3 ± 21.1, the Sandvik score 12.1 ± 2.5, and the Urogenital Distress Inventory score 47.7 ± 9.2. Difficulties experienced due to incontinence were: feeling of unhappiness (99.5%), constant feeling of urine smell due to being unable to change briefs frequently (67.1%), withdrawal from social life (87.2%), inability to use pads (89.7%), feeling of disturbing household members or other individuals (99.2%), dizziness or falls due to rising suddenly (87.5%), feeling that others did not understand their situation (87.4%), insomnia (94.3%), lack of assistance (78.2%), and inability to afford protective briefs (84.7%). Results: This study showed that there are many unsolved issues at the individual and community level affecting the quality of life of women with incontinence. Given how common this problem is among women, it is clear that regular institutional home-care training programs in our country would be effective in facilitating daily life.
Keywords: health problems, incontinence, incontinence quality of life questionnaire, old age, urogenital distress inventory, Sandvik severity, women
Procedia PDF Downloads 321
756 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. Combining this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the generalization of absorption technology, limiting its benefits in reducing CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work, a promising new technology is studied, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The proposed configuration consists of one or several modules (depending on the cooling capacity of the chiller) containing two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels, and a plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be determined explicitly from the set of equations obtained; for this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™.
With the aim of minimizing absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation along the solution channels is shown, allowing its optimization for selected operating conditions. For the case considered, a solution channel length below 3 cm is recommended. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and of the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained with flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
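Why a short channel maximizes R can be illustrated with a deliberately crude toy model, in no way the paper's EES model or its transfer correlations: assume the absorbed vapour flow, and hence cooling power Q, saturates exponentially along the channel while the absorber volume V grows linearly with length. The saturation length and channel geometry below are invented for illustration only.

```python
import math

Q_MAX = 500.0                 # W, asymptotic cooling power (illustrative)
L_SAT = 0.01                  # m, absorption saturation length (illustrative)
WIDTH, HEIGHT = 0.1, 0.002    # m, solution channel cross-section (illustrative)

def cooling_power(L):
    """Absorbed vapour, and hence cooling power, saturates along the channel."""
    return Q_MAX * (1.0 - math.exp(-L / L_SAT))

def ratio_R(L):
    """R = cooling power / absorber volume (W/m^3)."""
    return cooling_power(L) / (WIDTH * HEIGHT * L)

# Shortest channel that still captures 95% of the achievable vapour flow;
# beyond this point volume keeps growing while Q barely increases, so R falls.
L_95 = -L_SAT * math.log(0.05)        # about 0.03 m, i.e. 3 cm
```

In this toy picture, extending the channel past roughly three saturation lengths only adds volume without adding cooling power, which is the qualitative reason a compact channel (here of order 3 cm) is favoured when maximizing R.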
Procedia PDF Downloads 285
755 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test
Authors: Arvind Salem
Abstract:
Voters across the country are transferring the power of redistricting from state legislatures to commissions in order to secure “fairer” districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, the intentional drawing of distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is disputed: some argue that they constitute a panacea for gerrymandering, while others contend that they have relatively little effect. A result showing that commissions are effective would allay these doubts, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering nationwide. A result against commissions, by contrast, may reaffirm doubts about commissions and pressure lawmakers to improve them or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and a responsibility to know, whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but was excluded for reasons discussed below). This study compares the degree of gerrymandering in the 2022 election (“after”) with that in the election in which each state’s voters decided to adopt commissions (“before”). The “before” election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation.
At the time Hawaii adopted commissions, it comprised only a single at-large district, so its “before” metrics could not be calculated, and it was excluded. This study uses three metrics to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test at the 0.05 significance level with an expected difference of 0, with the null hypothesis that the metric values after the election are greater than or equal to those before, and the alternative hypothesis that the values before are greater than those after. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p = 0.42 for the efficiency gap, p = 0.94 for the seats-votes difference, and p = 0.47 for the mean-median difference) were far above the level needed to conclude that commissions are effective. These results should temper optimism about commissions and spur serious discussion about their effectiveness and about ways to change them so that they can accomplish their goal of producing fairer districts.
Keywords: commissions, elections, gerrymandering, redistricting
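Two of the three metrics can be sketched in a few lines; the district vote counts below are hypothetical, not the study's data. The per-state before/after values of such metrics are the paired observations that a Wilcoxon signed-rank test (for instance `scipy.stats.wilcoxon` with a one-sided alternative) would then compare.

```python
def efficiency_gap(districts):
    """Efficiency gap from (party_A_votes, party_B_votes) per district.
    Wasted votes: all votes in a lost district, plus votes beyond the
    simple-majority threshold in a won district."""
    waste_a = waste_b = total = 0.0
    for a, b in districts:
        total += a + b
        need = (a + b) / 2.0
        if a > b:
            waste_a += a - need
            waste_b += b
        else:
            waste_b += b - need
            waste_a += a
    return (waste_a - waste_b) / total

def mean_median_difference(vote_shares):
    """Difference between a party's mean and median district vote share."""
    s = sorted(vote_shares)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
    return sum(s) / n - median

# Hypothetical 5-district state, vote counts per district for parties A and B
districts = [(55, 45), (60, 40), (52, 48), (30, 70), (35, 65)]
eg = efficiency_gap(districts)
shares_a = [a / (a + b) for a, b in districts]
mm = mean_median_difference(shares_a)
```

Here both metrics come out negative, indicating a map tilted against party A: A wins its three districts narrowly while B wins its two by wide margins, the classic "pack and crack" signature.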
754 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone continuous evolution and have become an important part of many industries, ranging from manufacturing, process, and chemical industries to machinery, health care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition. The gaps of the electrodes ranged from 3 to 30 μm. Metal nanoparticles were generated from a magnetron plasma gas aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained by monitoring the conductance in real time. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on the mechanical deformation of the fabricated devices, which is translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range. A gauge factor as large as 100 or more was demonstrated, at least one order of magnitude higher than that of conventional metal foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%. 
These devices have the potential to be a new generation of strain sensors with performance superior to that of currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle array sensing element. Moreover, the mechanical sensor elements can be easily scaled down, making them feasible for MEMS and NEMS applications.Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
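The gauge factor quoted above is simply the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A minimal sketch (the resistance readings below are hypothetical, chosen only to reproduce a GF of about 100):

```python
def gauge_factor(r_unstrained, r_strained, strain):
    """GF = (delta_R / R0) / strain: relative resistance change per unit strain."""
    delta_r = r_strained - r_unstrained
    return (delta_r / r_unstrained) / strain

# Hypothetical reading: a 100 ohm element rising to 130 ohm at 0.3% strain,
# i.e. the order of sensitivity reported for the nanoparticle films.
print(round(gauge_factor(100.0, 130.0, 0.003)))  # 100
```

For comparison, conventional metal foil gauges typically sit around GF ≈ 2, which is the "one order of magnitude" gap the abstract refers to.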
753 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂
Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang
Abstract:
CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial, and every part of the CCS chain must contribute. Increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, reduces the costs associated with the process. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves present a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle and, in turn, spreading of the condensed liquid on the surface. CO₂ has very low surface tension compared to water, although at relevant temperatures and pressures for CO₂ condensation it is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is a completely new field of research, so knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs and a high-speed camera. 
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces can be analysed, e.g. copper, aluminium and stainless steel. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour and of the pressure in the vapour. The temperature will be measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces
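One common way to reason about why controlled nanoscale roughness can push apparent contact angles past the 150° superlyophobic limit is the Cassie-Baxter relation, cos θ* = f(cos θ + 1) − 1, where f is the fraction of solid actually in contact with the liquid. A minimal sketch with illustrative numbers (the intrinsic angle and solid fraction below are assumptions, not measurements from this work):

```python
import math

def cassie_baxter_angle(theta_intrinsic_deg, solid_fraction):
    """Apparent contact angle on a composite solid/trapped-gas interface.

    cos(theta*) = f * (cos(theta) + 1) - 1, where f is the wetted
    solid fraction (0 < f <= 1).
    """
    cos_star = solid_fraction * (math.cos(math.radians(theta_intrinsic_deg)) + 1) - 1
    return math.degrees(math.acos(cos_star))

# Illustrative: even a modest intrinsic angle of 100 degrees, on a rough
# surface that is only 10% solid at the interface, exceeds the 150 degree
# superlyophobic threshold.
print(cassie_baxter_angle(100.0, 0.10) > 150.0)  # True
```

This is why the abstract stresses periodic nanoscale roughness: lowering the solid fraction f is the main lever for raising the apparent angle of a low-surface-tension liquid.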
752 The Temporal Implications of Spatial Prospects
Authors: Zhuo Job Chen, Kevin Nute
Abstract:
The work reported examines potential linkages between spatial and temporal prospects, and more specifically, between variations in the spatial depth and foreground obstruction of window views and observers’ sense of connection to the future. It was found that external views from indoor spaces were strongly associated with a sense of the future, that partially obstructing such a view with foreground objects significantly reduced its association with the future, and that replacing it with a pictorial representation of the same scene (with no actual depth) removed most of its temporal association. A lesser change in the spatial depth of the view, however, had no apparent effect on association with the future. While the role of spatial depth has still to be confirmed, the results suggest that spatial prospects directly affect temporal ones. The word “prospect” typifies the overlapping of the spatial and temporal in most human languages. It originated in classical times as a purely spatial term, but in the 16th century took on the additional temporal implication of an imagined view ahead, of the future. The psychological notion of prospection, then, has its distant origins in a spatial analogue. While it is not yet proven that space directly structures our processing of time at a physiological level, it is generally agreed that it commonly does so conceptually. The mental representation of possible futures has been a central part of human survival as a species (Boyer, 2008; Suddendorf & Corballis, 2007). A sense of the future seems critical not only practically, but also psychologically. It has been suggested, for example, that lack of a positive image of the future may be an important contributing cause of depression (Beck, 1974; Seligman, 2016). Most people in the developed world now spend more than 90% of their lives indoors. 
So any direct link between external views and temporal prospects could have important implications for both human well-being and building design. We found that the ability to see what lies in front of us spatially was strongly associated with a sense of what lies ahead temporally. Partial obstruction of a view was found to significantly reduce that sense of connection to the future. Replacing a view with a flat pictorial representation of the same scene removed almost all of its connection with the future, but changing the spatial depth of a real view appeared to have no significant effect. While foreground obstructions reduced subjects’ sense of connection to the future, they increased their sense of refuge and security. Consistent with Prospect and Refuge theory, an ideal environment, then, would seem to be one in which we can “see without being seen” (Lorenz, 1952): specifically, one that conceals us frontally from others without restricting our own view. It is suggested that these optimal conditions might be translated architecturally as screens whose apertures are large enough for a building occupant to see through unobstructed from close by, but small enough to conceal them from the view of someone looking from a distance outside.Keywords: foreground obstructions, prospection, spatial depth, window views
751 Accumulated Gender-Diverse Co-signing Experience, Knowledge Sharing, and Audit Quality
Authors: Anxuan Xie, Chun-Chan Yu
Abstract:
Survey evidence supports the idea that auditors gain professional knowledge not only from client firms but also from the teammates they work with. Furthermore, given that knowledge is cumulative in nature, and that auditors today must work in an environment of increased diversity, whether the attributes of teammates influence knowledge sharing and accumulation, and ultimately an audit partner’s audit quality, are interesting research questions. We test whether the gender of co-signers moderates the effect of a lead partner’s cooperative experiences on financial restatements and, if so, investigate the underlying reasons. We use data from Taiwan because, under Taiwan’s law, engagement partners, who are basically two certified public accountants from the same audit firm, have been required to disclose (i.e., sign) their names in the audit reports of public companies since 1983. We can therefore trace each engagement partner’s historic direct cooperative (co-signing) records and obtain large-sample data. We find that the benefits of knowledge sharing manifest primarily via co-signing audit reports with audit partners of a different gender from the lead engagement partner, supporting the argument that in an audit setting, accumulated gender-diverse working relationships are positively associated with knowledge sharing and therefore improve lead engagement partners’ audit quality. This study contributes to the extant literature in the following ways. First, we provide evidence that in the auditing setting, the experience accumulated from cooperating with teammates of a different gender from the lead partner can improve audit quality. 
Given that most studies find evidence of negative effects of surface-level diversity on team performance, the results of this study support the prior literature showing that the association between diversity and knowledge sharing hinges on the context (e.g., organizational culture, task complexity) and on a “bridge” (a pre-existing commonality among team members that can smooth the process of diversity toward favorable results). Second, this study provides practical insights with respect to audit firms’ policies on knowledge sharing and the deployment of engagement partners. For example, for audit firms that appreciate the merits of knowledge sharing, deploying auditors of different genders within an audit team can help auditors accumulate audit-related knowledge, which will further benefit the firms’ future performance. Moreover, client firms nowadays attach importance to the diversity of their engagement partners, and lawmakers and regulators continue to promote a gender-diverse working environment as a policy goal. The findings of this study indicate that, for audit firms, gender diversity need not be just a means of catering to those groups. Third, audit committees and other stakeholders can evaluate the quality of existing (or potential) lead partners by tracking their co-signing experiences, especially whether they have gender-diverse co-signing experiences.Keywords: co-signing experiences, audit quality, knowledge sharing, gender diversity
750 An Overview on Micro Irrigation-Accelerating Growth of Indian Agriculture
Authors: Rohit Lall
Abstract:
The adoption of Micro Irrigation (MI) technologies in India has helped achieve higher cropping and irrigation intensity, with significant savings of resources such as labour and fertilizer and improved crop yields. These technologies have received considerable attention from policymakers, growers and researchers over the years for their perceived ability to contribute to agricultural productivity, economic growth and the well-being of the country’s growers. To cover the untapped theoretical potential, the government launched flagship programmes and central sector schemes with earmarked budgets, providing financial assistance to beneficiaries for adopting these water-saving technologies. India is an agrarian economy in which 75% of the population is engaged directly or indirectly, including skilled and semi-skilled workers and entrepreneurs, and the government has given these technologies focused attention and financial allocations to cover the untapped potential under the Pradhan Mantri Krishi Sinchayee Yojana (PMKSY) ‘Per Drop More Crop’ component. In 2004, a Task Force on Micro Irrigation was constituted to estimate the potential of these technologies in India; the Task Force report estimated it at 69.5 million hectares, of which only 10.49 million hectares have been achieved so far. Technology collaborations with leading overseas manufacturing companies have proved to be a stepping stone in technology advancement and product upgradation with increased efficiencies. 
Joint ventures by the leading MI companies have added huge business volumes, which have not only accelerated the momentum towards the desired area coverage but also generated opportunities for polymer manufacturers in the country. To provide products matching global standards, the Bureau of Indian Standards has constituted a sectional technical committee under the Food and Agriculture Department (FAD) 17 to formulate, devise and revise standards pertaining to MI technologies. The research community has also contributed at large by developing in-situ analyses proving MI technologies a boon for the farming community of the country, with the conservation of resources, of which water is of paramount importance. Thus, Micro Irrigation technologies have proved to be a key tool for feeding the growing demand of the food basket of an expanding population, besides maintaining soil health, and have been contributing towards the doubling of farmers’ income.Keywords: task force on MI, standards, per drop more crop, doubling farmers’ income
749 Features of Composites Application in Shipbuilding
Authors: Valerii Levshakov, Olga Fedorova
Abstract:
Specific features of ship structures made from composites, i.e. simultaneous shaping of material and structure, large sizes, complicated outlines and tapered thickness, have given the leading role to technology that integrates results from materials science, design and structural analysis. The main procedures of composite shipbuilding are contact molding, vacuum molding and winding. Currently, the most in-demand composite shipbuilding technology is the manufacture of structures from fiberglass and multilayer hybrid composites by vacuum molding. This technology enables the manufacture of products with improved strength properties (in comparison with contact molding), reduces production time and weight, and secures better environmental conditions in the production area. Mechanized winding is applied for the manufacture of parts shaped as rotary bodies, i.e. parts of ship, oil and other pipelines, deep-submergence vehicle hulls, bottles, reservoirs and other structures. This procedure involves processing of reinforcing fiberglass, carbon and polyaramide fibers. Polyaramide fibers have a tensile strength of 5000 MPa and an elastic modulus of 130 GPa; their rigidity is comparable to that of fiberglass, but their weight is 30% less. This enables the manufacture of various structures using both fiberglass and organic composites. Organic composites are widely used for the manufacture of parts with size and weight limitations, although the high price of polyaramide fiber restricts their use. A perspective area of winding technology development is the manufacture of carbon fiber shafts and couplings for ships. JSC ‘Shipbuilding & Shiprepair Technology Center’ (JSC SSTC) developed a technology of dielectric uncouplers for cryogenic lines cooled by gaseous or liquid cryogenic agents (helium, nitrogen, etc.) 
for temperature ranges of 4.2-300 K and pressures up to 30 MPa; these are used for separating components of electrophysical equipment with different electrical potentials. The dielectric uncouplers were developed, manufactured and tested in accordance with the International Thermonuclear Experimental Reactor (ITER) technical specification. Spiral uncouplers withstand an operating voltage of 30 kV, and direct-flow uncouplers 4 kV. Application of a spiral channel instead of a rectilinear one enables an increased breakdown potential and a reduction of uncoupler sizes. 95 uncouplers were successfully manufactured and tested. At present, Russian manufacturers of ship composite structures have begun adopting the manufacture of such structures using automated prepreg laminating; this technology enables the manufacture of structures with improved operational specifications.Keywords: fiberglass, infusion, polymeric composites, winding
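The stiffness comparisons above (polyaramide versus fiberglass reinforcement) are commonly first estimated with the rule of mixtures for a unidirectional ply. A minimal sketch, using illustrative fiber and matrix moduli (the epoxy modulus and fiber volume fraction below are generic textbook-style assumptions, not figures from this work):

```python
def longitudinal_modulus(e_fiber, e_matrix, fiber_volume_fraction):
    """Rule-of-mixtures estimate for a unidirectional composite ply:
    E1 = Vf * Ef + (1 - Vf) * Em (upper bound, loading along the fibers)."""
    return fiber_volume_fraction * e_fiber + (1 - fiber_volume_fraction) * e_matrix

# Illustrative values in GPa: polyaramide fiber ~130 (as quoted above),
# a generic epoxy matrix ~3, and a 60% fiber volume fraction.
print(longitudinal_modulus(130.0, 3.0, 0.6))  # 79.2 GPa
```

The fiber term dominates, which is why the 30% weight saving of polyaramide over glass translates so directly into the specific-stiffness advantage described in the abstract.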
748 SLAPP Suits: An Encroachment On Human Rights Of A Global Proportion And What Can Be Done About It
Authors: Laura Lee Prather
Abstract:
A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights and the rule of law. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. Their impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term “SLAPP” when they brought to light a troubling trend of people being sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been, and will be, the targets of multi-million-dollar lawsuits intended to silence them and dissuade others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers and businessmen against environmentalists, consumers, women’s rights advocates and more. SLAPPs are now seen in every region of the world as a means of intimidating people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits, and while commonplace in the United States, they are only recently being considered in the EU and the UK. 
This researcher studied more than thirty years of anti-SLAPP legislative policy in the US; the call for evidence and resultant EU Commission Anti-SLAPP Directive and Member State recommendations; the call for evidence by the UK Ministry of Justice, the response, and the model anti-SLAPP law presented to the UK Parliament; and conducted dozens of interviews with NGOs throughout the EU, UK and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding lawyers accountable for ravaging the rule of law.Keywords: Anti-SLAPP Laws and Policy, Comparative media law and policy, EU Anti-SLAPP Directive and Member Recommendations, International Human Rights of Freedom of Expression
747 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to abundant higher wind speeds far from shore, the construction of offshore wind turbines began in recent decades. However, the installation of bottom-founded (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been developed, building on experience from the oil and gas industry. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transporting such an anchor would normally require a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and to be filled with water when lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may carry off local seabed sediment, resulting in scour (erosion). These are threats to the structure's stability. In recent decades, research and knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but there is very limited research on scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate anchor towing under wave and calm-water conditions. Anchor lowering involves the investigation of anchor movements at certain water depths under waves and currents. 
The motions of anchor drift, heave, and pitch are of special focus. The further study involves anchor scour, where the anchor is installed in the seabed: the flow of the underwater current around the anchor induces vortices, mainly at the front and corners, that drive soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question, since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
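A first-order check on whether the flow around the anchor can mobilize seabed sediment (the precondition for scour) is the Shields parameter, θ = τ/((ρs − ρ)gd). A minimal sketch with illustrative values; the bed shear stress, grain size, and the 0.047 critical threshold are generic textbook-style assumptions, not results from this study:

```python
def shields_parameter(bed_shear_stress, grain_diameter,
                      sediment_density=2650.0, water_density=1025.0, g=9.81):
    """Dimensionless bed shear stress:
    theta = tau / ((rho_s - rho) * g * d).
    Defaults: quartz sand in seawater (SI units: Pa, m, kg/m^3)."""
    return bed_shear_stress / ((sediment_density - water_density) * g * grain_diameter)

# Illustrative: 0.5 Pa of current-induced bed shear on 0.2 mm sand.
theta = shields_parameter(0.5, 0.2e-3)
CRITICAL_SHIELDS = 0.047  # commonly used approximate threshold, not universal
print(theta > CRITICAL_SHIELDS)  # True -> sediment likely mobilized, scour risk
```

Local flow acceleration and vortices at the anchor's front and corners raise τ above its ambient value, which is exactly where the abstract expects erosion to begin.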
746 Port Miami in the Caribbean and Mesoamerica: Data, Spatial Networks and Trends
Authors: Richard Grant, Landolf Rhode-Barbarigos, Shouraseni Sen Roy, Lucas Brittan, Change Li, Aiden Rowe
Abstract:
Ports are critical for the US economy, connecting farmers, manufacturers, retailers, consumers and an array of transport and storage operators. Port facilities vary widely in terms of their productivity, footprint, specializations, and governance. In this context, Port Miami is one of the busiest ports, providing both cargo and cruise services and connecting the wider Caribbean and Mesoamerican region to global networks. It is considered the “Cruise Capital of the World and Global Gateway of the Americas” and the “leading container port in Florida.” It has also been ranked as one of the top container ports in the world and the second most efficient port in North America. Port Miami has made significant investments of about US$1 billion in strategic and capital infrastructure, including increasing the channel depth and other onshore infrastructural enhancements. This study therefore involves a detailed analysis of Port Miami’s network, using multiple years of publicly available data on marine vessel traffic, cargo, and connectivity and performance indices from 2015-2021. Through the analysis of cargo and cruise vessels to and from Port Miami and its relative performance at the global scale from 2015 to 2021, this study examines the port’s long-term resilience and future growth potential. The main results indicate that the top category for both inbound and outbound cargo is manufactured products and textiles. In addition, inbound cargo includes substantial fresh fruits, vegetables, and produce, while outbound cargo includes processed food. Furthermore, the top ten port connections for Port Miami are all located in the Caribbean region, the Gulf of Mexico, and the Southeast USA. About half of the inbound cargo comes from Savannah, Saint Thomas, and Puerto Plata, while outbound cargo goes to Puerto Corte, Freeport, and Kingston. 
Additionally, for cruise vessels, a significantly large number of vessels originate from Nassau, followed by Freeport. The number of passenger vessels pre-COVID was almost 1,000 per year, which dropped substantially in 2020 and 2021 to around 300 vessels. Finally, the resilience and competitiveness of Port Miami were also assessed in terms of its network connectivity by examining inbound and outbound maritime vessel traffic. It is noteworthy that the most frequent port connections for Port Miami were Freeport and Savannah, followed by Kingston, Nassau, and New Orleans. However, several of these ports, including Puerto Corte, Veracruz, Puerto Plata, and Santo Thomas, have low resilience and are highly vulnerable, which needs to be taken into consideration for the long-term resilience of Port Miami in the future.Keywords: port, Miami, network, cargo, cruise
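The connection-frequency ranking described above can be reproduced from an origin-destination call list with a simple counter. A minimal sketch on hypothetical records (the toy list below is not the study's data, though it is seeded to echo the reported Freeport/Savannah ordering):

```python
from collections import Counter

# Hypothetical sample of port-call records (origin, destination); the real
# study draws on 2015-2021 marine vessel traffic data, not this toy list.
calls = [
    ("Freeport", "Miami"), ("Savannah", "Miami"), ("Miami", "Kingston"),
    ("Freeport", "Miami"), ("Miami", "Nassau"), ("Savannah", "Miami"),
    ("Miami", "New Orleans"), ("Freeport", "Miami"),
]

# Connection frequency irrespective of direction: a simple proxy for
# Port Miami's weighted degree in the maritime network.
connections = Counter()
for origin, destination in calls:
    partner = destination if origin == "Miami" else origin
    connections[partner] += 1

print(connections.most_common(2))  # [('Freeport', 3), ('Savannah', 2)]
```

On the full data set, the same counts per partner port become the edge weights of the network whose connectivity and vulnerability the study assesses.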
745 Acute Neurophysiological Responses to Resistance Training; Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations
Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo
Abstract:
Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding neural responses to an acute resistance training session is pivotal for the prescription of frequency, intensity and volume in applied strength and conditioning practice. The primary aim of this study was therefore to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N = 14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure the excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 min, and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed at post, 10, 20, 30 min, 1 and 2 hrs for both training groups compared to the control group for force (p < .05), maximal compound wave (p < .005), and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups approached significance, with a large effect size (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the 2 hr period post training, returning to homeostasis by 6 hrs. 
The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than previous super compensation models suggest. Strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to the adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation
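The effect size reported above is eta squared, η² = SS_effect/(SS_effect + SS_error), with η² ≥ .14 conventionally read as a large effect. A minimal sketch (the sums of squares below are hypothetical, chosen only to reproduce a value of .202):

```python
def eta_squared(ss_effect, ss_error):
    """Proportion of variance explained by the effect in a one-way design:
    eta^2 = SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares reproducing the reported effect size.
effect = eta_squared(2.02, 7.98)
print(round(effect, 3))  # 0.202
print(effect >= 0.14)    # True -> "large" by conventional benchmarks
```

This is why the group difference is still noteworthy despite the p-value: the variance explained is substantial even where the sample of 14 limits statistical power.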