Search results for: computational form finding

666 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, regulatory agencies such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research, and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper also seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 67
665 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning

Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova

Abstract:

While most nanofibers are produced using electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the increasing emergence of novel strategies for generating nanofibers in a larger-scale and higher-throughput manner. Centrifugal spinning is a simple, cheap, and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers using centrifugal spinning is achieved through the controlled manipulation of the centrifugal force, viscoelasticity, and mass transfer characteristics of the spinning solutions. Engineering efforts of researchers at the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of a pilot plant demonstrator, NANOCENT. The main advantages of the demonstrator are lower investment cost - thanks to simpler construction compared to widely used electrospinning equipment - higher production speed, new application possibilities, and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR, and biodegradable polyesters, such as PHB, PLA, PCL, or PBS. The products are in the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from the biodegradable polyesters in different mixtures. The nanofiber membranes have been tested for different applications. Filtration efficiencies for water solutions and aerosols in air were evaluated. Different active inserts were added to the solutions before the spinning process, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds, or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2, or photocatalytic ZnO2 and TiO2, to obtain inorganic nanofibers. Electrospinning is a more suitable technology than centrifugal nozzleless spinning for producing filtration membranes, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn healing dressings or tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.

Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts

Procedia PDF Downloads 156
664 Basal Cell Carcinoma: Epidemiological Analysis of a 5-Year Period in a Brazilian City with a High Level of Solar Radiation

Authors: Maria E. V. Amarante, Carolina L. Cerdeira, Julia V. Cortes, Fiorita G. L. Mundim

Abstract:

Basal cell carcinoma (BCC) is the most prevalent type of skin cancer in humans. It arises from the basal cells of the epidermis and cutaneous appendages. The role of sunlight exposure as a risk factor for BCC is very well defined due to its power to influence genetic mutations, in addition to having a suppressor effect on the skin immune system. Despite showing low metastasis and mortality rates, the tumor is locally infiltrative, aggressive, and destructive. Considering the high prevalence rate of this carcinoma and the importance of early detection, a retrospective study was carried out in order to correlate the clinical data available on BCC, characterize it epidemiologically, and thus enable effective prevention measures for the population. Data on the period from January 2015 to December 2019 were collected from the medical records of patients registered at one pathology service located in the southeast region of Brazil, known as SVO, which delivers skin biopsy results. The study was aimed at correlating the variables sex, age, and the subtypes found. Data analysis was performed using the chi-square test at a nominal significance level of 5% in order to verify the independence between the variables of interest. Fisher's exact test was applied in cases where the absolute frequency in the cells of the contingency table was less than or equal to five. The statistical analysis was performed using the R® software. Ninety-three basal cell carcinomas were analyzed, and their frequency in the 31- to 45-year-old age group was 5.8 times higher in men than in women, whereas, from 46 to 59 years, the frequency was 2.4 times higher in women than in men. Between the ages of 46 and 59 years, it should be noted that the sclerodermiform subtype appears more often than the solid one, with a difference of 7.26 percentage points. Conversely, the solid form appears more frequently in individuals aged 60 years or more, with a difference of 8.57 percentage points. Among women, the frequency of the solid subtype was 9.93 percentage points higher than the sclerodermiform frequency. In males, the same percentage difference is observed, but sclerodermiform is the most prevalent subtype. It is concluded in this study that, in general, there is a predominance of basal cell carcinoma in females and in individuals aged 60 years and over, which demonstrates the typical tendency of this tumor. However, in the rarer cases found in younger individuals, males predominated. The most prevalent subtype was the solid one. It is worth mentioning that the sclerodermiform subtype, which is more aggressive, was seen more frequently in males and in the 46- to 59-year-old range.
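
To make the statistical workflow described above concrete, the minimal sketch below applies a chi-square test of independence to a sex-by-subtype contingency table and falls back to Fisher's exact test when an expected cell count is five or less. The study itself used the R® software; this Python/scipy version and the counts in the table are illustrative assumptions only, not the study data.

```python
# Hypothetical sketch of the chi-square / Fisher's exact workflow described above.
# The contingency table below is invented for illustration; it is NOT the study data.
import numpy as np
from scipy import stats

# Rows: female, male; columns: solid subtype, sclerodermiform subtype (hypothetical counts)
table = np.array([[35, 28],
                  [14, 16]])

chi2, p, dof, expected = stats.chi2_contingency(table)

if (expected <= 5).any():
    # Small expected counts: use Fisher's exact test (2x2 tables only)
    odds_ratio, p = stats.fisher_exact(table)
    print(f"Fisher's exact test: p = {p:.3f}")
else:
    print(f"Chi-square test: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Decision at the 5% nominal significance level
print("Variables appear dependent (p < 0.05)" if p < 0.05 else "No evidence against independence")
```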

Keywords: basal cell carcinoma, epidemiology, sclerodermiform basal cell carcinoma, skin cancer, solar radiation, solid basal cell carcinoma

Procedia PDF Downloads 130
663 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation addresses how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it deals with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most of the popular music (i.e. music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the “older” music is still being digitized. Once the music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-type analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must: have 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill it in the title. For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation that is created and fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence when it comes to which “artist” a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analysis programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste, filtered through the narrow bottleneck of this music selection by the record labels, has created some very peculiar effects not only on the taste of popular music consumers but also on the creative chops of the music artists. What this semiological shift means is the main focus of this research and presentation.
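
Purely to make the rule set listed above tangible, the following hypothetical Python sketch checks a track's metadata against the pop-song criteria quoted in the abstract. The feature names and the example values are assumptions made for illustration; they are not part of Polyphonic HMI's actual software.

```python
# Hypothetical check of the pop-song criteria listed above.
# Feature names mirror the abstract; the example values are invented.

def meets_pop_criteria(song: dict) -> dict:
    """Return which of the abstract's pop-song rules a track satisfies."""
    return {
        "tempo >= 100 bpm": song["bpm"] >= 100,
        "8-second intro": song["intro_seconds"] <= 8,
        "'you' within first 20 s": song["first_you_seconds"] <= 20,
        "bridge between 2:00 and 2:30": 120 <= song["bridge_seconds"] <= 150,
        "about 7 title repetitions": song["title_repetitions"] >= 7,
    }

example = {  # invented example track
    "bpm": 104,
    "intro_seconds": 8,
    "first_you_seconds": 12,
    "bridge_seconds": 135,
    "title_repetitions": 7,
}

for rule, ok in meets_pop_criteria(example).items():
    print(f"{rule}: {'yes' if ok else 'no'}")
```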

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 382
662 Coping with Geological Hazards during Construction of Hydroelectric Projects in Himalaya

Authors: B. D. Patni, Ashwani Jain, Arindom Chakraborty

Abstract:

The world’s highest mountain range has been forming since the collision of the Indian Plate with the Asian Plate 40-50 million years ago. The Indian subcontinent has been pushing deeper and deeper into the rest of Asia, resulting in the uplift of the Himalaya and the Tibetan Plateau. This complex domain has become a major challenge for the construction of hydroelectric projects. The Himalayas are geologically complex and seismically active. The northward shift of the Indian Plate increases the stresses in this fragile domain, which leads to deformation in the form of numerous folds, faults, and uplift. It is difficult to carry out extensive geological investigation to ascertain the geological problems likely to be encountered during construction. Inaccessibility of the terrain, high rock cover, unpredictable groundwater conditions, etc., are the main constraints. Hydroelectric projects located in the Himalayas have faced many geological and geo-hydrological problems during construction of surface and subsurface works. Based on this experience, efforts have been made to identify the geological problems expected during and after construction of the projects. These have been classified into surface and subsurface problems, which include the existence of inhomogeneous deep overburden in the river bed or buried valley, abrupt changes in the bedrock profile, occurrences of fault zones/shear zones/fractured rock in dam foundations, and slope instability in the abutments. The tunneling difficulties are many, such as squeezing ground conditions, popping, rock bursting, high temperature gradients, heavy ingress of water, existence of shear seams/shear zones, and emission of obnoxious gases. However, these problems were mitigated by adopting suitable remedial measures as per site requirements. The support systems include shotcrete, wire mesh, rock bolts, steel ribs, fore-poling, pre-grouting, pipe-roofing, MAI anchors, toe walls, retaining walls, reinforced concrete dowels, drainage drifts, anchorage cum drainage shafts, soil nails, concrete cladding, and shear keys. Controlled drilling and blasting, heading and benching, a proper drainage network, and a ventilation system are other remedial measures adopted to overcome such adverse situations. The paper highlights the geological uncertainties and their remedial measures in the Himalaya, based on the analysis and evaluation of 20 hydroelectric projects during construction.

Keywords: geological problems, shear seams, slope, drilling & blasting, shear zones

Procedia PDF Downloads 393
661 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. The subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 115
660 Utilizing Fly Ash Cenosphere and Aerogel for Lightweight Thermal Insulating Cement-Based Composites

Authors: Asad Hanif, Pavithra Parthasarathy, Zongjin Li

Abstract:

Thermal insulating composites help to reduce the total power consumption in a building by creating a barrier between the external and internal environments. Such composites can be used in roofing tiles or wall panels for exterior surfaces. This study aims to develop lightweight cement-based composites for thermal insulating applications. Waste materials like silica fume (an industrial by-product) and fly ash cenosphere (FAC) (hollow micro-spherical shells obtained as a waste residue from coal-fired power plants) were used as partial replacement of cement and as lightweight filler, respectively. Moreover, aerogel, a nano-porous material made of silica, was also used in different dosages for improved thermal insulating behavior, while polyvinyl alcohol (PVA) fibers were added for enhanced toughness. The raw materials, including binders and fillers, were characterized by X-Ray Diffraction (XRD), X-Ray Fluorescence spectroscopy (XRF), and Brunauer–Emmett–Teller (BET) analysis techniques, in which various physical and chemical properties of the raw materials were evaluated, such as specific surface area, chemical composition (oxide form), and pore size distribution (if any). Ultra-lightweight cementitious composites were developed by varying the amounts of FAC and aerogel, with 28-day unit weights ranging from 1551.28 kg/m3 to 1027.85 kg/m3. Excellent mechanical and thermal insulating properties of the resulting composites were obtained, ranging from 53.62 MPa to 8.66 MPa in compressive strength, 9.77 MPa to 3.98 MPa in flexural strength, and 0.3025 W/m-K to 0.2009 W/m-K in thermal conductivity coefficient (QTM-500). The composites were also tested for the peak temperature difference between outer and inner surfaces when subjected to heating (in a specially designed experimental set-up) by a 275 W infrared lamp. A temperature difference of up to 16.78 °C was achieved, which indicates the outstanding ability of the developed composites to act as a thermal barrier for building envelopes. Microstructural studies were carried out by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) for characterizing the inner structure of the composite specimens. Also, the hydration products were quantified using the surface area mapping and line scan technique in EDS. The microstructural analyses indicated excellent bonding of FAC and aerogel in the cementitious system. Also, the selective reactivity of FAC was ascertained from the SEM imagery, where partially consumed FAC shells were observed. All in all, the lightweight fillers, FAC and aerogel, helped to produce the lightweight composites due to their physical characteristics, while exceptional mechanical properties, owing to the partial reactivity of FAC, were achieved.

Keywords: aerogel, cement-based, composite, fly ash cenosphere, lightweight, sustainable development, thermal conductivity

Procedia PDF Downloads 213
659 Prevalence of Work-Related Musculoskeletal Disorder among Dental Personnel in Perak

Authors: Nursyafiq Ali Shibramulisi, Nor Farah Fauzi, Nur Azniza Zawin Anuar, Nurul Atikah Azmi, Janice Hew Pei Fang

Abstract:

Background: Work-related musculoskeletal disorders (WRMSD) among dental personnel have been underestimated and under-reported worldwide and specifically in Malaysia. The problem arises and progresses slowly over time, as it results from injury accumulated throughout the period of work. Several risk factors, such as repetitive movement, static posture, vibration, and the adoption of poor working postures, have been identified as contributing to WRMSD in dental practice. Dental personnel are at higher risk of developing this problem, as it is inherent in the nature of their work. It can cause pain and dysfunction syndrome among them and result in absence from work and substandard services to their patients. Methodology: A cross-sectional study involving 19 government dental clinics in Perak was carried out over a period of 3 months. Those who met the criteria were selected to participate in this study. The Malay version of the Self-Reported Nordic Musculoskeletal Discomfort Form was used to identify the prevalence of WRMSD, while the intensity of pain in the respective regions was evaluated using a 10-point scale according to ‘Pain as The 5ᵗʰ Vital Sign’ by MOH Malaysia; the data were later analyzed using SPSS version 25. Descriptive statistics, including mean with SD and median with IQR, were used for numerical data. Categorical data were described by percentages. Pearson’s Chi-Square Test and Spearman’s Correlation were used to find the association between the prevalence of WRMSD and other socio-demographic data. Results: 159 dentists, 73 dental therapists, 26 dental lab technicians, 81 dental surgery assistants, and 23 dental attendants participated in this study. The mean age of the participants was 34.9±7.4 years, and their mean years of service was 9.97±7.5. Most of them were female (78.5%), Malay (71.3%), married (69.6%), and right-handed (90.1%). The highest prevalence of WRMSD was in the neck (58.0%), followed by the shoulder (48.1%), upper back (42.0%), lower back (40.6%), hand/wrist (31.5%), feet (21.3%), knee (12.2%), thigh (7.7%), and lastly elbow (6.9%). Most of those who reported neck pain rated their pain at 2 out of 10 (19.5%), while most of those who suffered upper back discomfort rated their pain at 6 out of 10 (17.8%). It was found that there was a significant relationship between age and pain in the neck (p=0.007), elbow (p=0.027), lower back (p=0.032), thigh (p=0.039), knee (p=0.001), and feet (p=0.000) regions. Job position was also found to have a significant relationship with pain experienced in the lower back (p=0.018), thigh (p=0.011), knee, and feet (p=0.000). Conclusion: The prevalence of WRMSD among dental personnel in Perak was found to be high. Age and job position were found to have a significant relationship with pain experienced in several regions. Intervention programs should be planned and conducted to prevent and reduce the occurrence of WRMSD, and harmful or unergonomic practices should be avoided at all costs.
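
As a rough illustration of the analyses named above (Spearman's correlation and Pearson's chi-square), the sketch below runs both on invented data. The study itself used SPSS version 25; the Python/scipy code, the simulated ages, pain scores, and job positions are all assumptions for demonstration only.

```python
# Hypothetical sketch of the Spearman correlation and chi-square analyses described above.
# The study used SPSS version 25; this scipy version runs on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age = rng.integers(23, 60, size=100)                  # invented ages of dental personnel
neck_pain = np.clip(rng.normal(age / 12, 2), 0, 10)   # invented 0-10 neck pain scores

rho, p_value = stats.spearmanr(age, neck_pain)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")

# Association between job position and presence of neck pain (chi-square on counts)
positions = rng.choice(["dentist", "therapist", "assistant"], size=100)
has_pain = neck_pain > 3
table = np.array([[np.sum((positions == pos) & has_pain),
                   np.sum((positions == pos) & ~has_pain)] for pos in np.unique(positions)])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p:.4f}")
```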

Keywords: WRMSD, ergonomic, dentistry, dental

Procedia PDF Downloads 76
658 Neighborhood Sustainability Assessment Tools: A Conceptual Framework for Their Use in Building Adaptive Capacity to Climate Change

Authors: Sally Naji, Julie Gwilliam

Abstract:

Climate change remains a challenging matter for humans and the built environment in the 21st century, where the need to consider adaptation to climate change in the development process is paramount. However, there remains a lack of information regarding how we should prepare responses to this issue, such as through developing organized and sophisticated tools enabling the adaptation process. This study aims to build a systematic framework approach to investigate the potential that Neighborhood Sustainability Assessment (NSA) tools might offer in enabling both the analysis and the promotion of adaptive capacity to climate change. The framework presented in this paper discusses this issue in three main phases. The first part attempts to link sustainability and climate change in the context of adaptive capacity. It is argued that in deciding to promote sustainability in the context of climate change, both the resilience and vulnerability processes become central. However, there is still a gap in the current literature regarding how the sustainable development process can respond to climate change, as well as how the resilience of practical strategies might be evaluated. It is suggested that the integration of sustainability assessment processes with both resilience thinking and vulnerability might provide important components for addressing adaptive capacity to climate change. A critical review of existing literature is presented, illustrating the current lack of work integrating these three concepts in the context of addressing adaptive capacity to climate change. The second part aims to identify the most appropriate scale at which to address the built environment for climate change adaptation. It is suggested that the neighborhood scale can be considered more suitable than either the building or urban scales. It then presents the example of NSAs and discusses the need to explore their potential role in promoting adaptive capacity to climate change. The third part of the framework presents a comparison among three example NSAs: BREEAM Communities, LEED-ND, and CASBEE-UD. These three tools have been selected as the most developed and comprehensive assessment tools currently available for the neighborhood scale. This study concludes that NSAs are likely to provide the basis for an organized framework to address the practical process of analyzing and promoting adaptive capacity to climate change. It is further argued that vulnerability (exposure and sensitivity) and resilience (interdependence and recovery) form essential aspects to be addressed in future assessment of NSAs' capability to adapt to both short- and long-term climate change impacts. Finally, it is acknowledged that further work is now required to understand impact assessment in terms of the range of physical sectors (water, energy, transportation, building, land use, and ecosystems) and actor and stakeholder engagement, as well as a detailed evaluation of the NSA indicators, together with a barriers diagnosis process.

Keywords: adaptive capacity, climate change, NSA tools, resilience, sustainability

Procedia PDF Downloads 369
657 Lineament Analysis as a Method of Mineral Deposit Exploration

Authors: Dmitry Kukushkin

Abstract:

Lineaments form complex grids on Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of maps of lineament density in an attempt to locate various geological structures. But lineament grids are made up of global, regional, and local components, and this superimposition of lineament grids of various scales renders the method less effective. Besides, erosion processes and the erosional resistance of rocks lying on the surface play a significant role in the formation of lineament grids. Therefore, the specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation to local geological structures. Our method allows the location and boundaries of local geological structures that are likely to contain mineral deposits to be determined with confidence. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upwards of 200%, and a stable correlation to local geological structures containing mineral deposits. Our method considers a lineament grid as a general lineament field – the surface manifestation of the stress and strain fields of Earth associated with geological structures of global, regional, and local scales. Each of these structures has its own field of brittle dislocations that appears on the surface as its lineament field. Our method allows the local components to be singled out by suppressing the global and regional components of the general lineament field. The remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - a direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by the seismic survey; 3. Active gold mine of Kadara (Chita Region) - confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) - determined the boundaries of the granite massif that controls mineralization; 5. An object promising for hydrocarbon exploration in the north of Algeria - correlated with the results of geological, geochemical, and geophysical surveys. For both Kadara and Davenda, the method demonstrated that the intense anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate the presence of gold at commercial levels. Our method of suppressing the global and regional components results in an isolated local lineament field. In the early stages of geological exploration for oil and gas, this allows the boundaries of various geological structures to be determined with very high reliability. Therefore, our method allows optimization of the placement of seismic profiles and exploratory drilling equipment, and this leads to a reduction of the costs of prospecting and exploration of deposits, as well as acceleration of their commissioning.
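
Conceptually, the suppression of global and regional components can be thought of as high-pass filtering of the lineament density map. The short sketch below, which subtracts a heavily smoothed trend from a synthetic density grid, is only a minimal illustration of that general idea; it is not the authors' proprietary method, and the grid and smoothing scales are invented.

```python
# Minimal illustration of isolating a local lineament-density component by
# subtracting a smoothed (regional/global) trend. This is NOT the authors'
# proprietary method; the density grid and scales here are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
density = gaussian_filter(rng.random((200, 200)), sigma=2)   # synthetic lineament density map

regional = gaussian_filter(density, sigma=25)   # broad trend: regional + global components
local_residual = density - regional             # residual (local) field

# Contrast of residual anomalies relative to the mean residual magnitude, in percent
contrast = 100 * (local_residual - local_residual.mean()) / (np.abs(local_residual).mean() + 1e-12)
print(f"Max residual contrast: {contrast.max():.0f}%")
```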

Keywords: lineaments, mineral exploration, oil and gas, remote sensing

Procedia PDF Downloads 290
656 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The ways in which e-commerce, human behavior, and law interact and affect one another are changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms to share information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts in an efficient and quick manner. Consumers can become informed by the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences that other consumers have had. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer Law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts on the one hand, and how firms actually treat consumers on the other. Interestingly, the Gap is frequently manifested by deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms’ leniency to the growing list of firms’ suspicious behaviors. At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach. Apparently, such behavior should not be second-guessed. However, at times online tools, firms’ behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers' understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 187
655 Assessment of Dietary Patterns of Saudi Patients with Type 2 Diabetes Mellitus in Ramadan and Non-Ramadan Periods

Authors: Abdullah S. Alghamdi, Khaled Alghamdi, Richard O. Jenkins, Parvez I. Haris

Abstract:

Background: An unhealthy diet is one of the modifiable risk factors for developing type 2 diabetes mellitus (T2DM). Improvement in diet can be beneficial for countering diabetes. For example, HbA1c, an important biomarker for diabetes, can be reduced by 1.1% through alteration in diet alone. Ramadan fasting has been reported to provide positive health benefits. However, optimal benefits are often not achieved, due to poor dietary habits and lifestyle. There is a need to better understand the dietary habits of people fasting during Ramadan, so that necessary improvements can be made to develop this form of fasting as a non-pharmacological strategy for the management and prevention of T2DM. Aim: This study aimed to assess the dietary patterns of Saudi adult patients with T2DM over three different periods (before, during, and after Ramadan) and relate this to HbA1c levels. Methods: This study recruited 82 Saudi adults with T2DM, who chose to fast during Ramadan, from the Endocrine and Diabetic Centre of Al Iman General Hospital, Riyadh, Saudi Arabia. Ethical approvals for the study were obtained from De Montfort University and the Saudi Ministry of Health. Dietary patterns were assessed by a self-administered questionnaire in each period. This assessment included the diet type and frequency. Blood samples were collected in each period for determination of HbA1c. Results: The number of meals per day significantly decreased during Ramadan (P < 0.001). The consumption of fruit and vegetables significantly increased during Ramadan (P = 0.017). However, the consumption of sugary drinks significantly increased during and after Ramadan (P = 0.005). Approximately 60% of the patients indicated that they ate sugary foods at least once per week. The consumption of bread and rice was reported to be at least two times per week. The consumption of rice was significantly reduced during Ramadan (P = 0.002). The mean HbA1c varied significantly between periods (P = 0.001), with the lowest level during Ramadan compared to before and after Ramadan. The increase in the consumption of fruits and vegetables had a medium effect size on the reduction in HbA1c during Ramadan: 7.7% of the variance in the mean difference in HbA1c levels between the groups (those who changed their fruit and vegetable consumption) could be accounted for by the increase in the consumption of fruits and vegetables. Likewise, 9.3% of the variance in the mean HbA1c difference between the groups was accounted for by a decrease in the consumption of rice. Conclusion: The increase in the frequency of fruit and vegetable intake, and especially the reduction in the frequency of rice consumption, during Ramadan produces beneficial effects in reducing the HbA1c level. Therefore, further improving the dietary habits of patients with T2DM, such as reducing their sugary drink intake, may help them to obtain greater benefits from Ramadan fasting in the management of their diabetes. It is recommended that dietary guidance is provided to the public to maximise health benefits through Ramadan fasting.
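
As a rough illustration of how a "percentage of variance accounted for" figure like the 7.7% above can be obtained, the sketch below computes eta-squared for the mean HbA1c change between two groups. The numbers are invented and the exact statistical procedure used by the authors is not stated in the abstract, so this is only an assumed, simplified analogue.

```python
# Hypothetical sketch of computing the share of variance in HbA1c change explained
# by group membership (eta-squared). Values are invented; the abstract does not
# state the exact procedure the authors used.
import numpy as np

rng = np.random.default_rng(2)
increased_fv = rng.normal(-0.4, 0.5, size=40)   # HbA1c change, group that ate more fruit/veg
unchanged_fv = rng.normal(-0.1, 0.5, size=42)   # HbA1c change, group with unchanged intake

all_values = np.concatenate([increased_fv, unchanged_fv])
grand_mean = all_values.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (increased_fv, unchanged_fv))
ss_total = ((all_values - grand_mean) ** 2).sum()

eta_squared = ss_between / ss_total
print(f"Variance in HbA1c change explained by group: {100 * eta_squared:.1f}%")
```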

Keywords: diabetes, diet, fasting, HbA1c, Ramadan

Procedia PDF Downloads 150
654 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures which store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies acquire high-resolution samples, which lead to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. Also, there is an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step approach for random accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes, and we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the Chromium open-source engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices. The objects can be compressed on the server and transmitted over the network. The progressive decompression can be performed on the client device and rendered. Multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a better and smoother user experience. The approach can also be used in WebVR for common and widely used activities like virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over the serial implementation), and quality of user experience in the web browser.
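
The two-step scheme described above (partition, then data-parallel compression of sub-meshes) can be sketched roughly as follows. The naive contiguous partition and the zlib call stand in for the actual progressive mesh coder and the WebGL/Chromium decompression path, which are not reproduced here; the mesh data is randomly generated.

```python
# Rough sketch of the two-step scheme described above: partition a mesh into
# sub-meshes, then compress the sub-meshes in parallel. zlib stands in for the
# actual progressive mesh coder; the mesh itself is randomly generated (and
# random data compresses poorly; a real coder exploits geometric coherence).
import zlib
import numpy as np
from multiprocessing import Pool

def compress_submesh(vertices: np.ndarray) -> bytes:
    """Compress one sub-mesh's vertex block (placeholder for a real mesh coder)."""
    return zlib.compress(vertices.astype(np.float32).tobytes(), level=6)

def partition(vertices: np.ndarray, n_parts: int) -> list:
    """Naive partition of the vertex array into contiguous sub-meshes."""
    return np.array_split(vertices, n_parts)

if __name__ == "__main__":
    mesh_vertices = np.random.rand(200_000, 3)           # synthetic high-resolution mesh
    sub_meshes = partition(mesh_vertices, n_parts=8)

    with Pool(processes=4) as pool:                       # data parallelism over sub-meshes
        compressed = pool.map(compress_submesh, sub_meshes)

    original = mesh_vertices.astype(np.float32).nbytes
    packed = sum(len(c) for c in compressed)
    print(f"Compressed size: {100 * packed / original:.1f}% of original")
```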

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 160
653 Study of the Effect of Liquefaction on Buried Pipelines during Earthquakes

Authors: Mohsen Hababalahi, Morteza Bastami

Abstract:

Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during previous earthquakes, and there are many comprehensive reports about such events. One of the main reasons for the impairment of buried pipelines during an earthquake is liquefaction. Necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipeline structures are very different from other structures (being long and having light mass), a comparison of the results of previous earthquakes on pipelines with those on other structures shows that the danger of liquefaction for buried pipelines is not high unless the governing parameters, such as earthquake intensity and loose soil conditions, are severe. Recent liquefaction research for buried pipelines includes experimental and theoretical work as well as damage investigations during actual earthquakes. The damage investigations have revealed that the damage ratio of pipelines (number/km) has much larger values in liquefied ground compared with shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected with manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline with a length of 100 meters and a diameter of 0.8 meters, which is covered by light sandy soil; the depth of burial is 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between soil and pipelines. The results of this study on buried pipelines indicate that the effect of liquefaction is a function of pipe diameter, type of soil, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results indicate that although in this form of the analysis the damage is always associated with a certain pipe material, the nominally defined “failures” are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given in order to decrease the risk of liquefaction damage to buried pipelines.

Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method

Procedia PDF Downloads 501
652 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production

Authors: Mohammad Emamjome Kashan, Alan S. Fung

Abstract:

In Canada, more than 32% of the total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be considered for diurnal solar thermal energy storage. Due to the energy mismatch between the building heating load and solar irradiation availability, relatively big storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy a huge amount of space, and especially in big cities, space is relatively expensive. This project investigates the possibility of using a specific building construction material (ICF – Insulated Concrete Form) as diurnal solar thermal energy storage that is integrated with a heat pump and a microchannel solar thermal (MCST) collector. Not much literature has studied the application of pre-existing building walls as active solar thermal energy storage as a feasible and industrialized solution to the solar thermal mismatch. By using ICF walls that are integrated into the building envelope, instead of big storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the thermal storage benefits of the ICF walls against the system without ICF walls. The heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by the solar-based systems. The proposed system integrates the MCST collector, a water-to-water HP, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle). Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below the ICF wall temperature (discharging process) to increase the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are in contact with the tank to provide the building heating load. DHW is also provided from the main tank. It was found that the system with ICF walls, with an average solar fraction of 82%-88%, can cover the whole heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis for different parameters influencing the solar fraction is discussed in detail.
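
A highly simplified pseudo-controller for the charging/discharging logic described above might look like the sketch below. The temperature thresholds, hysteresis values, and decision names are assumptions made purely for illustration and do not come from the TRNSYS models used in the study.

```python
# Highly simplified sketch of the charge/discharge logic described above.
# Temperatures and thresholds are illustrative assumptions, not the actual
# TRNSYS control strategy used in the study.

def control_step(t_collector: float, t_icf_wall: float, t_preheat_tank: float) -> str:
    """Decide where the solar loop's heat should go for the current time step (deg C inputs)."""
    if t_collector > t_preheat_tank + 5.0:
        # Solar heat available: serve the preheat tank first
        if t_preheat_tank >= 55.0 and t_collector > t_icf_wall + 5.0:
            return "charge_icf_wall"      # tank satisfied, store excess in the ICF wall
        return "charge_preheat_tank"
    if t_preheat_tank < t_icf_wall - 2.0:
        return "discharge_icf_wall"       # recover stored heat to lift the HP source temperature
    return "idle"

# Example evaluation over a few hypothetical states (collector, ICF wall, preheat tank)
for state in [(70.0, 35.0, 40.0), (75.0, 40.0, 56.0), (20.0, 38.0, 30.0)]:
    print(state, "->", control_step(*state))
```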

Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector

Procedia PDF Downloads 112
651 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to compare the CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data regarding 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5000 were head CT examinations, 5000 were chest CT examinations, and 5000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols mentioned above, respectively. Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies currently being performed in South India. This reflects the improved capacity of CT scanners to scan longer scan lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some of the CT scanners have used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners should select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures are optimized, then the dose indices would be optimal, leading to lower CT doses. In the South Indian region, all the CT machines are routinely tested for QA once a year as per AERB requirements.
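
The "mean divergence" figures quoted above correspond to a comparison of displayed and measured CTDIvol values; a minimal sketch of such a comparison is shown below. The per-scan numbers are invented, since the abstract does not list the underlying data, and the percentage formulation is an assumption.

```python
# Minimal sketch of comparing displayed and measured CTDIvol values.
# The numbers are invented; the abstract does not list the per-scanner data.
import numpy as np

displayed_ctdivol = np.array([42.1, 55.3, 13.8, 60.2])   # mGy, as shown on the console
measured_ctdivol  = np.array([40.0, 51.0, 14.6, 57.5])   # mGy, phantom measurements

divergence_pct = 100 * (displayed_ctdivol - measured_ctdivol) / measured_ctdivol
print("Per-scan divergence (%):", np.round(divergence_pct, 1))
print(f"Mean divergence: {divergence_pct.mean():.1f}%")
```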

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 245
650 Modulation of Receptor-Activation Due to Hydrogen Bond Formation

Authors: Sourav Ray, Christoph Stein, Marcus Weber

Abstract:

A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include but are not limited to addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states were adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed, with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite yet more pronounced HB formation trend compared to naloxone. Whereas a marginal number of HBs could be observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation thus plays no role and only a marginal role, respectively, in the interactions of fentanyl and naloxone with the HIS 297 residue of MOR. However, HBs play a significant role in the DAMGO and HIS 297 interaction. Post DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatment of opioid overdose or opioid-induced side effects.
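
Tracking HB occurrence across trajectory frames is often done with a geometric criterion, commonly a donor-acceptor distance below about 3.5 Å and a donor-H···acceptor angle above about 150°. The sketch below applies such a criterion to synthetic coordinates; the cutoffs and the coordinates are assumptions for illustration and are not the analysis pipeline actually used in the study.

```python
# Illustrative geometric hydrogen-bond check across trajectory frames, using a
# common distance/angle criterion (~3.5 A, ~150 deg). Coordinates are synthetic;
# this is not the analysis pipeline actually used in the study.
import numpy as np

def is_hbond(donor, hydrogen, acceptor, d_cut=3.5, angle_cut=150.0) -> bool:
    d_da = np.linalg.norm(acceptor - donor)
    v1, v2 = donor - hydrogen, acceptor - hydrogen
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # D-H...A angle at the hydrogen
    return d_da < d_cut and angle > angle_cut

rng = np.random.default_rng(3)
n_frames, hb_frames = 100, 0
for _ in range(n_frames):
    donor = rng.normal(0.0, 0.2, 3)                        # e.g. a ligand donor atom
    hydrogen = donor + np.array([0.0, 0.0, 1.0])           # bonded hydrogen
    acceptor = donor + np.array([0.0, 0.3, 2.9]) + rng.normal(0, 0.3, 3)  # e.g. a HIS 297 acceptor
    hb_frames += is_hbond(donor, hydrogen, acceptor)

print(f"HB occupancy: {100 * hb_frames / n_frames:.0f}% of frames")
```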

Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation

Procedia PDF Downloads 166
649 An Exploration of the Experiences of Women in Polygamous Marriages: A Case Study of Matizha Village, Masvingo, Zimbabwe

Authors: Flora Takayindisa, Tsoaledi Thobejane, Thizwilondi Mudau

Abstract:

This study highlights what people in polygamous marriages face on a daily basis. It argues that there are more disadvantages for women in polygamous marriages than for their counterparts in monogamous relationships. The study further suggests that the patriarchal power structure seems to play a powerful and effective role in polygamous marriages in our societies, particularly in Zimbabwe, where this study took place. The study explored the intricacies of polygamous marriages and how these power imbalances can be resolved. The research is therefore presented through the ‘lived realities’ of the affected women in polygamous marriages in Gutu District, located in Masvingo Province of Zimbabwe. Polygamous marriages are practised in different societies. Some women living a polygamous lifestyle are emotionally and physically abused in their relationships. Evidence also suggests that children from polygamous marriages suffer psychologically when their fathers take other wives. Relationships within the family are very difficult because of the husband’s seeming favouritism towards one wife. Children are mostly affected by disputes between co-wives, and they often lack quality time with their fathers. There are mixed feelings about polygamous marriages. Some people condemn the practice as inhumane. However, consideration must be given to what it might mean to people who do not have the choice of any other form of marriage. It should also be noted that polygamous marriages are not always negative; some positive things result from them. The study was conducted in a village called Matizha. A qualitative research approach was employed to stimulate awareness of the social, cultural, religious, and economic factors in polygamous marriages. This approach facilitates a unique understanding of the experiences of women in polygamous marriages, both negative and positive. The qualitative research method enabled the respondents to be open-minded when they were asked questions. The researcher utilised feminist theory in the study and employed guided interviews to acquire information from the participants. The study describes the participants who took part, how they were selected, ethical considerations, data collection, the interview process, and the research instruments. The data were obtained using guided interviews with respondents of all ages who are in polygamous marriages. The researcher presented the demographic information of the participants and thereafter presented other aspects of the data collected, such as social factors, economic factors, and religious affiliation. The conclusions and recommendations are drawn from the four main themes that emerged from the discussions. Recommendations are made for the women, for the policies and laws affecting women, and finally for future research. It is believed that the overall objectives of the study have been met and that the research questions have been answered based on the findings discussed.

Keywords: co-wives, egalitarianism, experiences, polyandry, polygamy, woman

Procedia PDF Downloads 245
648 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context

Authors: Andrea Fiorista

Abstract:

The learning of prepositions is quite a problematic aspect of foreign language instruction, and Italian is certainly not an exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other words, prepositions encode concepts such as directionality, the collocation of objects in space and time and, in Cognitive Linguistics’ terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize them differently, implying that they are supposed to operate a recategorization (or create new categories) fitting the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not facilitate learners in carrying out this task, as they tend to provide partial and idiosyncratic descriptions, with the consequence that learners try to memorize them, most of the time without success. In their prototypical meaning, prepositions are used to specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extensive uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors and embodiment represent efficient cognitive tools for a task like this. Indeed, while learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome) is quite straightforward, learning is more complex when a appears in constructions such as verb of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will read), and the causative construction (e.g. Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention, in which a basic cognitive schema is used to help teachers explain, and students understand, the extensive uses of a. The educational material employed translates Cognitive Linguistics’ theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible to learners. Illustrative material, indeed, is supposed to make metalinguistic contents more accessible. Moreover, the concept of embodiment is pedagogically applied through activities involving motion and learners’ bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective both in the short and the long term.

Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL

Procedia PDF Downloads 78
647 Attitudes of Nursing Students Towards Caring Nurse-Patient Interaction

Authors: Şefika Dilek Güven, Gülden Küçükakça

Abstract:

Objective: Learning the process of interaction with patients occurs within the process of nursing education. For this reason, assessing the attitudes of nursing students towards caring nurse-patient interaction is considered to provide an opportunity for questioning and rearranging nursing education programs. Method: This is a descriptive study conducted in order to assess the attitudes of nursing students towards caring nurse-patient interaction. The study was conducted with 318 students who were studying at the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University in the 2015-2016 academic year and agreed to participate in the study. A “Personal Information Form” prepared by the researchers utilizing the literature and the “Caring Nurse-Patient Interaction Scale (CNPIS)”, whose Turkish validity and reliability were established by Atar and Aştı, were used in the study. The Cronbach α coefficient of the CNPIS was found to be 0.973 in this study. Permissions of the institution and participants were obtained before the study was conducted. The significance test of the difference between two means, analysis of variance, and correlation analysis were used to assess the data. Results: The average age of the nursing students participating in the study was 20.72±1.91; 74.8% were female, and 28.0% were fourth-year students. 52.5% of the nursing students stated that they chose the nursing profession willingly, 80.2% did not have difficulty in their interactions with patients, and 84.6% did not have difficulty in their social relationships. The CNPIS total mean score of the nursing students was found to be 295.31±40.95. When the correlation between the total CNPIS mean score of the nursing students and some variables was examined, a significant positive correlation was determined between the ages of the nursing students and the total mean score of the CNPIS (r=0.184, p=0.001). The CNPIS total mean score was found to be higher in female students compared to male students, in third-year students compared to students in other years, in those who chose their profession willingly compared to those who chose it unwillingly, in those not having difficulty in relations with patients compared to those having difficulty, and in those not having difficulty in social relationships compared to those having difficulty. A significant difference was determined between CNPIS total mean scores in terms of year of study and state of having difficulty in social relationships (p<0.005). Conclusion: Nursing students had positive attitudes towards caring nurse-patient interactions; the attitudes of nursing students who were female, were studying in the third year, chose the nursing profession willingly, did not have difficulty in patient relations, and did not have difficulty in social relationships were found to be more positive. In line with these results, it can be recommended to organize activities introducing the nursing profession to young people preparing for university, to use methods that further develop the communication skills of nursing students during their education, to support students in terms of communication skills, and to include activities that will strengthen their social relationships.

Keywords: nurse-patient interaction, nursing student, patient, communication

Procedia PDF Downloads 212
646 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in the field of science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers, and, for several reasons, very few results are available for this significant property. The lack of information on the thermal conductivity of dense and complex liquids at parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in the field of thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for an optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed with special emphasis on its application to transport problems of complex liquids. This work is particularly motivated by the aim of modifying, for the first time, the heat conduction problem into an algorithm based on polynomial velocity and temperature profiles for the investigation of transport properties and their nonlinear behavior in NICDPLs. The aim of the proposed work is to implement a NEMD algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble ≡ NVT). The output steps are taken between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulation data and generally differ from earlier λ0 values by 2%-20%, depending on Γ and κ. It has been shown that the results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results with fast convergence and small size effects over a wide range of plasma states.
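
For orientation, the pair interaction underlying such Yukawa-liquid simulations is commonly written in reduced units through the Coulomb coupling parameter Γ and the screening parameter κ; the short sketch below evaluates this reduced pair potential and the corresponding force. It is a simplified illustration under that standard convention, not the HNEMD code itself, and the parameter values are hypothetical.

    import numpy as np

    def yukawa_potential(x, gamma, kappa):
        """Reduced Yukawa pair potential u(r) / (k_B T).

        x     : pair separation in units of the Wigner-Seitz radius a
        gamma : Coulomb coupling parameter (Gamma)
        kappa : screening parameter (kappa = a / lambda_D)
        """
        return gamma * np.exp(-kappa * x) / x

    def yukawa_force(x, gamma, kappa):
        """Magnitude of the reduced pair force, -du/dx."""
        return gamma * np.exp(-kappa * x) * (1.0 + kappa * x) / x**2

    # Illustrative evaluation over a range of separations (values hypothetical)
    x = np.linspace(0.5, 5.0, 10)
    u = yukawa_potential(x, gamma=100.0, kappa=2.0)
    f = yukawa_force(x, gamma=100.0, kappa=2.0)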

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 262
645 The Relationship between Basic Human Needs and Opportunity Based on Social Progress Index

Authors: Ebru Ozgur Guler, Huseyin Guler, Sera Sanli

Abstract:

The Social Progress Index (SPI), whose foundations were laid at the World Economic Forum, is an index which aims to form a systematic basis for guiding strategies for inclusive growth, which requires achieving both economic and social progress. In this research, the aim has been to determine the relations between the “Basic Human Needs” (BHN) dimension (comprising the four variables ‘Nutrition and Basic Medical Care’, ‘Water and Sanitation’, ‘Shelter’ and ‘Personal Safety’) and the “Opportunity” (OPT) dimension (composed of the ‘Personal Rights’, ‘Personal Freedom and Choice’, ‘Tolerance and Inclusion’, and ‘Access to Advanced Education’ components) of the 2016 SPI for the 138 countries listed on the website of the Social Progress Imperative, by carrying out canonical correlation analysis (CCA), a data reduction technique that operates in a way that maximizes the correlation between two variable sets. In the interpretation of the results, the first pair of canonical variates, which points to the highest canonical correlation, has been taken into account. The first canonical correlation coefficient has been found to be 0.880, indicating a high relationship between the BHN and OPT variable sets. The Wilks’ Lambda statistic has revealed an overall effect of 0.809, which is sufficiently large for the full model to be counted as statistically significant (with a p-value of 0.000). According to the standardized canonical coefficients, the largest contribution to the BHN set of variables has come from the ‘shelter’ variable. The most effective variable in the OPT set has been detected to be ‘access to advanced education’. Findings based on canonical loadings have also confirmed these results with respect to the contributions to the first canonical variates. When the canonical cross loadings (structure coefficients) are examined for the first pair of canonical variates, the largest contributions have been provided by the ‘shelter’ and ‘access to advanced education’ variables. Since the signs of the structure coefficients have been found to be negative for all variables, all variables in the OPT set are positively related to all variables in the BHN set. When the canonical communality coefficients, which are the sums of the squares of the structure coefficients across all interpretable functions, are taken as the basis, the ‘personal rights’ and ‘tolerance and inclusion’ variables can be said not to be useful in the model, with coefficients of 0.318721 and 0.341722, respectively. On the other hand, while the redundancy index for the BHN set has been found to be 0.615, the OPT set has a lower redundancy index of 0.475. High redundancy implies high predictive ability. The proportion of the total variation in the BHN set of variables that is explained by all of the opposite canonical variates has been calculated as 63%, and, finally, the proportion of the total variation in the OPT set that is explained by all of the canonical variates in the BHN set has been determined as 50.4%, a large part of which belongs to the first pair. The results suggest that there is a high and statistically significant relationship between BHN and OPT. This relationship is largely accounted for by ‘shelter’ and ‘access to advanced education’.
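
A minimal sketch of the kind of canonical correlation analysis reported above, assuming scikit-learn's CCA and two stand-in matrices in place of the actual BHN and OPT data (which are not reproduced here); the canonical correlations are recovered as the correlations between paired canonical variates.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)

    # Stand-in data: 138 countries, four BHN components and four OPT components
    n = 138
    bhn = rng.normal(size=(n, 4))   # e.g. nutrition, water, shelter, safety
    opt = bhn @ rng.normal(size=(4, 4)) + rng.normal(size=(n, 4))  # correlated OPT set

    cca = CCA(n_components=4)
    bhn_scores, opt_scores = cca.fit_transform(bhn, opt)

    # Canonical correlations: correlation between each pair of canonical variates
    canon_corr = [np.corrcoef(bhn_scores[:, k], opt_scores[:, k])[0, 1]
                  for k in range(4)]
    print(canon_corr)  # the first value corresponds to the leading canonical pair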

Keywords: canonical communality coefficient, canonical correlation analysis, redundancy index, social progress index

Procedia PDF Downloads 208
644 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing

Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang

Abstract:

Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days largely an anonymous, collective, mass-produced output. Although the industry is very conservative, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new methodologies for design using artificial, rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable ‘right first time’ designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially economic to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts which are well beyond the insight of even the most accomplished traditional design teams. The paper addresses this problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of alternative methods for its use has the potential to significantly reduce the estimated 80% contribution to environmental impact made at the initial design phase. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results are compared to a traditionally designed and manufactured product, employing the Ecochain Mobius product life cycle assessment (LCA) platform. This comparison highlights the benefits of both generative design and robot-based additive manufacturing from the standpoints of environmental impact and manufacturing efficiency. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a re-conceived pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.

Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture

Procedia PDF Downloads 129
643 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking has occurred. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. The matrix is then correlated with the pre-determined stuck-pipe signature for the field in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects relevant features from the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents, predicted in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data, and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and the robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
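
A simplified sketch of the real-time correlation step described above: a rolling window of one aggregated surface-data channel is correlated against a pre-determined stuck-pipe signature, and indices where the score crosses a user-defined threshold are flagged. The window length, threshold, and synthetic data are hypothetical, and the CFS feature selection and WITSML handling of the actual workflow are not reproduced here.

    import numpy as np

    def rolling_signature_correlation(live, signature):
        """Correlate each live-data window with the pre-determined signature.

        live      : 1-D array of one aggregated surface-drilling channel
        signature : 1-D array whose length sets the correlation window
        Returns an array of Pearson correlations, one per window position.
        """
        w = len(signature)
        scores = []
        for start in range(len(live) - w + 1):
            window = live[start:start + w]
            scores.append(np.corrcoef(window, signature)[0, 1])
        return np.array(scores)

    def alert_indices(scores, threshold=0.8):
        """Window positions where the score exceeds a user-defined threshold (hypothetical value)."""
        return np.where(scores > threshold)[0]

    # Hypothetical usage with synthetic data
    rng = np.random.default_rng(1)
    signature = np.sin(np.linspace(0, np.pi, 50))   # stand-in stuck-pipe signature
    live = rng.normal(scale=0.3, size=500)
    live[300:350] += signature                      # embed the pattern in the stream
    scores = rolling_signature_correlation(live, signature)
    print(alert_indices(scores))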

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 208
642 From Mimetic to Mnemonic: On the Simultaneous Rise of Language and Religion

Authors: Dmitry Usenco

Abstract:

The greatest paradox about the origin of language is the fact that, while language is always taught by adults to children, it can never be learnt properly unless its acquisition occurs during childhood. The question that naturally arises in that respect is as follows: How could language be taught for the first time by a non-speaker, i.e., by someone who did not have the opportunity to master it as a child? Yet the above paradox appears less unresolvable if we hypothesise that language was originally introduced not as a means of communication but as a relatively modest training/playing technique used to develop the learners’ mimetic skills. Its communicative and expressive properties could have been discovered and exploited later, upon the learners’ reaching adulthood. The importance of mimesis in children’s development is universally recognised. The most common forms of it are onomatopoeia and mime, which consist in reproducing sounds and imitating the shapes/movements of externally observed objects. However, in some cases, neither of these exercises is adequate to the task. An object, especially an inanimate one, may emit no characteristic sounds, making onomatopoeia problematic. In other cases, it may have no easily reproducible shape, while its movements may depend on the specific way of our interacting with it. On such occasions, onomatopoeia and mime can perhaps be supplemented, or even replaced, by movements of the tongue which can metonymically represent certain aspects of our interaction with the object. This is especially evident with consonants: e.g., a fricative sound can designate the subject’s relatively slow approach to the object or vice versa, while a plosive one can express the relatively abrupt process of grabbing/sticking or parrying/bouncing. From that point of view, a protoword can be regarded as a sophisticated gesture of the tongue but also as a mnemonic sequence that contains encoded instructions about the way to handle the object. When this originally subjective link between the object and its mimetic/mnemonic representation eventually installs itself in the collective mind (however small the community might be at first), the initially nameless object acquires a name, and the first word is created. (Discussing the difference between proper and common names is beyond the scope of this paper.) At its very beginning, this word has two major applications. It can be used for interhuman communication because it allows us to invoke the presence of a currently absent object. It can also be used for designing, expressing, and memorising our interaction with the object itself. The first usage gives rise to language, the second to religion. By the act of naming, we attach to the object a mental (‘spiritual’) dimension which has an independent existence in our collective mind. By referring to the name (idea/demon/soul) of the object, we perform our first act of spirituality, our first religious observance. This is the beginning of animism, arguably the most ancient form of religion. To conclude: the rise of religion is simultaneous with the emergence of language in human evolution.

Keywords: language, religion, origin, acquisition, childhood, adulthood, play, representation, onomatopoeia, mime, gesture, consonant, simultaneity, spirituality, animism

Procedia PDF Downloads 64
641 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced, non-standard fire curves. In Europe, the standard for the fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, a residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. The purpose of the presented study is therefore to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a larger number of parametric fire curves. The zero strength layer and charring rates are determined on the basis of numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, material non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. The reduced cross-section method is thus substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
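
For reference, a minimal sketch of the reduced cross-section bookkeeping that the study re-examines, assuming the familiar EN 1995-1-2 form in which the notional char depth grows linearly with time and the effective depth adds the zero strength layer d0 (7 mm under standard fire, the value whose transfer to parametric fires is questioned above). The charring rate, the assumed build-up of the zero strength layer over roughly the first 20 minutes, and the section sizes below are illustrative assumptions, not results of the study.

    def effective_section(b, h, t, beta_n=0.7, d0=7.0, sides=3):
        """Residual (effective) cross-section after fire exposure, in mm.

        b, h   : initial breadth and depth of the beam [mm]
        t      : fire exposure time [min]
        beta_n : notional charring rate [mm/min] (0.7 is an illustrative value)
        d0     : zero strength layer thickness [mm]; 7 mm is the standard-fire
                 value whose use under parametric fire is questioned in the study
        sides  : number of fire-exposed sides (3 for a typical floor beam)
        """
        d_char = beta_n * t                 # notional char depth
        k0 = min(t / 20.0, 1.0)             # assumed build-up of the zero strength layer
        d_ef = d_char + k0 * d0             # effective depth of the non-load-bearing layer
        b_ef = b - 2 * d_ef if sides >= 3 else b - d_ef
        h_ef = h - d_ef                     # reduction on the exposed bottom face
        return b_ef, h_ef

    # Illustrative evaluation: 100 x 200 mm beam after 30 min of exposure
    print(effective_section(b=100.0, h=200.0, t=30.0))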

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 155
640 Environmental and Formal Conditions for the Development of Blue-green Infrastructure (BGI) in the Cities of Central Europe on the Example of Poland

Authors: Magdalena Biela, Marta Weber-Siwirska, Edyta Sierka

Abstract:

The current trend observed in Central European countries, as in other regions of the world, is for people to migrate to cities. As a result, the urban population is expected to reach 70% of the total by 2050. Due to this tendency, as well as high real estate prices and the limited reserves of city green areas, the greenery and agricultural soil adjacent to cities are to be devoted to housing projects, while city centres are expected to undergo partial depopulation. Urban heat islands and phenomena such as torrential rains may cause serious damage. They may even endanger the very life and health of the inhabitants. Due to these tangible effects of climate change, residents expect local government to take action to develop green infrastructure (GI). The main purpose of our research has been to assess the degree of readiness on the part of local government in Poland to develop BGI. A questionnaire using the CAWI method was prepared, and a survey was carried out. The target group were town hall employees in all 380 powiat cities and towns (380 county centres) in Poland. The form contained 14 questions covering, among others, actions taken to support the development of GI and ways of motivating residents to take such actions. 224 respondents replied to the questions. The results of the research show that 52% of the cities/towns have taken or intend to take measures to favour the development of green spaces. Currently, the installation of green roofs and living walls is carried out by only 6 Polish cities, and a few more are at the stage of preparing appropriate regulations. The problem of rainwater retention is much more widespread: among the municipalities declaring any activities for the benefit of GI, approximately 42% have decided to work on this problem. Over 19% of the respondents are planning an increase in the surface occupied by green areas, 14% the installation of green roofs, and 12% the redevelopment of city greenery. It is optimistic that 67% of the respondents are willing to acquire knowledge about BGI by taking part in educational activities at both the national and international levels. There are many ways to help GI development. The most common type of support in the cities and towns surveyed is co-financing (35%), followed by full financing of projects (11%). About 15% of the cities declare only advisory support. Thus, the problem of GI in Central European cities is at the stage of initial development and requires advanced measures and the implementation of proven solutions already applied in other European and world countries using the concept of Nature-based Solutions.

Keywords: city/town, blue-green infrastructure, green roofs, climate change adaptation

Procedia PDF Downloads 196
639 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV and utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student’s t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three which are two-component mixture models. The mean stutter ratio in each model is modelled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
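
As a point of reference for the quantities compared above, the sketch below computes the lppd, WAIC, and the plain importance-sampling LOO approximation from a matrix of pointwise log-likelihoods (posterior draws in rows, observations in columns). It is a generic illustration of the standard formulas applied to simulated draws, not the stutter models of the study, and the truncation and Pareto-smoothing steps of TIS-LOO and PSIS-LOO are omitted.

    import numpy as np
    from scipy.special import logsumexp

    def waic_and_is_loo(loglik):
        """loglik: array of shape (S draws, N observations) of pointwise log-likelihoods."""
        S = loglik.shape[0]

        # lppd: log pointwise predictive density, summed over observations
        lppd_i = logsumexp(loglik, axis=0) - np.log(S)
        lppd = lppd_i.sum()

        # WAIC: penalise with the posterior variance of the pointwise log-likelihood
        p_waic = loglik.var(axis=0, ddof=1).sum()
        waic = -2.0 * (lppd - p_waic)

        # IS-LOO: raw importance weights are the reciprocal predictive densities
        elpd_loo_i = -(logsumexp(-loglik, axis=0) - np.log(S))
        elpd_loo = elpd_loo_i.sum()

        return {"lppd": lppd, "waic": waic, "elpd_is_loo": elpd_loo}

    # Hypothetical usage with simulated draws
    rng = np.random.default_rng(2)
    fake_loglik = rng.normal(loc=-1.0, scale=0.1, size=(4000, 50))
    print(waic_and_is_loo(fake_loglik))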

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 380
638 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant to the discourse around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisitional process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially the processing component of WM, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked their preference for formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants were asked to rank the options in order of how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants’ syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and the preference for formulaic expressions in the preference ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores covary significantly. However, WM and PSTM have different predictive values for participants’ preference for formulaic language: both the storage and processing components of WM are significantly correlated with the preference for formulaic expressions, while PSTM is not. These findings favour the role of interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to, and processing of, the input beyond simple retention play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrase with the lexical core of the formula over the ones without the lexical core, attesting to learners’ awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by learners.

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 48
637 Honneth, Feenberg, and the Redemption of Critical Theory of Technology

Authors: David Schafer

Abstract:

Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse’s work has been regarded as outdated, a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse’s view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among a few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called ‘technical elements’ that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or ‘technical code’ built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, those perspectives on technology are reified which consider technology only in terms of its technical elements to the neglect of its technical codes. Nevertheless, Feenberg’s account fails to explain what is normatively problematic with such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely wants to be doing that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas’s version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg’s own legitimate critiques of Habermas’s portrayals of technology as reified or ‘norm-free.’ This paper argues that a better foundation may be found in Axel Honneth’s recent text, Freedom’s Right (Honneth, 2014). Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas’s systems-theoretic approach. On this ‘normative functionalist’ account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls ‘social freedom.’ Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth’s social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg’s work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth’s dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg’s sociological account of technology than Habermas’s systems theory.

Keywords: Habermas, Honneth, technology, Feenberg

Procedia PDF Downloads 185